url (stringlengths 61 to 61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 75 to 75) | comments_url (stringlengths 70 to 70) | events_url (stringlengths 68 to 68) | html_url (stringlengths 49 to 51) | id (int64, 1.14B to 1.87B) | node_id (stringlengths 18 to 19) | number (int64, 3.74k to 6.19k) | title (stringlengths 1 to 290) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 2 to 33.9k, nullable ⌀) | reactions (dict) | timeline_url (stringlengths 70 to 70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5777/comments | https://api.github.com/repos/huggingface/datasets/issues/5777/events | https://github.com/huggingface/datasets/issues/5777 | 1,677,655,969 | I_kwDODunzps5j_v-h | 5,777 | datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory | {
"login": "jason-brian-anderson",
"id": 34688597,
"node_id": "MDQ6VXNlcjM0Njg4NTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jason-brian-anderson",
"html_url": "https://github.com/jason-brian-anderson",
"followers_url": "https://api.github.com/users/jason-brian-anderson/followers",
"following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}",
"gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions",
"organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs",
"repos_url": "https://api.github.com/users/jason-brian-anderson/repos",
"events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}",
"received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")",
"Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu",
"cc: @julianeagu",
"This issue is fixed because we are hosting the CodeSearchNet data files in the Hugging Face Hub. See: https://huggingface.co/datasets/code_search_net/discussions/7",
"I learned that @mallamanis has uploaded the dataset [here as well](https://zenodo.org/record/7908468) ",
"Thanks @hamelsmu for the Zenodo link. I am adding it to the dataset card on the Hugging Face Hub, so that the community knows about this \"official\" source data. I guess this link is not well known, because some community members already hosted an \"unofficial\" version of the data on Zenodo as well: https://zenodo.org/record/7857872\r\n\r\n"
] | 2023-04-21T02:08:07 | 2023-06-05T05:49:52 | 2023-05-11T11:51:56 | NONE | null | ### Describe the bug
While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I noticed an error while initially downloading the Python dataset used in the examples.
The [Colab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S)
```
from datasets import load_dataset
import os
os.environ["HF_DATASETS_CACHE"] = "/workspace"
# This can take a few minutes to load, so grab a coffee or tea while you wait!
raw_datasets = load_dataset("code_search_net", "python")
```
yields:
```
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token)
522 main_hop, *rest_hops = _as_str(path).split("::")
523 if is_local_path(main_hop):
--> 524 return os.listdir(path)
525 else:
526 # globbing inside a zip in a private repo requires authentication
527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")):
NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train'
```
I was able to reproduce this error both in the Colab and on my own pytorch/pytorch container pulled from the official Docker Hub PyTorch image, so I think it may be a server-side thing.
### Steps to reproduce the bug
Steps to reproduce the issue:
1. run `raw_datasets = load_dataset("code_search_net", "python")`
### Expected behavior
Expect the code not to raise an exception during the dataset pull.
### Environment info
I tried both the default HF_DATASETS_CACHE on Colab and on my local container. I then pointed HF_DATASETS_CACHE to a large-capacity local storage location, and the problem was consistent across all 3 scenarios. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5777/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5776/comments | https://api.github.com/repos/huggingface/datasets/issues/5776/events | https://github.com/huggingface/datasets/issues/5776 | 1,677,116,100 | I_kwDODunzps5j9sLE | 5,776 | Use Pandas' `read_json` in the JSON builder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-20T17:15:49 | 2023-04-20T17:15:49 | null | CONTRIBUTOR | null | Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725).
In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios where Pandas 2.0 is not installed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5776/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5775/comments | https://api.github.com/repos/huggingface/datasets/issues/5775/events | https://github.com/huggingface/datasets/issues/5775 | 1,677,089,901 | I_kwDODunzps5j9lxt | 5,775 | ArrowDataset.save_to_disk lost some logic of remote | {
"login": "Zoupers",
"id": 29817738,
"node_id": "MDQ6VXNlcjI5ODE3NzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zoupers",
"html_url": "https://github.com/Zoupers",
"followers_url": "https://api.github.com/users/Zoupers/followers",
"following_url": "https://api.github.com/users/Zoupers/following{/other_user}",
"gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions",
"organizations_url": "https://api.github.com/users/Zoupers/orgs",
"repos_url": "https://api.github.com/users/Zoupers/repos",
"events_url": "https://api.github.com/users/Zoupers/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zoupers/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We just fixed this on `main` and will do a new release soon :)"
] | 2023-04-20T16:58:01 | 2023-04-26T12:11:36 | 2023-04-26T12:11:17 | NONE | null | ### Describe the bug
https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371
Here is the bug point: when I want to save from a `DatasetDict` class and the items of the instance are like `[('train', Dataset({features: ..., num_rows: ...}))]`, there is no guarantee that a directory named `train` exists under `dataset_dict_path`.
### Steps to reproduce the bug
1. Mock a `DatasetDict` with items like those described above.
2. Use `save_to_disk` with `storage_options`; you can use local SFTP. The code may look like the following:
```python
from datasets import load_dataset
dataset = load_dataset(...)
dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'})
```
I suppose you can reproduce the bug by following these steps.
### Expected behavior
It should create the folder if it does not exist, just like we do locally.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5775/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5774/comments | https://api.github.com/repos/huggingface/datasets/issues/5774/events | https://github.com/huggingface/datasets/pull/5774 | 1,676,716,662 | PR_kwDODunzps5OxIMe | 5,774 | Fix style | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 
(0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d34c7968ea1a3fe7d4fa7cdf23673e0354f69ac \"CML watermark\")\n"
] | 2023-04-20T13:21:32 | 2023-04-20T13:34:26 | 2023-04-20T13:24:28 | MEMBER | null | Fix C419 issues | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5774/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5774",
"html_url": "https://github.com/huggingface/datasets/pull/5774",
"diff_url": "https://github.com/huggingface/datasets/pull/5774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5774.patch",
"merged_at": "2023-04-20T13:24:28"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5773/comments | https://api.github.com/repos/huggingface/datasets/issues/5773/events | https://github.com/huggingface/datasets/issues/5773 | 1,675,984,633 | I_kwDODunzps5j5X75 | 5,773 | train_dataset does not implement __len__ | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?",
"this is a detail error info from transformers:\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 177, in <module>\r\n whisper_finetune(traindir,devdir,outdir)\r\n File \"finetune.py\", line 161, in whisper_finetune\r\n trainer = Seq2SeqTrainer(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer_seq2seq.py\", line 56, in __init__\r\n super().__init__(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py\", line 567, in __init__\r\n raise ValueError(\r\nValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.\r\n```\r\n",
"How did you create `train_dataset`? The `datasets` library does not appear in your stack trace.\r\n\r\nWe need more information in order to reproduce the issue...",
"```\r\ndef asr_dataset(traindir,devdir):\r\n we_voice = IterableDatasetDict()\r\n #we_voice[\"train\"] = load_from_disk(traindir,streaming=True)\r\n #we_voice[\"test\"]= load_from_disk(devdir,streaming=True)\r\n we_voice[\"train\"] = load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\",streaming=True)\r\n #print(load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\"))\r\n we_voice[\"test\"] = load_dataset(\"csv\",data_files=os.path.join(devdir,\"dev.csv\"), split=\"train\",streaming=True)\r\n we_voice = we_voice.remove_columns([\"id\"])\r\n we_voice = we_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n return we_voice\r\n\r\n```",
"As you are using iterable datasets (`streaming=True`), their length is not defined.\r\n\r\nYou should:\r\n- Either use non-iterable datasets, which have a defined length: use `DatasetDict` and not passing `streaming=True`\r\n- Or pass `args.max_steps` to the `Trainer`",
"I don't know how to give a reasonable args.max_steps...........................",
"Then you should not use streaming.",
"@albertvillanova I think @v-yunbin, myself, and others might be slightly confused about max_steps and how it differs from num_train_epochs.",
"@lkurlandski A **step** is referring to optimizer's update after back propagation, and it's associated with a batch of data. For example, if a dataset contains 64 examples and you have an overall batch size of 4, then an epoch will have 64/4=16 batches. Therefore, in one epoch, you will have 16 optimizer **steps**."
] | 2023-04-20T04:37:05 | 2023-07-19T20:33:13 | null | NONE | null | When training with data processed by the datasets library, I get the following warning, and it means I cannot set the number of epochs:
`ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5773/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5772/comments | https://api.github.com/repos/huggingface/datasets/issues/5772/events | https://github.com/huggingface/datasets/pull/5772 | 1,675,033,510 | PR_kwDODunzps5OreXV | 5,772 | Fix JSON builder when missing keys in first row | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009262 / 0.011353 (-0.002091) | 0.006157 / 0.011008 (-0.004851) | 0.125960 / 0.038508 (0.087451) | 0.036213 / 0.023109 (0.013104) | 0.399331 / 0.275898 (0.123433) | 0.453597 / 0.323480 (0.130117) | 0.006990 / 0.007986 (-0.000995) | 0.007320 / 0.004328 (0.002991) | 0.100321 / 0.004250 (0.096070) | 0.048870 / 0.037052 (0.011818) | 0.396284 / 0.258489 (0.137795) | 0.475619 / 0.293841 (0.181778) | 0.052329 / 0.128546 (-0.076217) | 0.019564 / 0.075646 (-0.056083) | 0.430942 / 0.419271 (0.011670) | 0.063224 / 0.043533 (0.019692) | 0.391717 / 0.255139 (0.136578) | 0.448342 / 0.283200 (0.165142) | 0.114055 / 0.141683 (-0.027628) | 1.793204 / 1.452155 (0.341049) | 1.895151 / 1.492716 (0.402435) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.283699 / 0.018006 (0.265693) | 0.597194 / 0.000490 (0.596704) | 0.007143 / 0.000200 (0.006944) | 0.000602 / 0.000054 (0.000548) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034761 / 0.037411 (-0.002651) | 0.124555 / 0.014526 (0.110030) | 0.149126 / 0.176557 (-0.027430) | 0.220335 / 0.737135 (-0.516801) | 0.153109 / 0.296338 (-0.143229) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.620210 / 0.215209 (0.405001) | 6.229937 / 2.077655 (4.152282) | 2.615203 / 1.504120 (1.111083) | 2.239337 / 1.541195 (0.698143) | 2.262138 / 1.468490 
(0.793648) | 1.196498 / 4.584777 (-3.388279) | 5.609932 / 3.745712 (1.864220) | 3.031347 / 5.269862 (-2.238515) | 2.025530 / 4.565676 (-2.540146) | 0.139828 / 0.424275 (-0.284447) | 0.015476 / 0.007607 (0.007869) | 0.768964 / 0.226044 (0.542920) | 7.728677 / 2.268929 (5.459748) | 3.336407 / 55.444624 (-52.108217) | 2.700055 / 6.876477 (-4.176422) | 2.765223 / 2.142072 (0.623151) | 1.409073 / 4.805227 (-3.396155) | 0.246849 / 6.500664 (-6.253815) | 0.081231 / 0.075469 (0.005762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.593836 / 1.841788 (-0.247952) | 18.020525 / 8.074308 (9.946216) | 21.766822 / 10.191392 (11.575430) | 0.258615 / 0.680424 (-0.421809) | 0.026895 / 0.534201 (-0.507306) | 0.529823 / 0.579283 (-0.049460) | 0.623470 / 0.434364 (0.189106) | 0.628171 / 0.540337 (0.087833) | 0.745249 / 1.386936 (-0.641687) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008624 / 0.011353 (-0.002729) | 0.006317 / 0.011008 (-0.004691) | 0.097315 / 0.038508 (0.058807) | 0.035217 / 0.023109 (0.012108) | 0.440197 / 0.275898 (0.164299) | 0.473863 / 0.323480 (0.150383) | 0.006722 / 0.007986 (-0.001264) | 0.006444 / 0.004328 (0.002116) | 0.102056 / 0.004250 (0.097806) | 0.047142 / 0.037052 (0.010089) | 0.452476 / 0.258489 (0.193986) | 0.487619 / 0.293841 (0.193778) | 0.052456 / 0.128546 (-0.076090) | 0.018735 / 0.075646 (-0.056911) | 0.114656 / 0.419271 (-0.304616) | 0.062577 / 0.043533 (0.019044) | 0.444471 / 0.255139 (0.189332) | 0.494264 / 0.283200 (0.211065) | 0.117112 / 0.141683 (-0.024571) | 1.848965 / 1.452155 (0.396810) | 1.984008 / 1.492716 (0.491292) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290494 / 0.018006 (0.272488) | 0.588415 / 0.000490 (0.587925) | 0.000459 / 0.000200 (0.000259) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032873 / 0.037411 (-0.004538) | 0.131139 / 0.014526 (0.116614) | 0.140268 / 0.176557 (-0.036289) | 0.204561 / 0.737135 (-0.532574) | 0.147443 / 0.296338 (-0.148895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.636899 / 0.215209 (0.421690) | 6.236139 / 2.077655 (4.158484) | 2.801468 / 1.504120 (1.297348) | 2.398808 / 1.541195 (0.857613) | 2.493150 / 1.468490 (1.024659) | 1.228845 / 4.584777 (-3.355932) | 5.675874 / 3.745712 (1.930162) | 3.084939 / 5.269862 (-2.184922) | 2.061310 / 4.565676 (-2.504367) | 0.142285 / 0.424275 (-0.281990) | 0.014972 / 0.007607 (0.007365) | 0.786599 / 0.226044 (0.560555) | 7.876036 / 2.268929 (5.607107) | 3.476136 / 55.444624 (-51.968489) | 2.847922 / 6.876477 (-4.028555) | 3.040326 / 2.142072 (0.898253) | 1.448538 / 4.805227 (-3.356690) | 0.257230 / 6.500664 (-6.243434) | 0.085137 / 0.075469 (0.009668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.668173 / 1.841788 (-0.173615) | 18.668520 / 8.074308 (10.594212) | 20.535542 / 10.191392 (10.344150) | 0.244580 / 0.680424 (-0.435844) | 0.026364 / 0.534201 (-0.507837) | 0.531753 / 0.579283 (-0.047530) | 0.616578 / 0.434364 (0.182214) | 0.618906 / 0.540337 (0.078569) | 0.738785 / 1.386936 (-0.648151) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7265cafa3103d77d6d52aa897088faefcd96659 \"CML watermark\")\n"
] | 2023-04-19T14:32:57 | 2023-04-21T06:45:13 | 2023-04-21T06:35:27 | MEMBER | null | Until now, the JSON builder only considered the keys present in the first element of the list:
- Either explicitly: by passing index 0 in `dataset[0].keys()`
- Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values"
This PR fixes the bug by considering the union of the keys present in all the rows.
Fix #5726. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5772/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"merged_at": "2023-04-21T06:35:27"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5771/comments | https://api.github.com/repos/huggingface/datasets/issues/5771/events | https://github.com/huggingface/datasets/issues/5771 | 1,674,828,380 | I_kwDODunzps5j09pc | 5,771 | Support cloud storage for loading datasets | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"A duplicate of https://github.com/huggingface/datasets/issues/5281"
] | 2023-04-19T12:43:53 | 2023-05-07T17:47:41 | 2023-05-07T17:47:41 | CONTRIBUTOR | null | ### Feature request
It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`.
### Motivation
Motivation is pretty clear -- let users work with datasets located in the cloud.
### Your contribution
I can help implement this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5771/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5770/comments | https://api.github.com/repos/huggingface/datasets/issues/5770/events | https://github.com/huggingface/datasets/pull/5770 | 1,673,581,555 | PR_kwDODunzps5OmntV | 5,770 | Add IterableDataset.from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)",
"Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark",
"Thanks Quentin! I'll flesh out the docs in a follow-up PR",
"Friendly ping @lhoestq ",
"Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 
(0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7790ebd7072eafff755fb575b392f3efa74069e4 \"CML watermark\")\n"
] | 2023-04-18T17:47:53 | 2023-05-17T14:07:32 | 2023-05-17T14:00:38 | CONTRIBUTOR | null | Follow-up from https://github.com/huggingface/datasets/pull/5701
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5770/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"merged_at": "2023-05-17T14:00:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5769/comments | https://api.github.com/repos/huggingface/datasets/issues/5769/events | https://github.com/huggingface/datasets/issues/5769 | 1,673,441,182 | I_kwDODunzps5jvq-e | 5,769 | Tiktoken tokenizers are not pickable | {
"login": "markovalexander",
"id": 22663468,
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markovalexander",
"html_url": "https://github.com/markovalexander",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?"
] | 2023-04-18T16:07:40 | 2023-05-04T18:55:57 | 2023-05-04T18:55:57 | NONE | null | ### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not carry over to dataset processing, and it raises `TypeError: cannot pickle 'builtins.CoreBPE' object`.
### Steps to reproduce the bug
```
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")

# define the mapped function before passing it to dataset.map
def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```
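One possible workaround sketch (an untested assumption on my side, not a confirmed fix): build the encoding inside the mapped function so the `CoreBPE` object is never part of the state that gets pickled for the worker processes:

```python
from datasets import load_dataset
import tiktoken

dataset = load_dataset("stas/openwebtext-10k")

def process(example):
    # assumption: creating the encoding here, instead of capturing a module-level one,
    # keeps the un-picklable CoreBPE object out of what gets sent to the workers
    enc = tiktoken.get_encoding("gpt2")
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    return {'ids': ids, 'len': len(ids)}

tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```

This trades speed for picklability, since the encoding is rebuilt on every call.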
### Expected behavior
The dataset starts being processed instead of raising the pickling error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5769/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5768/comments | https://api.github.com/repos/huggingface/datasets/issues/5768/events | https://github.com/huggingface/datasets/issues/5768 | 1,672,494,561 | I_kwDODunzps5jsD3h | 5,768 | load_dataset("squad") doesn't work in 2.7.1 and 2.10.1 | {
"login": "yaseen157",
"id": 57412770,
"node_id": "MDQ6VXNlcjU3NDEyNzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaseen157",
"html_url": "https://github.com/yaseen157",
"followers_url": "https://api.github.com/users/yaseen157/followers",
"following_url": "https://api.github.com/users/yaseen157/following{/other_user}",
"gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions",
"organizations_url": "https://api.github.com/users/yaseen157/orgs",
"repos_url": "https://api.github.com/users/yaseen157/repos",
"events_url": "https://api.github.com/users/yaseen157/events{/privacy}",
"received_events_url": "https://api.github.com/users/yaseen157/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?",
"I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```",
"I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. 
Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|███████████████████████████████████████████\r\n█████████████████████████████████████████████| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|███████████████████████████████████████\r\n███████████████████████████████████████████████████| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?",
"I'm back on linux machine 1 (login node) now. After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n",
"I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```",
"Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/",
"Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?",
"Thanks for your detailed feedback which for sure will be useful to other community members."
] | 2023-04-18T07:10:56 | 2023-04-20T10:27:23 | 2023-04-20T10:27:22 | NONE | null | ### Describe the bug
There is an issue that seems to be unique to the "squad" dataset: it cannot be loaded using the standard methods. The issue is most quickly reproduced from the command line, using the HF examples that verify a dataset loads properly.
This is not a problem with the "squad_v2" dataset, for example.
### Steps to reproduce the bug
cmd line
> $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])"
OR
Python IDE
> from datasets import load_dataset
> load_dataset("squad")
### Expected behavior
I expected either to see the output described in the installation docs ([https://huggingface.co/docs/datasets/installation]) from running the very same command on the command line, or at least some output that does not raise Python's `TypeError`.
There is some funky behaviour in the dataset builder portion of the codebase that means it is either trying to import the squad dataset from an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching, I did manage to get it to load the dataset once, but I couldn't repeat this.
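A couple of debugging sketches that might help isolate whether the cache is at fault (both are guesses, not confirmed fixes):

```python
from datasets import load_dataset

# assumption: a stale or corrupted cache entry is the culprit, so force a fresh download
ds = load_dataset("squad", download_mode="force_redownload")

# pinning the loading script revision, as also suggested in this thread
ds = load_dataset("squad", revision="5fe18c4c680f9922d794e3f4dd673a751c74ee37")
```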
### Environment info
datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5768/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5767/comments | https://api.github.com/repos/huggingface/datasets/issues/5767/events | https://github.com/huggingface/datasets/issues/5767 | 1,672,433,979 | I_kwDODunzps5jr1E7 | 5,767 | How to use Distill-BERT with different datasets? | {
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this one in favor of the same issue opened in the `transformers` repo."
] | 2023-04-18T06:25:12 | 2023-04-20T16:52:05 | 2023-04-20T16:52:05 | NONE | null | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use DistilBERT (which is pre-trained with the IMDB dataset) with a different dataset (e.g. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
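For context, a minimal sketch of what I have in mind (assuming the target dataset exposes a `text` column, as `yhavinga/imdb_dutch` does, and using the standard `transformers`/`datasets` APIs):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # same tokenization rules as the checkpoint
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("yhavinga/imdb_dutch")

def tokenize(examples):
    return tokenizer(examples["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)
```

The tokenized dataset could then be passed to a `Trainer` or a custom training loop.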
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5767/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5766/comments | https://api.github.com/repos/huggingface/datasets/issues/5766/events | https://github.com/huggingface/datasets/issues/5766 | 1,671,485,882 | I_kwDODunzps5joNm6 | 5,766 | Support custom feature types | {
"login": "jmontalt",
"id": 37540982,
"node_id": "MDQ6VXNlcjM3NTQwOTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/37540982?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jmontalt",
"html_url": "https://github.com/jmontalt",
"followers_url": "https://api.github.com/users/jmontalt/followers",
"following_url": "https://api.github.com/users/jmontalt/following{/other_user}",
"gists_url": "https://api.github.com/users/jmontalt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jmontalt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmontalt/subscriptions",
"organizations_url": "https://api.github.com/users/jmontalt/orgs",
"repos_url": "https://api.github.com/users/jmontalt/repos",
"events_url": "https://api.github.com/users/jmontalt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jmontalt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed",
"An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing something similar on our side could be useful, too. However, this requires defining a common interface for the existing feature types before discussing the API/workflow for defining/sharing custom feature types, and this could take some time.\r\n\r\nIt would also be nice if the datasets viewer could render these custom types.",
"Thank you for your replies! @lhoestq I have a use case involving whole-slide images in digital pathology. These are very large images (potentially gigapixel scale), so standard image tools are not suitable. Essentially, encoding/decoding can be done from/to [`OpenSlide`](https://openslide.org/api/python/) objects. Though there may be interest in this use case from the digital pathology community, it may not be sufficiently useful to suggest adding the feature type, but there will likely be many other use cases for a generic custom feature type.\r\n\r\nThank you for pointing out `set_transform`! I will make sure to keep this in mind in the future.\r\n\r\n@mariosasko An \"extension API\" sounds like a good idea, though I understand that this needs to be properly defined, and that you will need to discuss it internally. Support from the viewer would be awesome, too, though the generalization to arbitrary types sounds challenging.\r\n\r\nFor now, happy to know that you're considering the feature. Feel free to let me know if I can do anything to support the process."
] | 2023-04-17T15:46:41 | 2023-05-03T21:58:43 | null | NONE | null | ### Feature request
I think it would be nice to allow registering custom feature types with the 🤗 Datasets library. For example, it would allow doing something along the following lines:
```
from datasets.features import register_feature_type  # this would be a new function

@register_feature_type
class CustomFeatureType:
    def encode_example(self, value):
        """User-provided logic to encode an example of this feature."""
        pass

    def decode_example(self, value, token_per_repo_id=None):
        """User-provided logic to decode an example of this feature."""
        pass
```
### Motivation
Users of 🤗 Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in 🤗 Datasets.
At the moment, this is only possible by monkey-patching 🤗 Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided.
### Your contribution
I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update.
https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329
I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`.
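A rough sketch of the registry, decorator, and base class described above (the names `_FEATURE_TYPES`, `register_feature_type`, and `CustomFeature` are placeholders for illustration, not existing `datasets` internals):

```python
from abc import ABC, abstractmethod

_FEATURE_TYPES = {}  # would replace the current globals() lookup: maps type name -> feature class

def register_feature_type(cls):
    """Decorator that adds a user-defined feature type to the explicit registry."""
    _FEATURE_TYPES[cls.__name__] = cls
    return cls

class CustomFeature(ABC):
    """Abstract base class that user-defined feature types would inherit from."""

    @abstractmethod
    def encode_example(self, value):
        ...

    @abstractmethod
    def decode_example(self, value, token_per_repo_id=None):
        ...
```

`encode_nested_example`/`decode_nested_example` would then dispatch through `_FEATURE_TYPES` instead of `globals()`.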
The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5766/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5765/comments | https://api.github.com/repos/huggingface/datasets/issues/5765/events | https://github.com/huggingface/datasets/issues/5765 | 1,671,388,824 | I_kwDODunzps5jn16Y | 5,765 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] | {
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n",
"Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"client_2.py\", line 138, in <module>\r\n main()\r\n File \"client_2.py\", line 134, in main\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 208, in start_numpy_client\r\n start_client(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 142, in start_client\r\n client_message, sleep_duration, keep_going = handle(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 68, in handle\r\n return _fit(client, server_msg.fit_ins), 0, True\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 157, in _fit\r\n fit_res = client.fit(fit_ins)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 252, in _fit\r\n results = self.numpy_client.fit(parameters, ins.config) # type: ignore\r\n File \"client_2.py\", line 124, in fit\r\n train(net, trainloader, epochs=1)\r\n File \"client_2.py\", line 78, in train\r\n for batch in trainloader:\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 652, in __next__\r\n data = self._next_data()\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 692, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1525, in __getitem__\r\n return self._getitem(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1517, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 373, in query_table\r\n pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 55, in _query_table_with_indices_mapping\r\n return _query_table(table, key)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 79, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\nZeroDivisionError: integer division or modulo by zero\r\n```\r\n\r\nThis is my code:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n#from transformers import tokenized_datasets\r\n\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n# DEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else 
\"cpu\")\r\n\r\nDEVICE = \"cpu\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"yhavinga/imdb_dutch\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n # random 100 samples\r\n population = random.sample(range(len(raw_datasets[\"train\"])), 100)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n tokenized_datasets[\"train\"] = tokenized_datasets[\"train\"].select(population)\r\n tokenized_datasets[\"test\"] = tokenized_datasets[\"test\"].select(population)\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n # tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text_en\")\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets[\"train\"].column_names)\r\n \r\n tokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n \r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-4)\r\n net.train()\r\n for _ in range(epochs):\r\n for batch in trainloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n 
return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n return float(loss), len(testloader), {\"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```",
"Please also remove/comment these lines:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n```",
"Thanks @mariosasko .\r\n\r\nNow, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:\r\n\r\n`client.py`:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n\r\nDEVICE = \"cuda:1\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"imdb\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-5)\r\n net.train()\r\n for i in range(epochs):\r\n print(\"Epoch: \", i+1)\r\n j = 1\r\n print(\"####################### The length of the trainloader is: \", len(trainloader)) \r\n for batch in trainloader:\r\n print(\"####################### The batch number is: \", j)\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n j += 1\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, 
parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n print({\"loss\": float(loss), \"accuracy\": float(accuracy)})\r\n return float(loss), len(testloader), {\"loss\": float(loss), \"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCan I get any help, please?"
] | 2023-04-17T15:00:50 | 2023-04-25T13:50:45 | null | NONE | null | ### Describe the bug
Following is my code that I am trying to run, but I am facing an error (I have attached the whole error below):
My code:
```
from collections import OrderedDict
import warnings

import flwr as fl
import torch
import numpy as np

import random
from torch.utils.data import DataLoader

from datasets import load_dataset, load_metric

from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
#from transformers import tokenized_datasets

warnings.filterwarnings("ignore", category=UserWarning)
# DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DEVICE = "cpu"

CHECKPOINT = "distilbert-base-uncased"  # transformer model checkpoint


def load_data():
    """Load IMDB data (training and eval)"""
    raw_datasets = load_dataset("yhavinga/imdb_dutch")
    raw_datasets = raw_datasets.shuffle(seed=42)

    # remove unnecessary data split
    del raw_datasets["unsupervised"]

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

    def tokenize_function(examples):
        return tokenizer(examples["text"], truncation=True)

    # random 100 samples
    population = random.sample(range(len(raw_datasets["train"])), 100)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    tokenized_datasets["train"] = tokenized_datasets["train"].select(population)
    tokenized_datasets["test"] = tokenized_datasets["test"].select(population)

    # tokenized_datasets = tokenized_datasets.remove_columns("text")
    # tokenized_datasets = tokenized_datasets.rename_column("label", "labels")

    tokenized_datasets = tokenized_datasets.remove_columns("attention_mask")
    tokenized_datasets = tokenized_datasets.remove_columns("input_ids")
    tokenized_datasets = tokenized_datasets.remove_columns("label")
    tokenized_datasets = tokenized_datasets.remove_columns("text_en")

    # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets["train"].column_names)

    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
    trainloader = DataLoader(
        tokenized_datasets["train"],
        shuffle=True,
        batch_size=32,
        collate_fn=data_collator,
    )

    testloader = DataLoader(
        tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
    )

    return trainloader, testloader


def train(net, trainloader, epochs):
    optimizer = AdamW(net.parameters(), lr=5e-4)
    net.train()
    for _ in range(epochs):
        for batch in trainloader:
            batch = {k: v.to(DEVICE) for k, v in batch.items()}
            outputs = net(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()


def test(net, testloader):
    metric = load_metric("accuracy")
    loss = 0
    net.eval()
    for batch in testloader:
        batch = {k: v.to(DEVICE) for k, v in batch.items()}
        with torch.no_grad():
            outputs = net(**batch)
        logits = outputs.logits
        loss += outputs.loss.item()
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    loss /= len(testloader.dataset)
    accuracy = metric.compute()["accuracy"]
    return loss, accuracy


def main():
    net = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2
    ).to(DEVICE)

    trainloader, testloader = load_data()

    # Flower client
    class IMDBClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            print("Training Started...")
            train(net, trainloader, epochs=1)
            print("Training Finished.")
            return self.get_parameters(config={}), len(trainloader), {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net, testloader)
            return float(loss), len(testloader), {"accuracy": float(accuracy)}

    # Start client
    fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient())


if __name__ == "__main__":
    main()
```
Error:
```
Traceback (most recent call last):
File "client_2.py", line 136, in <module>
main()
File "client_2.py", line 132, in main
fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient())
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client
start_client(
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client
client_message, sleep_duration, keep_going = handle(
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 68, in handle
return _fit(client, server_msg.fit_ins), 0, True
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 157, in _fit
fit_res = client.fit(fit_ins)
File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 252, in _fit
results = self.numpy_client.fit(parameters, ins.config) # type: ignore
File "client_2.py", line 122, in fit
train(net, trainloader, epochs=1)
File "client_2.py", line 76, in train
for batch in trainloader:
File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
data = self._next_data()
File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/home/saurav/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 221, in __call__
batch = self.tokenizer.pad(
File "/home/saurav/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2713, in pad
raise ValueError(
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
```
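For context on the traceback: `DataCollatorWithPadding` is being handed batches that only contain the raw `text` column, because the snippet above removes `input_ids`, `attention_mask` and `label` while keeping the text columns. A hedged sketch of the column handling that the error message points to, based on the commented-out lines in the snippet (not a confirmed fix for this report):
```
# sketch, not the reporter's code: keep the tokenizer outputs and drop only the raw text columns
tokenized_datasets = tokenized_datasets.remove_columns(["text", "text_en"])
tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
```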
### Steps to reproduce the bug
Run the above code.
### Expected behavior
I don't know; I am doing this for the first time.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5765/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5764/comments | https://api.github.com/repos/huggingface/datasets/issues/5764/events | https://github.com/huggingface/datasets/issues/5764 | 1,670,740,198 | I_kwDODunzps5jlXjm | 5,764 | ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1 | {
"login": "sauravtii",
"id": 109907638,
"node_id": "U_kgDOBo0Otg",
"avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sauravtii",
"html_url": "https://github.com/sauravtii",
"followers_url": "https://api.github.com/users/sauravtii/followers",
"following_url": "https://api.github.com/users/sauravtii/following{/other_user}",
"gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions",
"organizations_url": "https://api.github.com/users/sauravtii/orgs",
"repos_url": "https://api.github.com/users/sauravtii/repos",
"events_url": "https://api.github.com/users/sauravtii/events{/privacy}",
"received_events_url": "https://api.github.com/users/sauravtii/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.",
"Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```",
"Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```",
"I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```",
"That worked!! Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|███████| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|█████████████| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|███████████████| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|███████████████████| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|█████████████████████████████████████████| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?",
"That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`."
] | 2023-04-17T09:08:18 | 2023-04-18T07:18:20 | 2023-04-18T07:18:20 | NONE | null | ### Describe the bug
I want to use this dataset (https://huggingface.co/datasets/josianem/imdb), so I am trying to load it using the following code:
```
dataset = load_dataset("josianem/imdb")
```
The dataset does not load and raises the following error:
```
Traceback (most recent call last):
File "sample.py", line 3, in <module>
dataset = load_dataset("josianem/imdb")
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
archive = dl_manager.download(_DOWNLOAD_URL)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
output_path = get_from_cache(
File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```
### Steps to reproduce the bug
You can reproduce the error by using the following code:
```
from datasets import load_dataset, load_metric
dataset = load_dataset("josianem/imdb")
```
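If the first attempt failed with a connection error, the thread above suggests the cache may hold an empty download; the retry from the comments forces a fresh download:
```
# taken from the comments above; refreshes the cached (empty) download
dataset = load_dataset("josianem/imdb", download_mode="force_redownload")
```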
### Expected behavior
The dataset should load (I am using this dataset for the first time, so I am not fully aware of the expected behavior).
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5764/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5763/comments | https://api.github.com/repos/huggingface/datasets/issues/5763/events | https://github.com/huggingface/datasets/pull/5763 | 1,670,476,302 | PR_kwDODunzps5OcMI7 | 5,763 | fix typo: "mow" -> "now" | {
"login": "csris",
"id": 1967608,
"node_id": "MDQ6VXNlcjE5Njc2MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1967608?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/csris",
"html_url": "https://github.com/csris",
"followers_url": "https://api.github.com/users/csris/followers",
"following_url": "https://api.github.com/users/csris/following{/other_user}",
"gists_url": "https://api.github.com/users/csris/gists{/gist_id}",
"starred_url": "https://api.github.com/users/csris/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/csris/subscriptions",
"organizations_url": "https://api.github.com/users/csris/orgs",
"repos_url": "https://api.github.com/users/csris/repos",
"events_url": "https://api.github.com/users/csris/events{/privacy}",
"received_events_url": "https://api.github.com/users/csris/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.004984 / 0.011008 (-0.006024) | 0.096781 / 0.038508 (0.058273) | 0.033049 / 0.023109 (0.009939) | 0.297681 / 0.275898 (0.021783) | 0.329553 / 0.323480 (0.006073) | 0.005697 / 0.007986 (-0.002289) | 0.004019 / 0.004328 (-0.000310) | 0.072691 / 0.004250 (0.068441) | 0.046921 / 0.037052 (0.009868) | 0.311467 / 0.258489 (0.052978) | 0.337616 / 0.293841 (0.043775) | 0.042400 / 0.128546 (-0.086146) | 0.011919 / 0.075646 (-0.063727) | 0.331390 / 0.419271 (-0.087881) | 0.051004 / 0.043533 (0.007471) | 0.295317 / 0.255139 (0.040178) | 0.316570 / 0.283200 (0.033371) | 0.099283 / 0.141683 (-0.042400) | 1.430583 / 1.452155 (-0.021572) | 1.493550 / 1.492716 (0.000834) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213634 / 0.018006 (0.195628) | 0.432557 / 0.000490 (0.432067) | 0.001586 / 0.000200 (0.001386) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025249 / 0.037411 (-0.012162) | 0.105433 / 0.014526 (0.090908) | 0.113474 / 0.176557 (-0.063082) | 0.168799 / 0.737135 (-0.568336) | 0.119363 / 0.296338 (-0.176975) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412450 / 0.215209 (0.197241) | 4.117432 / 2.077655 (2.039777) | 1.935176 / 1.504120 (0.431056) | 1.745674 / 1.541195 (0.204479) | 1.853872 / 1.468490 
(0.385382) | 0.703429 / 4.584777 (-3.881348) | 3.756981 / 3.745712 (0.011269) | 3.730607 / 5.269862 (-1.539255) | 1.839052 / 4.565676 (-2.726624) | 0.087574 / 0.424275 (-0.336701) | 0.012293 / 0.007607 (0.004686) | 0.517234 / 0.226044 (0.291190) | 5.189759 / 2.268929 (2.920831) | 2.418739 / 55.444624 (-53.025885) | 2.081424 / 6.876477 (-4.795053) | 2.204464 / 2.142072 (0.062392) | 0.842768 / 4.805227 (-3.962459) | 0.169014 / 6.500664 (-6.331650) | 0.063711 / 0.075469 (-0.011758) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180636 / 1.841788 (-0.661152) | 14.816088 / 8.074308 (6.741779) | 14.290085 / 10.191392 (4.098693) | 0.165267 / 0.680424 (-0.515156) | 0.017290 / 0.534201 (-0.516911) | 0.419678 / 0.579283 (-0.159605) | 0.418164 / 0.434364 (-0.016200) | 0.492210 / 0.540337 (-0.048127) | 0.588528 / 1.386936 (-0.798408) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007144 / 0.011353 (-0.004209) | 0.005223 / 0.011008 (-0.005785) | 0.073583 / 0.038508 (0.035075) | 0.033534 / 0.023109 (0.010425) | 0.339020 / 0.275898 (0.063122) | 0.366546 / 0.323480 (0.043066) | 0.006245 / 0.007986 (-0.001741) | 0.004081 / 0.004328 (-0.000247) | 0.073089 / 0.004250 (0.068839) | 0.047024 / 0.037052 (0.009971) | 0.342540 / 0.258489 (0.084051) | 0.379743 / 0.293841 (0.085902) | 0.037551 / 0.128546 (-0.090995) | 0.012246 / 0.075646 (-0.063400) | 0.084796 / 0.419271 (-0.334476) | 0.052256 / 0.043533 (0.008723) | 0.342675 / 0.255139 (0.087536) | 0.367157 / 0.283200 (0.083957) | 0.102939 / 0.141683 (-0.038744) | 1.409039 / 1.452155 (-0.043115) | 1.526137 / 1.492716 (0.033420) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208143 / 0.018006 (0.190136) | 0.437940 / 0.000490 (0.437450) | 0.000424 / 0.000200 (0.000224) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028321 / 0.037411 (-0.009091) | 0.110417 / 0.014526 (0.095891) | 0.119449 / 0.176557 (-0.057107) | 0.168081 / 0.737135 (-0.569054) | 0.126658 / 0.296338 (-0.169681) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429302 / 0.215209 (0.214093) | 4.270547 / 2.077655 (2.192892) | 2.061323 / 1.504120 (0.557203) | 1.857877 / 1.541195 (0.316682) | 1.873317 / 1.468490 (0.404827) | 0.688750 / 4.584777 (-3.896027) | 3.767951 / 3.745712 (0.022239) | 2.011436 / 5.269862 (-3.258426) | 1.299965 / 4.565676 (-3.265712) | 0.084799 / 0.424275 (-0.339476) | 0.012082 / 0.007607 (0.004475) | 0.521981 / 0.226044 (0.295937) | 5.265333 / 2.268929 (2.996405) | 2.494326 / 55.444624 (-52.950298) | 2.144672 / 6.876477 (-4.731804) | 2.365624 / 2.142072 (0.223551) | 0.839868 / 4.805227 (-3.965359) | 0.166614 / 6.500664 (-6.334050) | 0.063804 / 0.075469 (-0.011665) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.264623 / 1.841788 (-0.577164) | 14.946515 / 8.074308 (6.872207) | 14.450115 / 10.191392 (4.258723) | 0.163878 / 0.680424 (-0.516546) | 0.017501 / 0.534201 (-0.516700) | 0.420992 / 0.579283 (-0.158291) | 0.423005 / 0.434364 (-0.011359) | 0.489505 / 0.540337 (-0.050832) | 0.594631 / 1.386936 (-0.792305) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fd893098627230cc734f6009ad04cf885c979ac4 \"CML watermark\")\n"
] | 2023-04-17T06:03:44 | 2023-04-17T15:01:53 | 2023-04-17T14:54:46 | CONTRIBUTOR | null | I noticed a typo as I was reading the datasets documentation. This PR contains a trivial fix changing "mow" to "now." | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5763/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5763",
"html_url": "https://github.com/huggingface/datasets/pull/5763",
"diff_url": "https://github.com/huggingface/datasets/pull/5763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5763.patch",
"merged_at": "2023-04-17T14:54:46"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5762/comments | https://api.github.com/repos/huggingface/datasets/issues/5762/events | https://github.com/huggingface/datasets/issues/5762 | 1,670,326,470 | I_kwDODunzps5jjyjG | 5,762 | Not able to load the pile | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!"
] | 2023-04-17T03:09:10 | 2023-04-17T09:37:27 | 2023-04-17T09:37:27 | NONE | null | ### Describe the bug
I get this error when trying to load the Pile dataset:
```
TypeError: Couldn't cast array of type
struct<file: string, id: string>
to
{'id': Value(dtype='string', id=None)}
```
### Steps to reproduce the bug
Please visit the following sample notebook
https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB
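For reference, a minimal sketch of the kind of call involved (the dataset id comes from the discussion linked in the comments above; the exact subset and arguments are only in the notebook, so treat this as a placeholder):
```
from datasets import load_dataset

# placeholder call; see the linked notebook for the exact subset/arguments
dataset = load_dataset("EleutherAI/the_pile")
```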
### Expected behavior
The Pile dataset should load without this casting error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5762/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5761/comments | https://api.github.com/repos/huggingface/datasets/issues/5761/events | https://github.com/huggingface/datasets/issues/5761 | 1,670,034,582 | I_kwDODunzps5jirSW | 5,761 | One or several metadata.jsonl were found, but not in the same directory or in a parent directory | {
"login": "blghtr",
"id": 69686152,
"node_id": "MDQ6VXNlcjY5Njg2MTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/69686152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/blghtr",
"html_url": "https://github.com/blghtr",
"followers_url": "https://api.github.com/users/blghtr/followers",
"following_url": "https://api.github.com/users/blghtr/following{/other_user}",
"gists_url": "https://api.github.com/users/blghtr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/blghtr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/blghtr/subscriptions",
"organizations_url": "https://api.github.com/users/blghtr/orgs",
"repos_url": "https://api.github.com/users/blghtr/repos",
"events_url": "https://api.github.com/users/blghtr/events{/privacy}",
"received_events_url": "https://api.github.com/users/blghtr/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.",
"Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory.\r\n\r\nI agree that our documentation is not clear enough. Maybe we could improve it.",
"You can find a dummy dataset example here: https://huggingface.co/datasets/albertvillanova/tmp-imagefolder-metadata\r\n\r\n```\r\ntmp-imagefolder-metadata/\r\n└── data/\r\n ├── train.zip\r\n └── valid.zip\r\n```\r\nwhere, the directory structure within the `train.zip` archive is:\r\n```\r\nmetadata.jsonl\r\ntrain/\r\n ├── bharatanatyam/\r\n └── bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\r\n └── kathak/\r\n └── kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\r\n```\r\nand the metadata file contains:\r\n```\r\n{\"file_name\": \"train/bharatanatyam/bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\", \"text\": \"first\"}\r\n{\"file_name\": \"train/kathak/kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\", \"text\": \"second\"}\r\n```"
] | 2023-04-16T16:21:55 | 2023-04-19T11:53:24 | null | NONE | null | ### Describe the bug
An attempt to generate a dataset from a zip archive using `imagefolder` and `metadata.jsonl` does not lead to the expected result. I tried all the possible locations for the JSON file: a file inside the archive is ignored (the generated dataset contains only images), and a file next to the archive, as in [the docs](https://huggingface.co/docs/datasets/image_dataset#imagefolder), leads to an error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1610, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1609 _time = time.time()
-> 1610 for key, record in generator:
1611 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\packaged_modules\folder_based_builder\folder_based_builder.py:370, in FolderBasedBuilder._generate_examples(self, files, metadata_files, split_name, add_metadata, add_labels)
369 else:
--> 370 raise ValueError(
371 f"One or several metadata.{metadata_ext} were found, but not in the same directory or in a parent directory of {downloaded_dir_file}."
372 )
373 if metadata_dir is not None and downloaded_metadata_file is not None:
ValueError: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of C:\Users\User\.cache\huggingface\datasets\downloads\extracted\f7fb7de25fb28ae63089974524f2d271a39d83888bc456d04aa3b3d45f33e6a6\ff0745a0-a741-4d9e-b228-a93b851adf61.png.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset = load_dataset("imagefolder", data_dir=r'C:\Users\User\data')
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1790 # Download and prepare data
-> 1791 builder_instance.download_and_prepare(
1792 download_config=download_config,
1793 download_mode=download_mode,
1794 verification_mode=verification_mode,
1795 try_from_hf_gcs=try_from_hf_gcs,
1796 num_proc=num_proc,
1797 storage_options=storage_options,
1798 )
1800 # Build dataset for splits
1801 keep_in_memory = (
1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1803 )
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
889 if num_proc is not None:
890 prepare_split_kwargs["num_proc"] = num_proc
--> 891 self._download_and_prepare(
892 dl_manager=dl_manager,
893 verification_mode=verification_mode,
894 **prepare_split_kwargs,
895 **download_and_prepare_kwargs,
896 )
897 # Sync info
898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1651 super()._download_and_prepare(
1652 dl_manager,
1653 verification_mode,
1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1655 or verification_mode == VerificationMode.ALL_CHECKS,
1656 **prepare_splits_kwargs,
1657 )
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:986, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
982 split_dict.add(split_generator.split_info)
984 try:
985 # Prepare split will record examples associated to the split
--> 986 self._prepare_split(split_generator, **prepare_split_kwargs)
987 except OSError as e:
988 raise OSError(
989 "Cannot find data file. "
990 + (self.manual_download_instructions or "")
991 + "\nOriginal error:\n"
992 + str(e)
993 ) from None
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1490, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1488 gen_kwargs = split_generator.gen_kwargs
1489 job_id = 0
-> 1490 for job_id, done, content in self._prepare_split_single(
1491 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1492 ):
1493 if done:
1494 result = content
File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1646, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1644 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1645 e = e.__context__
-> 1646 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1648 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
1. Organize the directory structure as in the docs:
folder/metadata.jsonl
folder/train.zip
2. Run `load_dataset("imagefolder", data_dir='folder', split='train')`
### Expected behavior
The dataset should be generated with all the additional features from metadata.jsonl.
### Environment info
- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.0
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5761/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5760/comments | https://api.github.com/repos/huggingface/datasets/issues/5760/events | https://github.com/huggingface/datasets/issues/5760 | 1,670,028,072 | I_kwDODunzps5jipso | 5,760 | Multi-image loading in Imagefolder dataset | {
"login": "vvvm23",
"id": 44398246,
"node_id": "MDQ6VXNlcjQ0Mzk4MjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vvvm23",
"html_url": "https://github.com/vvvm23",
"followers_url": "https://api.github.com/users/vvvm23/followers",
"following_url": "https://api.github.com/users/vvvm23/following{/other_user}",
"gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions",
"organizations_url": "https://api.github.com/users/vvvm23/orgs",
"repos_url": "https://api.github.com/users/vvvm23/repos",
"events_url": "https://api.github.com/users/vvvm23/events{/privacy}",
"received_events_url": "https://api.github.com/users/vvvm23/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Supporting this could be useful (I remember a use-case for this on the Hub). Do you agree @polinaeterna? \r\n\r\nImplementing this should be possible if we iterate over metadata files and build image/audio file paths instead of iterating over image/audio files and looking for the corresponding entries in metadata files.",
"I've build a similar feature from scratch and would be interested to combine it with the datasets package.\r\n\r\nMy solution works something like this:\r\nInterpret the first element of each column as a file path. If the path exists and is a file, (try to) load the files for the entire column. Thereby, one isn't restricted to a particular column name, with comes in handy when dealing with multiple file columns.\r\n\r\nI've looked into the code to try to implement this, but didn't find the right places. I'm also open to contribute, but will need some guidance."
] | 2023-04-16T16:01:05 | 2023-05-16T10:14:59 | null | NONE | null | ### Feature request
Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry.
This only really makes sense if a metadata file is present.
Currently you can use the following format (example `metadata.jsonl`):
```
{'file_name': 'path_to_image.png', 'metadata': ...}
...
```
which will return a batch with key `image` and any other metadata.
I would propose extending `file_name` to also accept a list of files, which would return a batch with key `images` and any other metadata.
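A sketch of what the proposed `metadata.jsonl` could look like under this extension (file names are placeholders; this describes the proposal, not current behaviour):
```
{'file_name': ['path_to_image.png', 'path_to_conditioning_image.png'], 'metadata': ...}
...
```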
### Motivation
This is useful, for example, in segmentation tasks in computer vision, or in text-to-image models that also accept conditioning signals such as another image, a feature map, or similar. Currently, if I want to do this, I would need to write a custom dataset rather than just use `imagefolder`.
### Your contribution
Would be open to doing a PR, but also happy for someone else to take it as I am not familiar with the datasets library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5760/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5759/comments | https://api.github.com/repos/huggingface/datasets/issues/5759/events | https://github.com/huggingface/datasets/issues/5759 | 1,669,977,848 | I_kwDODunzps5jidb4 | 5,759 | Can I load in list of list of dict format? | {
"login": "LZY-the-boys",
"id": 72137647,
"node_id": "MDQ6VXNlcjcyMTM3NjQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LZY-the-boys",
"html_url": "https://github.com/LZY-the-boys",
"followers_url": "https://api.github.com/users/LZY-the-boys/followers",
"following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}",
"gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions",
"organizations_url": "https://api.github.com/users/LZY-the-boys/orgs",
"repos_url": "https://api.github.com/users/LZY-the-boys/repos",
"events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}",
"received_events_url": "https://api.github.com/users/LZY-the-boys/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is composed of one JSON object, where the names are the names of the columns, and the values are the values for the row-column pair."
] | 2023-04-16T13:50:14 | 2023-04-19T12:04:36 | null | NONE | null | ### Feature request
My JSONL dataset has the following format:
```
[{'input':xxx, 'output':xxx}, {'input':xxx, 'output':xxx}, ...]
[{'input':xxx, 'output':xxx}, {'input':xxx, 'output':xxx}, ...]
```
I tried to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, but it raises:
```
File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json
).read()
File "site-packages/datasets/io/json.py", line 59, in read
self.builder.download_and_prepare(
File "site-packages/datasets/builder.py", line 872, in download_and_prepare
self._download_and_prepare(
File "site-packages/datasets/builder.py", line 967, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "site-packages/datasets/builder.py", line 1749, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
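As a stop-gap, a hedged sketch of flattening the nested lists in plain Python before building the dataset (assumes a `datasets` version that provides `Dataset.from_list`; the file path and column names are placeholders based on the example above):
```
import json

from datasets import Dataset

rows = []
with open("data.jsonl") as f:  # placeholder path
    for line in f:
        # each line is a JSON list of {'input': ..., 'output': ...} objects
        rows.extend(json.loads(line))

dataset = Dataset.from_list(rows)  # supports .map and .shuffle as usual
```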
### Motivation
I want to use features like `Dataset.map` or `Dataset.shuffle`, so I need the in-memory dataset to be an `arrow_dataset.Dataset`.
### Your contribution
PR | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5759/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5758/comments | https://api.github.com/repos/huggingface/datasets/issues/5758/events | https://github.com/huggingface/datasets/pull/5758 | 1,669,920,923 | PR_kwDODunzps5OaY9S | 5,758 | Fixes #5757 | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Can you do that\n> before we merge ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5758#issuecomment-1516488124>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73QPLA735AMN4PFDYRTXCFFTJANCNFSM6AAAAAAXACBUQU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"Nice thanks !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007161 / 0.011353 (-0.004192) | 0.005099 / 0.011008 (-0.005909) | 0.099301 / 0.038508 (0.060793) | 0.034144 / 0.023109 (0.011034) | 0.298273 / 0.275898 (0.022375) | 0.329009 / 0.323480 (0.005529) | 0.005486 / 0.007986 (-0.002500) | 0.003887 / 0.004328 (-0.000441) | 0.074769 / 0.004250 (0.070518) | 0.047505 / 0.037052 (0.010453) | 0.306550 / 0.258489 (0.048061) | 0.335380 / 0.293841 (0.041540) | 0.034796 / 0.128546 (-0.093750) | 0.012152 / 0.075646 (-0.063495) | 0.332194 / 0.419271 (-0.087077) | 0.049661 / 0.043533 (0.006128) | 0.296832 / 0.255139 (0.041693) | 0.316417 / 0.283200 (0.033218) | 0.098234 / 0.141683 (-0.043449) | 1.494114 / 1.452155 (0.041959) | 1.566468 / 1.492716 (0.073751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221309 / 0.018006 (0.203303) | 0.440855 / 0.000490 (0.440365) | 0.003025 / 0.000200 (0.002825) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026594 / 0.037411 (-0.010817) | 0.110406 / 0.014526 (0.095880) | 0.116117 / 0.176557 (-0.060439) | 0.173502 / 0.737135 (-0.563633) | 0.121988 / 0.296338 (-0.174351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403307 / 0.215209 (0.188098) | 4.034146 / 2.077655 (1.956492) | 1.852162 / 1.504120 (0.348042) | 1.675643 / 1.541195 (0.134448) | 1.748851 / 1.468490 
(0.280360) | 0.703458 / 4.584777 (-3.881319) | 3.809055 / 3.745712 (0.063343) | 2.118060 / 5.269862 (-3.151801) | 1.338394 / 4.565676 (-3.227282) | 0.086319 / 0.424275 (-0.337956) | 0.012195 / 0.007607 (0.004588) | 0.520814 / 0.226044 (0.294769) | 5.201074 / 2.268929 (2.932145) | 2.418384 / 55.444624 (-53.026240) | 2.085496 / 6.876477 (-4.790980) | 2.245638 / 2.142072 (0.103565) | 0.849042 / 4.805227 (-3.956185) | 0.171912 / 6.500664 (-6.328752) | 0.065691 / 0.075469 (-0.009778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159985 / 1.841788 (-0.681803) | 14.910867 / 8.074308 (6.836559) | 14.473926 / 10.191392 (4.282534) | 0.181532 / 0.680424 (-0.498891) | 0.017203 / 0.534201 (-0.516998) | 0.420805 / 0.579283 (-0.158479) | 0.426455 / 0.434364 (-0.007909) | 0.497086 / 0.540337 (-0.043251) | 0.593909 / 1.386936 (-0.793027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007688 / 0.011353 (-0.003665) | 0.005353 / 0.011008 (-0.005656) | 0.076869 / 0.038508 (0.038361) | 0.035030 / 0.023109 (0.011921) | 0.344649 / 0.275898 (0.068751) | 0.387669 / 0.323480 (0.064190) | 0.005913 / 0.007986 (-0.002072) | 0.004107 / 0.004328 (-0.000221) | 0.074111 / 0.004250 (0.069860) | 0.049351 / 0.037052 (0.012299) | 0.346061 / 0.258489 (0.087572) | 0.395499 / 0.293841 (0.101658) | 0.035549 / 0.128546 (-0.092997) | 0.012340 / 0.075646 (-0.063307) | 0.087031 / 0.419271 (-0.332241) | 0.049088 / 0.043533 (0.005556) | 0.342774 / 0.255139 (0.087635) | 0.362037 / 0.283200 (0.078837) | 0.100329 / 0.141683 (-0.041354) | 1.442349 / 1.452155 (-0.009806) | 1.551079 / 1.492716 (0.058363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228458 / 0.018006 (0.210452) | 0.446190 / 0.000490 (0.445701) | 0.000413 / 0.000200 (0.000213) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029884 / 0.037411 (-0.007527) | 0.117527 / 0.014526 (0.103002) | 0.123221 / 0.176557 (-0.053335) | 0.172290 / 0.737135 (-0.564845) | 0.128682 / 0.296338 (-0.167657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420905 / 0.215209 (0.205696) | 4.199342 / 2.077655 (2.121687) | 2.007327 / 1.504120 (0.503207) | 1.814732 / 1.541195 (0.273537) | 1.893999 / 1.468490 (0.425509) | 0.712259 / 4.584777 (-3.872518) | 3.843402 / 3.745712 (0.097690) | 3.198514 / 5.269862 (-2.071348) | 1.678732 / 4.565676 (-2.886945) | 0.086435 / 0.424275 (-0.337840) | 0.012233 / 0.007607 (0.004626) | 0.526121 / 0.226044 (0.300077) | 5.190578 / 2.268929 (2.921650) | 2.473259 / 55.444624 (-52.971366) | 2.142795 / 6.876477 (-4.733682) | 2.277594 / 2.142072 (0.135521) | 0.846117 / 4.805227 (-3.959110) | 0.169458 / 6.500664 (-6.331206) | 0.065017 / 0.075469 (-0.010452) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272479 / 1.841788 (-0.569309) | 15.086473 / 8.074308 (7.012165) | 14.659728 / 10.191392 (4.468336) | 0.163915 / 0.680424 (-0.516509) | 0.017561 / 0.534201 (-0.516640) | 0.422074 / 0.579283 (-0.157209) | 0.421963 / 0.434364 (-0.012401) | 0.490321 / 0.540337 (-0.050016) | 0.586854 / 1.386936 (-0.800083) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7ce0ac60c7efc10886471932854903a7c19f172 \"CML watermark\")\n"
] | 2023-04-16T11:56:01 | 2023-04-20T15:37:49 | 2023-04-20T15:30:48 | CONTRIBUTOR | null | Fixes the bug #5757 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5758/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5758",
"html_url": "https://github.com/huggingface/datasets/pull/5758",
"diff_url": "https://github.com/huggingface/datasets/pull/5758.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5758.patch",
"merged_at": "2023-04-20T15:30:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5757/comments | https://api.github.com/repos/huggingface/datasets/issues/5757/events | https://github.com/huggingface/datasets/issues/5757 | 1,669,910,503 | I_kwDODunzps5jiM_n | 5,757 | Tilde (~) is not supported | {
"login": "eli-osherovich",
"id": 2437102,
"node_id": "MDQ6VXNlcjI0MzcxMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eli-osherovich",
"html_url": "https://github.com/eli-osherovich",
"followers_url": "https://api.github.com/users/eli-osherovich/followers",
"following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}",
"gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions",
"organizations_url": "https://api.github.com/users/eli-osherovich/orgs",
"repos_url": "https://api.github.com/users/eli-osherovich/repos",
"events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}",
"received_events_url": "https://api.github.com/users/eli-osherovich/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2023-04-16T11:48:10 | 2023-04-20T15:30:51 | 2023-04-20T15:30:51 | CONTRIBUTOR | null | ### Describe the bug
It seems that `~` is not recognized correctly in local paths. Whenever I try to use it I get an exception
### Steps to reproduce the bug
```python
load_dataset("imagefolder", data_dir="~/data/my_dataset")
```
Will generate the following error:
```
EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files
```
### Expected behavior
Load the dataset.
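A minimal workaround sketch in the meantime (independent of how the linked fix handles it): expand the tilde yourself before passing the path.
```python
import os

from datasets import load_dataset

# Expand "~" manually, since this version of `datasets` does not perform user expansion.
data_dir = os.path.expanduser("~/data/my_dataset")
dataset = load_dataset("imagefolder", data_dir=data_dir)
```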
### Environment info
datasets==2.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5757/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5756/comments | https://api.github.com/repos/huggingface/datasets/issues/5756/events | https://github.com/huggingface/datasets/issues/5756 | 1,669,678,080 | I_kwDODunzps5jhUQA | 5,756 | Calling shuffle on a IterableDataset with streaming=True, gives "ValueError: cannot reshape array" | {
"login": "rohfle",
"id": 21077341,
"node_id": "MDQ6VXNlcjIxMDc3MzQx",
"avatar_url": "https://avatars.githubusercontent.com/u/21077341?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rohfle",
"html_url": "https://github.com/rohfle",
"followers_url": "https://api.github.com/users/rohfle/followers",
"following_url": "https://api.github.com/users/rohfle/following{/other_user}",
"gists_url": "https://api.github.com/users/rohfle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rohfle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rohfle/subscriptions",
"organizations_url": "https://api.github.com/users/rohfle/orgs",
"repos_url": "https://api.github.com/users/rohfle/repos",
"events_url": "https://api.github.com/users/rohfle/events{/privacy}",
"received_events_url": "https://api.github.com/users/rohfle/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! I've merged a PR on the Hub with a fix: https://huggingface.co/datasets/fashion_mnist/discussions/3",
"Thanks, this appears to have fixed the issue.\r\n\r\nI've created a PR for the same change in the mnist dataset: https://huggingface.co/datasets/mnist/discussions/3/files"
] | 2023-04-16T04:59:47 | 2023-04-18T03:40:56 | 2023-04-18T03:40:56 | NONE | null | ### Describe the bug
When calling shuffle on an IterableDataset with streaming=True, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/home/administrator/Documents/Projects/huggingface/jax-diffusers-sprint-consistency-models/virtualenv/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/home/administrator/.cache/huggingface/modules/datasets_modules/datasets/mnist/fda16c03c4ecfb13f165ba7e29cf38129ce035011519968cdaf74894ce91c9d4/mnist.py", line 111, in _generate_examples
images = np.frombuffer(f.read(), dtype=np.uint8).reshape(size, 28, 28)
ValueError: cannot reshape array of size 59992 into shape (60000,28,28)
```
Tested with the fashion_mnist and mnist datasets
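A hedged sketch of the likely mechanism (an assumption based on the `kwargs_with_shuffled_shards` frame in the traceback, not a statement about the exact Hub fix): list-valued generator kwargs are treated as shards and shuffled, so a script that passes `[images_path, labels_path]` as one list can end up reading the labels file where it expects images.
```python
import random

# Hypothetical gen_kwargs as an MNIST-style dataset script might pass them.
gen_kwargs = {"filepath": ["train-images-idx3-ubyte.gz", "train-labels-idx1-ubyte.gz"]}

# Sketch of shard shuffling: every list value in gen_kwargs is permuted.
rng = random.Random(42)
shuffled = {k: rng.sample(v, len(v)) if isinstance(v, list) else v for k, v in gen_kwargs.items()}
print(shuffled["filepath"])  # images/labels order may now be swapped -> bad reshape
```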
### Steps to reproduce the bug
Code to reproduce
```python
from datasets import load_dataset
SHUFFLE_SEED = 42
SHUFFLE_BUFFER_SIZE = 10_000
dataset = load_dataset('fashion_mnist', streaming=True).shuffle(seed=SHUFFLE_SEED, buffer_size=SHUFFLE_BUFFER_SIZE)
next(iter(dataset['train']))
```
### Expected behavior
A random item from the dataset and no error
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5756/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5755/comments | https://api.github.com/repos/huggingface/datasets/issues/5755/events | https://github.com/huggingface/datasets/issues/5755 | 1,669,048,438 | I_kwDODunzps5je6h2 | 5,755 | ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' | {
"login": "fivejjs",
"id": 1405491,
"node_id": "MDQ6VXNlcjE0MDU0OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1405491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fivejjs",
"html_url": "https://github.com/fivejjs",
"followers_url": "https://api.github.com/users/fivejjs/followers",
"following_url": "https://api.github.com/users/fivejjs/following{/other_user}",
"gists_url": "https://api.github.com/users/fivejjs/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fivejjs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fivejjs/subscriptions",
"organizations_url": "https://api.github.com/users/fivejjs/orgs",
"repos_url": "https://api.github.com/users/fivejjs/repos",
"events_url": "https://api.github.com/users/fivejjs/events{/privacy}",
"received_events_url": "https://api.github.com/users/fivejjs/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"update the version. fix"
] | 2023-04-14T23:28:54 | 2023-04-14T23:36:19 | 2023-04-14T23:36:19 | NONE | null | ### Describe the bug
Has the module moved to a new place?
### Steps to reproduce the bug
in the import step,
```python
from datasets.utils.deprecation_utils import DeprecatedEnum
```
error:
```
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
```
### Expected behavior
import successfully
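Per the comment above, upgrading `datasets` resolves this import; a quick verification sketch:
```python
# Sanity check: print the installed version before importing internals.
# The import below fails on datasets 1.18.3 and succeeds after upgrading
# (e.g. via `pip install -U datasets`), per the comment above.
import datasets

print(datasets.__version__)
from datasets.utils.deprecation_utils import DeprecatedEnum  # noqa: E402
```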
### Environment info
python==3.9.16
datasets==1.18.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5755/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5754/comments | https://api.github.com/repos/huggingface/datasets/issues/5754/events | https://github.com/huggingface/datasets/pull/5754 | 1,668,755,035 | PR_kwDODunzps5OWozh | 5,754 | Minor tqdm fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004592 / 0.011008 (-0.006416) | 0.097239 / 0.038508 (0.058731) | 0.028609 / 0.023109 (0.005499) | 0.309225 / 0.275898 (0.033327) | 0.340015 / 0.323480 (0.016535) | 0.004857 / 0.007986 (-0.003129) | 0.004649 / 0.004328 (0.000320) | 0.074770 / 0.004250 (0.070520) | 0.038351 / 0.037052 (0.001299) | 0.313360 / 0.258489 (0.054871) | 0.350256 / 0.293841 (0.056416) | 0.030770 / 0.128546 (-0.097776) | 0.011591 / 0.075646 (-0.064055) | 0.322444 / 0.419271 (-0.096828) | 0.043704 / 0.043533 (0.000171) | 0.311790 / 0.255139 (0.056651) | 0.339183 / 0.283200 (0.055984) | 0.088041 / 0.141683 (-0.053642) | 1.490649 / 1.452155 (0.038494) | 1.561789 / 1.492716 (0.069072) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208984 / 0.018006 (0.190978) | 0.406105 / 0.000490 (0.405616) | 0.003152 / 0.000200 (0.002952) | 0.000074 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022622 / 0.037411 (-0.014790) | 0.095819 / 0.014526 (0.081294) | 0.105132 / 0.176557 (-0.071424) | 0.165684 / 0.737135 (-0.571451) | 0.106706 / 0.296338 (-0.189632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426126 / 0.215209 (0.210917) | 4.233864 / 2.077655 (2.156209) | 1.918727 / 1.504120 (0.414607) | 1.729905 / 1.541195 (0.188710) | 1.760342 / 1.468490 
(0.291852) | 0.695449 / 4.584777 (-3.889328) | 3.413531 / 3.745712 (-0.332181) | 1.904557 / 5.269862 (-3.365305) | 1.270604 / 4.565676 (-3.295072) | 0.083018 / 0.424275 (-0.341257) | 0.012760 / 0.007607 (0.005152) | 0.523991 / 0.226044 (0.297947) | 5.236132 / 2.268929 (2.967204) | 2.360959 / 55.444624 (-53.083665) | 1.996533 / 6.876477 (-4.879943) | 2.072934 / 2.142072 (-0.069138) | 0.804133 / 4.805227 (-4.001094) | 0.150976 / 6.500664 (-6.349688) | 0.065503 / 0.075469 (-0.009966) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211828 / 1.841788 (-0.629960) | 13.657743 / 8.074308 (5.583435) | 13.887148 / 10.191392 (3.695756) | 0.145996 / 0.680424 (-0.534428) | 0.016562 / 0.534201 (-0.517639) | 0.380359 / 0.579283 (-0.198924) | 0.388698 / 0.434364 (-0.045666) | 0.440373 / 0.540337 (-0.099965) | 0.531753 / 1.386936 (-0.855183) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006444 / 0.011353 (-0.004909) | 0.004569 / 0.011008 (-0.006439) | 0.076239 / 0.038508 (0.037731) | 0.028462 / 0.023109 (0.005352) | 0.365540 / 0.275898 (0.089642) | 0.398242 / 0.323480 (0.074762) | 0.005785 / 0.007986 (-0.002200) | 0.003346 / 0.004328 (-0.000982) | 0.076296 / 0.004250 (0.072046) | 0.039853 / 0.037052 (0.002800) | 0.367684 / 0.258489 (0.109195) | 0.409570 / 0.293841 (0.115730) | 0.030536 / 0.128546 (-0.098010) | 0.011534 / 0.075646 (-0.064112) | 0.084962 / 0.419271 (-0.334309) | 0.042708 / 0.043533 (-0.000825) | 0.344058 / 0.255139 (0.088919) | 0.389096 / 0.283200 (0.105897) | 0.090559 / 0.141683 (-0.051124) | 1.507101 / 1.452155 (0.054946) | 1.563977 / 1.492716 (0.071260) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228740 / 0.018006 (0.210734) | 0.396890 / 0.000490 (0.396400) | 0.000392 / 0.000200 (0.000192) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025052 / 0.037411 (-0.012360) | 0.099951 / 0.014526 (0.085426) | 0.106847 / 0.176557 (-0.069710) | 0.156666 / 0.737135 (-0.580469) | 0.110344 / 0.296338 (-0.185994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.442363 / 0.215209 (0.227154) | 4.429571 / 2.077655 (2.351917) | 2.076501 / 1.504120 (0.572381) | 1.875226 / 1.541195 (0.334031) | 1.909093 / 1.468490 (0.440603) | 0.703047 / 4.584777 (-3.881730) | 3.457036 / 3.745712 (-0.288676) | 2.866648 / 5.269862 (-2.403214) | 1.524430 / 4.565676 (-3.041246) | 0.083687 / 0.424275 (-0.340588) | 0.012251 / 0.007607 (0.004643) | 0.543945 / 0.226044 (0.317901) | 5.440559 / 2.268929 (3.171630) | 2.522924 / 55.444624 (-52.921700) | 2.188770 / 6.876477 (-4.687707) | 2.249632 / 2.142072 (0.107559) | 0.813499 / 4.805227 (-3.991728) | 0.152861 / 6.500664 (-6.347803) | 0.067189 / 0.075469 (-0.008280) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284255 / 1.841788 (-0.557533) | 14.207864 / 8.074308 (6.133556) | 14.279691 / 10.191392 (4.088299) | 0.167027 / 0.680424 (-0.513396) | 0.016455 / 0.534201 (-0.517746) | 0.380798 / 0.579283 (-0.198485) | 0.390013 / 0.434364 (-0.044351) | 0.445493 / 0.540337 (-0.094845) | 0.526278 / 1.386936 (-0.860658) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3fdb46c526b9d070df0eb2d56b0ecacdace7cb9a \"CML watermark\")\n"
] | 2023-04-14T18:15:14 | 2023-04-20T15:27:58 | 2023-04-20T15:21:00 | CONTRIBUTOR | null | `GeneratorBasedBuilder`'s TQDM bars were not used as context managers. This PR fixes that (missed these bars in https://github.com/huggingface/datasets/pull/5560).
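An illustrative sketch of the pattern (not the actual `datasets` code; the shard sizes below are made up), which also shows one shared bar accumulating progress across shards, relevant to the next point:
```python
from tqdm.auto import tqdm

# Hypothetical shards standing in for dataset shards being written.
shards = [range(100), range(250), range(50)]

# Using the bar as a context manager guarantees it is closed even if writing raises,
# and sharing one bar lets progress accumulate across shards instead of resetting.
with tqdm(total=sum(len(s) for s in shards), unit=" examples") as pbar:
    for shard in shards:
        for _ in shard:
            pbar.update(1)
```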
Also, this PR modifies the single-proc `save_to_disk` to fix the issue with the TQDM bar not accumulating the progress in the multi-shard setting (again, this bug was introduced by me in the linked PR 😎) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5754/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5754",
"html_url": "https://github.com/huggingface/datasets/pull/5754",
"diff_url": "https://github.com/huggingface/datasets/pull/5754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5754.patch",
"merged_at": "2023-04-20T15:21:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5753/comments | https://api.github.com/repos/huggingface/datasets/issues/5753/events | https://github.com/huggingface/datasets/issues/5753 | 1,668,659,536 | I_kwDODunzps5jdblQ | 5,753 | [IterableDatasets] Add column followed by interleave datasets gives bogus outputs | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Problem with the code snippet! Using global vars and functions was not a good idea with iterable datasets!\r\n\r\nIf we update to:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn_1 = [f\"new dataset 1, row {i}\" for i in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = new_features[\"file\"] # I know that \"file\" has the right column type to match our new feature\r\n\r\ndef add_column_fn_1(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_1[idx]}\r\n\r\nmodified_dataset_1 = original_dataset.map(add_column_fn_1, with_indices=True, features=new_features)\r\n\r\n# now create a second modified dataset using the same trick\r\ncolumn_2 = [f\"new dataset 2, row {i}\" for i in range(50)]\r\n\r\ndef add_column_fn_2(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column_2[idx]}\r\n\r\nmodified_dataset_2 = original_dataset.map(add_column_fn_2, with_indices=True, features=new_features)\r\n\r\ninterleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])\r\n\r\nfor i, sample in enumerate(interleaved_dataset):\r\n print(sample[\"new_column\"])\r\n if i == 10:\r\n break\r\n```\r\nwe get the correct outputs:\r\n```python\r\nnew dataset 1, row 0\r\nnew dataset 2, row 0\r\nnew dataset 1, row 1\r\nnew dataset 2, row 1\r\nnew dataset 1, row 2\r\nnew dataset 2, row 2\r\nnew dataset 1, row 3\r\nnew dataset 2, row 3\r\nnew dataset 1, row 4\r\nnew dataset 2, row 4\r\nnew dataset 1, row 5\r\n```\r\n"
] | 2023-04-14T17:32:31 | 2023-04-14T17:45:52 | 2023-04-14T17:36:37 | CONTRIBUTOR | null | ### Describe the bug
If we add a new column to our iterable dataset using the hack described in #5752 and then interleave datasets, the new column is pinned to one value.
### Steps to reproduce the bug
What we're going to do here is:
1. Load an iterable dataset in streaming mode (`original_dataset`)
2. Add a new column to this dataset using the hack in #5752 (`modified_dataset_1`)
3. Create another new dataset by adding a column with the same key but different values (`modified_dataset_2`)
4. Interleave our new datasets (`modified_dataset_1` + `modified_dataset_2`)
5. Check the value of our newly added column (`new_column`)
```python
from datasets import load_dataset, interleave_datasets
# load an iterable dataset
original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
# now add a new column to our streaming dataset using our hack from 5752
name = "new_column"
column = [f"new dataset 1, row {i}" for i in range(50)]
new_features = original_dataset.features.copy()
new_features[name] = new_features["file"] # I know that "file" has the right column type to match our new feature
def add_column_fn(example, idx):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: column[idx]}
modified_dataset_1 = original_dataset.map(add_column_fn, with_indices=True, features=new_features)
# now create a second modified dataset using the same trick
column = [f"new dataset 2, row {i}" for i in range(50)]
def add_column_fn(example, idx):
if name in example:
raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.")
return {name: column[idx]}
modified_dataset_2 = original_dataset.map(add_column_fn, with_indices=True, features=new_features)
# interleave these datasets
interleaved_dataset = interleave_datasets([modified_dataset_1, modified_dataset_2])
# now check what the value of the added column is
for i, sample in enumerate(interleaved_dataset):
print(sample["new_column"])
if i == 10:
break
```
**Print Output:**
```
new dataset 2, row 0
new dataset 2, row 0
new dataset 2, row 1
new dataset 2, row 1
new dataset 2, row 2
new dataset 2, row 2
new dataset 2, row 3
new dataset 2, row 3
new dataset 2, row 4
new dataset 2, row 4
new dataset 2, row 5
```
We see that we only get outputs from our second dataset.
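A minimal pure-Python sketch of the suspected mechanism (see the corrected snippet in the comments above): iterable datasets evaluate lazily, so both mapped functions read the rebound global `column` at iteration time rather than the value it had when `.map` was called.
```python
# Pure-Python illustration, independent of `datasets`.
column = [f"new dataset 1, row {i}" for i in range(50)]

def add_column_fn(example, idx):
    return {"new_column": column[idx]}

fn_1 = add_column_fn  # kept by the first .map

column = [f"new dataset 2, row {i}" for i in range(50)]  # rebinds the global name

def add_column_fn(example, idx):
    return {"new_column": column[idx]}

fn_2 = add_column_fn  # kept by the second .map

# Called lazily (as during iteration), both closures see the second list:
print(fn_1({}, 0))  # {'new_column': 'new dataset 2, row 0'}
print(fn_2({}, 0))  # {'new_column': 'new dataset 2, row 0'}
```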
### Expected behavior
We should interleave between dataset 1 and 2 and increase in row value:
```
new dataset 1, row 0
new dataset 2, row 0
new dataset 1, row 1
new dataset 2, row 1
new dataset 1, row 2
new dataset 2, row 2
...
```
### Environment info
- datasets version: 2.10.2.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5753/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5752/comments | https://api.github.com/repos/huggingface/datasets/issues/5752/events | https://github.com/huggingface/datasets/issues/5752 | 1,668,574,209 | I_kwDODunzps5jdGwB | 5,752 | Streaming dataset loses `.feature` method after `.add_column` | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r\nfrom datasets import load_dataset, Value\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn = [\"some random text\" for _ in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = Value(dtype=\"string\", id=None) # I know the correct column type for this feature\r\n\r\ndef add_column_fn(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column[idx]}\r\n\r\nmodified_dataset = original_dataset.map(add_column_fn, with_indices=True, features=new_features)\r\n\r\nprint(modified_dataset.features.keys())\r\n```\r\n**Print Output:**\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])\r\n```\r\n"
] | 2023-04-14T16:39:50 | 2023-04-14T17:46:54 | null | CONTRIBUTOR | null | ### Describe the bug
After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features through the `.features` attribute.
### Steps to reproduce the bug
```python
from datasets import load_dataset
original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
print(original_dataset.features.keys())
# now add a new column to our streaming dataset
modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
print(modified_dataset.features.keys())
```
**Print Output:**
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 8
6 # now add a new column to our streaming dataset
7 modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
----> 8 print(modified_dataset.features.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
We see that we get the features for the original dataset, but not the modified one with the added column.
### Expected behavior
Features should be preserved after adding a new column, i.e. calling:
```python
print(modified_dataset.features.keys())
```
Should return:
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])
```
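A condensed sketch of the workaround from the comment above (the `Value("string")` dtype for the new column is an assumption):
```python
from datasets import Value, load_dataset

ds = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

new_features = ds.features.copy()
new_features["new_column"] = Value("string")  # assumed dtype for the added column

column = ["some random text"] * 50
ds_with_col = ds.map(lambda ex, i: {"new_column": column[i]}, with_indices=True, features=new_features)
print(ds_with_col.features.keys())  # now includes 'new_column'
```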
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5752/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5751/comments | https://api.github.com/repos/huggingface/datasets/issues/5751/events | https://github.com/huggingface/datasets/pull/5751 | 1,668,333,316 | PR_kwDODunzps5OVMuT | 5,751 | Consistent ArrayXD Python formatting + better NumPy/Pandas formatting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010459 / 0.011353 (-0.000894) | 0.007009 / 0.011008 (-0.003999) | 0.153885 / 0.038508 (0.115377) | 0.037308 / 0.023109 (0.014199) | 0.431931 / 0.275898 (0.156033) | 0.452940 / 0.323480 (0.129461) | 0.008572 / 0.007986 (0.000586) | 0.007479 / 0.004328 (0.003150) | 0.093835 / 0.004250 (0.089584) | 0.050172 / 0.037052 (0.013120) | 0.428855 / 0.258489 (0.170366) | 0.517814 / 0.293841 (0.223974) | 0.058558 / 0.128546 (-0.069988) | 0.019550 / 0.075646 (-0.056096) | 0.449837 / 0.419271 (0.030566) | 0.069710 / 0.043533 (0.026177) | 0.444163 / 0.255139 (0.189024) | 0.469003 / 0.283200 (0.185803) | 0.114665 / 0.141683 (-0.027018) | 1.822415 / 1.452155 (0.370261) | 1.956360 / 1.492716 (0.463644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237489 / 0.018006 (0.219483) | 0.556947 / 0.000490 (0.556457) | 0.006988 / 0.000200 (0.006789) | 0.000499 / 0.000054 (0.000444) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037047 / 0.037411 (-0.000364) | 0.133973 / 0.014526 (0.119447) | 0.137072 / 0.176557 (-0.039485) | 0.201520 / 0.737135 (-0.535615) | 0.144177 / 0.296338 (-0.152161) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.694853 / 0.215209 (0.479644) | 6.805746 / 2.077655 (4.728091) | 2.717864 / 1.504120 (1.213744) | 2.360529 / 1.541195 (0.819335) | 2.384403 / 1.468490 
(0.915913) | 1.337512 / 4.584777 (-3.247265) | 5.734090 / 3.745712 (1.988378) | 5.344909 / 5.269862 (0.075047) | 2.906218 / 4.565676 (-1.659458) | 0.160148 / 0.424275 (-0.264127) | 0.015159 / 0.007607 (0.007551) | 0.871356 / 0.226044 (0.645312) | 8.550965 / 2.268929 (6.282037) | 3.613522 / 55.444624 (-51.831103) | 2.868508 / 6.876477 (-4.007969) | 2.912263 / 2.142072 (0.770190) | 1.652548 / 4.805227 (-3.152680) | 0.274117 / 6.500664 (-6.226547) | 0.085911 / 0.075469 (0.010442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.624798 / 1.841788 (-0.216989) | 18.413303 / 8.074308 (10.338995) | 21.742854 / 10.191392 (11.551462) | 0.255937 / 0.680424 (-0.424487) | 0.029492 / 0.534201 (-0.504709) | 0.541932 / 0.579283 (-0.037351) | 0.638594 / 0.434364 (0.204230) | 0.607427 / 0.540337 (0.067090) | 0.763046 / 1.386936 (-0.623890) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.020543 / 0.011353 (0.009190) | 0.006079 / 0.011008 (-0.004929) | 0.100558 / 0.038508 (0.062050) | 0.039474 / 0.023109 (0.016365) | 0.468889 / 0.275898 (0.192991) | 0.477731 / 0.323480 (0.154251) | 0.006999 / 0.007986 (-0.000987) | 0.005845 / 0.004328 (0.001516) | 0.110022 / 0.004250 (0.105772) | 0.056885 / 0.037052 (0.019833) | 0.447296 / 0.258489 (0.188807) | 0.489007 / 0.293841 (0.195166) | 0.055086 / 0.128546 (-0.073460) | 0.020623 / 0.075646 (-0.055024) | 0.129599 / 0.419271 (-0.289672) | 0.064316 / 0.043533 (0.020784) | 0.446681 / 0.255139 (0.191542) | 0.488897 / 0.283200 (0.205698) | 0.119121 / 0.141683 (-0.022562) | 1.836248 / 1.452155 (0.384093) | 2.002456 / 1.492716 (0.509740) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249344 / 0.018006 (0.231338) | 0.544320 / 0.000490 (0.543830) | 0.000459 / 0.000200 (0.000259) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038771 / 0.037411 (0.001359) | 0.129527 / 0.014526 (0.115002) | 0.144681 / 0.176557 (-0.031876) | 0.208237 / 0.737135 (-0.528898) | 0.149502 / 0.296338 (-0.146836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.668457 / 0.215209 (0.453248) | 6.729550 / 2.077655 (4.651895) | 2.741076 / 1.504120 (1.236956) | 2.394737 / 1.541195 (0.853542) | 2.415242 / 1.468490 (0.946752) | 1.322334 / 4.584777 (-3.262442) | 5.787454 / 3.745712 (2.041742) | 3.309847 / 5.269862 (-1.960015) | 2.199181 / 4.565676 (-2.366495) | 0.170740 / 0.424275 (-0.253535) | 0.015095 / 0.007607 (0.007487) | 0.864157 / 0.226044 (0.638112) | 8.701858 / 2.268929 (6.432929) | 3.617966 / 55.444624 (-51.826658) | 2.847144 / 6.876477 (-4.029332) | 3.011391 / 2.142072 (0.869319) | 1.595466 / 4.805227 (-3.209762) | 0.284010 / 6.500664 (-6.216654) | 0.091054 / 0.075469 (0.015585) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702404 / 1.841788 (-0.139384) | 19.427130 / 8.074308 (11.352822) | 21.900446 / 10.191392 (11.709053) | 0.244088 / 0.680424 (-0.436336) | 0.027428 / 0.534201 (-0.506773) | 0.552226 / 0.579283 (-0.027057) | 0.653102 / 0.434364 (0.218738) | 0.635379 / 0.540337 (0.095042) | 0.771842 / 1.386936 (-0.615094) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#efde2a0b9ad937defc83e0ac3f14bbb90fb5f345 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006547 / 0.011353 (-0.004806) | 0.004569 / 0.011008 (-0.006439) | 0.097782 / 0.038508 (0.059274) | 0.028157 / 0.023109 (0.005048) | 0.319017 / 0.275898 (0.043119) | 0.340758 / 0.323480 (0.017278) | 0.005078 / 0.007986 (-0.002907) | 0.003343 / 0.004328 (-0.000985) | 0.074194 / 0.004250 (0.069944) | 0.037918 / 0.037052 (0.000866) | 0.310298 / 0.258489 (0.051809) | 0.349441 / 0.293841 (0.055600) | 0.030375 / 0.128546 (-0.098171) | 0.011527 / 0.075646 (-0.064119) | 0.320499 / 0.419271 (-0.098773) | 0.042639 / 0.043533 (-0.000894) | 0.312182 / 0.255139 (0.057043) | 0.329058 / 0.283200 (0.045858) | 0.085517 / 0.141683 (-0.056165) | 1.532603 / 1.452155 (0.080448) | 1.583996 / 1.492716 (0.091279) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208286 / 0.018006 (0.190280) | 0.418696 / 0.000490 (0.418206) | 0.007051 / 0.000200 (0.006851) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024055 / 0.037411 (-0.013356) | 0.098420 / 0.014526 (0.083894) | 0.104785 / 0.176557 (-0.071771) | 0.163618 / 0.737135 (-0.573517) | 0.110006 / 0.296338 (-0.186332) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418756 / 0.215209 (0.203547) | 4.179557 / 2.077655 (2.101902) | 1.881708 / 1.504120 (0.377588) | 1.683393 / 1.541195 (0.142198) | 1.731909 / 1.468490 
(0.263419) | 0.696674 / 4.584777 (-3.888103) | 3.384167 / 3.745712 (-0.361545) | 3.173479 / 5.269862 (-2.096382) | 1.620019 / 4.565676 (-2.945658) | 0.082850 / 0.424275 (-0.341426) | 0.012396 / 0.007607 (0.004789) | 0.519743 / 0.226044 (0.293699) | 5.208480 / 2.268929 (2.939552) | 2.312917 / 55.444624 (-53.131708) | 1.963486 / 6.876477 (-4.912991) | 2.084553 / 2.142072 (-0.057519) | 0.805486 / 4.805227 (-3.999742) | 0.153429 / 6.500664 (-6.347235) | 0.069451 / 0.075469 (-0.006018) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197185 / 1.841788 (-0.644603) | 14.341005 / 8.074308 (6.266696) | 14.476162 / 10.191392 (4.284770) | 0.157372 / 0.680424 (-0.523052) | 0.016444 / 0.534201 (-0.517757) | 0.383721 / 0.579283 (-0.195562) | 0.380800 / 0.434364 (-0.053564) | 0.441137 / 0.540337 (-0.099200) | 0.524778 / 1.386936 (-0.862158) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006728 / 0.011353 (-0.004625) | 0.004536 / 0.011008 (-0.006472) | 0.076266 / 0.038508 (0.037757) | 0.028133 / 0.023109 (0.005024) | 0.351072 / 0.275898 (0.075174) | 0.375823 / 0.323480 (0.052344) | 0.005166 / 0.007986 (-0.002819) | 0.004717 / 0.004328 (0.000388) | 0.076130 / 0.004250 (0.071880) | 0.041354 / 0.037052 (0.004301) | 0.345904 / 0.258489 (0.087415) | 0.384119 / 0.293841 (0.090278) | 0.030759 / 0.128546 (-0.097787) | 0.011659 / 0.075646 (-0.063988) | 0.085269 / 0.419271 (-0.334002) | 0.042161 / 0.043533 (-0.001372) | 0.340806 / 0.255139 (0.085667) | 0.366832 / 0.283200 (0.083632) | 0.092187 / 0.141683 (-0.049495) | 1.520035 / 1.452155 (0.067880) | 1.603856 / 1.492716 (0.111140) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237763 / 0.018006 (0.219757) | 0.413406 / 0.000490 (0.412916) | 0.000415 / 0.000200 (0.000215) | 0.000060 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026095 / 0.037411 (-0.011317) | 0.105775 / 0.014526 (0.091249) | 0.108452 / 0.176557 (-0.068105) | 0.160014 / 0.737135 (-0.577122) | 0.112385 / 0.296338 (-0.183953) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437327 / 0.215209 (0.222118) | 4.374949 / 2.077655 (2.297294) | 2.090292 / 1.504120 (0.586172) | 1.885946 / 1.541195 (0.344752) | 1.946768 / 1.468490 (0.478278) | 0.704124 / 4.584777 (-3.880653) | 3.394994 / 3.745712 (-0.350718) | 1.905189 / 5.269862 (-3.364673) | 1.182300 / 4.565676 (-3.383376) | 0.082920 / 0.424275 (-0.341355) | 0.012781 / 0.007607 (0.005174) | 0.535467 / 0.226044 (0.309423) | 5.362799 / 2.268929 (3.093870) | 2.504825 / 55.444624 (-52.939799) | 2.180458 / 6.876477 (-4.696019) | 2.317750 / 2.142072 (0.175677) | 0.811182 / 4.805227 (-3.994045) | 0.151654 / 6.500664 (-6.349010) | 0.067925 / 0.075469 (-0.007544) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290746 / 1.841788 (-0.551042) | 14.799309 / 8.074308 (6.725001) | 14.439722 / 10.191392 (4.248330) | 0.144358 / 0.680424 (-0.536066) | 0.016688 / 0.534201 (-0.517513) | 0.392907 / 0.579283 (-0.186376) | 0.383109 / 0.434364 (-0.051255) | 0.450069 / 0.540337 (-0.090269) | 0.532534 / 1.386936 (-0.854402) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#87c061032972509a2a1b4103763e62fb74912128 \"CML watermark\")\n",
"I turned it into a draft to fix the failing tests, but CI is now green, so there is no good reason for it :)"
] | 2023-04-14T14:13:59 | 2023-04-20T14:43:20 | 2023-04-20T14:40:34 | CONTRIBUTOR | null | Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Pandas.
(Reported in https://github.com/huggingface/datasets/issues/5719#issuecomment-1507579671) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5751/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5751",
"html_url": "https://github.com/huggingface/datasets/pull/5751",
"diff_url": "https://github.com/huggingface/datasets/pull/5751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5751.patch",
"merged_at": "2023-04-20T14:40:34"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5750/comments | https://api.github.com/repos/huggingface/datasets/issues/5750/events | https://github.com/huggingface/datasets/issues/5750 | 1,668,289,067 | I_kwDODunzps5jcBIr | 5,750 | Fail to create datasets from a generator when using Google Big Query | {
"login": "ivanprado",
"id": 895720,
"node_id": "MDQ6VXNlcjg5NTcyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/895720?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivanprado",
"html_url": "https://github.com/ivanprado",
"followers_url": "https://api.github.com/users/ivanprado/followers",
"following_url": "https://api.github.com/users/ivanprado/following{/other_user}",
"gists_url": "https://api.github.com/users/ivanprado/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivanprado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivanprado/subscriptions",
"organizations_url": "https://api.github.com/users/ivanprado/orgs",
"repos_url": "https://api.github.com/users/ivanprado/repos",
"events_url": "https://api.github.com/users/ivanprado/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivanprado/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`from_generator` expects a generator function, not a generator object, so this should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(rows)\r\n\r\nfor r in ds:\r\n print(r)\r\n```",
"@mariosasko your code was incomplete, so I tried to fix it:\r\n\r\n```py\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\nclient = bigquery.Client()\r\n\r\ndef gen():\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nThe error is also present in this case:\r\n\r\n```\r\n_pickle.PicklingError: Pickling client objects is explicitly not supported.\r\nClients have non-trivial state that is local and unpickleable.\r\n```\r\n\r\nI think it doesn't matter if the generator is an object or a function. The problem is that the generator is referencing an object that is not pickable (the client in this case). ",
"It does matter: this function expects a generator function, as stated in the docs.\r\n\r\nThis should work:\r\n```python\r\nfrom datasets import Dataset\r\nfrom google.cloud import bigquery\r\n\r\ndef gen():\r\n client = bigquery.Client()\r\n # Perform a query.\r\n QUERY = (\r\n 'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '\r\n 'WHERE state = \"TX\" '\r\n 'LIMIT 100')\r\n query_job = client.query(QUERY) # API request\r\n yield from query_job.result() # Waits for query to finish\r\n\r\nds = Dataset.from_generator(gen)\r\n\r\nfor r in ds:\r\n print(r)\r\n```\r\n\r\nWe could allow passing non-picklable objects and use a random hash for the generated arrow file. In that case, the caching mechanism would not work, meaning repeated calls with the same set of arguments would generate new datasets instead of reusing the cached version, but this behavior is still better than raising an error.",
"Thank you @mariosasko . Your last code is working indeed. Curiously, the important detail here was to wrap the client instantiation within the generator itself. If the line `client = bigquery.Client()` is moved outside, then the error is back.\r\n\r\nI see now also your point in regard to the generator being a generator function. We can close the issue if you want."
] | 2023-04-14T13:50:59 | 2023-04-17T12:20:43 | 2023-04-17T12:20:43 | NONE | null | ### Describe the bug
Creating a dataset from a generator with `Dataset.from_generator()` fails if the generator comes from the [Google BigQuery Python client](https://cloud.google.com/python/docs/reference/bigquery/latest). The problem is that the BigQuery client is not picklable, and the function `create_config_id` tries to hash the generator by pickling it, so the following error is raised:
```
_pickle.PicklingError: Pickling client objects is explicitly not supported.
Clients have non-trivial state that is local and unpickleable.
```
### Steps to reproduce the bug
1. Install the BigQuery client and datasets: `pip install google-cloud-bigquery datasets`
2. Run the following code:
```py
from datasets import Dataset
from google.cloud import bigquery
client = bigquery.Client()
# Perform a query.
QUERY = (
'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
'WHERE state = "TX" '
'LIMIT 100')
query_job = client.query(QUERY) # API request
rows = query_job.result() # Waits for query to finish
ds = Dataset.from_generator(rows)
for r in ds:
print(r)
```
### Expected behavior
Two options:
1. Ignore the pickle errors when computing the hash
2. Provide an escape hatch so that we can avoid calculating the hash for the generator. For example, allowing the user to provide a hash.
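
In the meantime, a workaround confirmed in this thread's comments is to pass a generator *function* and create the BigQuery client inside it, so the unpicklable client is never part of what gets pickled for the hash. A minimal sketch, reusing the query from the reproduction above:

```python
from datasets import Dataset
from google.cloud import bigquery

def gen():
    # Creating the client inside the generator function keeps the
    # unpicklable client object out of the pickling-based hash.
    client = bigquery.Client()
    QUERY = (
        'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` '
        'WHERE state = "TX" '
        'LIMIT 100')
    query_job = client.query(QUERY)  # API request
    yield from query_job.result()    # Waits for the query to finish

ds = Dataset.from_generator(gen)
```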
### Environment info
python 3.9
google-cloud-bigquery 3.9.0
datasets 2.11.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5750/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5749/comments | https://api.github.com/repos/huggingface/datasets/issues/5749/events | https://github.com/huggingface/datasets/issues/5749 | 1,668,016,321 | I_kwDODunzps5ja-jB | 5,749 | AttributeError: 'Version' object has no attribute 'match' | {
"login": "gulnaz-zh",
"id": 54584290,
"node_id": "MDQ6VXNlcjU0NTg0Mjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/54584290?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gulnaz-zh",
"html_url": "https://github.com/gulnaz-zh",
"followers_url": "https://api.github.com/users/gulnaz-zh/followers",
"following_url": "https://api.github.com/users/gulnaz-zh/following{/other_user}",
"gists_url": "https://api.github.com/users/gulnaz-zh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gulnaz-zh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gulnaz-zh/subscriptions",
"organizations_url": "https://api.github.com/users/gulnaz-zh/orgs",
"repos_url": "https://api.github.com/users/gulnaz-zh/repos",
"events_url": "https://api.github.com/users/gulnaz-zh/events{/privacy}",
"received_events_url": "https://api.github.com/users/gulnaz-zh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I got the same error, and the official website for visual genome is down. Did you solve this problem? ",
"I am in the same situation now :( ",
"Thanks for reporting, @gulnaz-zh.\r\n\r\nI am investigating it.",
"The host server is down: https://visualgenome.org/\r\n\r\nWe are contacting the dataset authors.",
"Apart form data host server being down, there is an additional issue with the `datasets` library introduced by this PR:\r\n- #5238\r\n\r\nI am working to fix it.",
"PR that fixes the AttributeError: https://huggingface.co/datasets/visual_genome/discussions/2",
"For the issue with their data host server being down, I have opened a discussion in the \"Community\" tab of the Hub dataset: https://huggingface.co/datasets/visual_genome/discussions/3\r\nLet's continue the discussion there.",
"The authors just replied to us with their new URL: https://homes.cs.washington.edu/~ranjay/visualgenome/\r\n\r\nWe have fixed the datasets loading script, which is operative again."
] | 2023-04-14T10:48:06 | 2023-06-30T11:31:17 | 2023-04-18T12:57:08 | NONE | null | ### Describe the bug
When I run

```python
from datasets import load_dataset

data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')
```

I get:

```
AttributeError: 'Version' object has no attribute 'match'
```
### Steps to reproduce the bug
```python
from datasets import load_dataset

data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')
```
### Expected behavior
This is the error trace:

```
Downloading and preparing dataset visual_genome/region_descriptions_v1.2.0 to C:/Users/Acer/.cache/huggingface/datasets/visual_genome/region_descriptions_v1.2.0/1.2.0/136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 data = load_dataset("visual_genome", 'region_descriptions_v1.2.0')
File ~\.conda\envs\aai\Lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1790 # Download and prepare data
-> 1791 builder_instance.download_and_prepare(
1792 download_config=download_config,
1793 download_mode=download_mode,
1794 verification_mode=verification_mode,
1795 try_from_hf_gcs=try_from_hf_gcs,
1796 num_proc=num_proc,
1797 storage_options=storage_options,
1798 )
1800 # Build dataset for splits
1801 keep_in_memory = (
1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1803 )
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
889 if num_proc is not None:
890 prepare_split_kwargs["num_proc"] = num_proc
--> 891 self._download_and_prepare(
892 dl_manager=dl_manager,
893 verification_mode=verification_mode,
894 **prepare_split_kwargs,
895 **download_and_prepare_kwargs,
896 )
897 # Sync info
898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:1651, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1651 super()._download_and_prepare(
1652 dl_manager,
1653 verification_mode,
1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS
1655 or verification_mode == VerificationMode.ALL_CHECKS,
1656 **prepare_splits_kwargs,
1657 )
File ~\.conda\envs\aai\Lib\site-packages\datasets\builder.py:964, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
962 split_dict = SplitDict(dataset_name=self.name)
963 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 964 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
966 # Checksums verification
967 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums:
File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:377, in VisualGenome._split_generators(self, dl_manager)
375 def _split_generators(self, dl_manager):
376 # Download image meta datas.
--> 377 image_metadatas_dir = dl_manager.download_and_extract(self.config.image_metadata_url)
378 image_metadatas_file = os.path.join(
379 image_metadatas_dir, _get_decompressed_filename_from_url(self.config.image_metadata_url)
380 )
382 # Download annotations
File ~\.cache\huggingface\modules\datasets_modules\datasets\visual_genome\136fe5b83f6691884566c5530313288171e053a3b33bfe3ea2e4c8b39abaf7f3\visual_genome.py:328, in VisualGenomeConfig.image_metadata_url(self)
326 @property
327 def image_metadata_url(self):
--> 328 if not self.version.match(_LATEST_VERSIONS["image_metadata"]):
329 logger.warning(
330 f"Latest image metadata version is {_LATEST_VERSIONS['image_metadata']}. Trying to generate a dataset of version: {self.version}. Please double check that image data are unchanged between the two versions."
331 )
    332 return f"{_BASE_ANNOTATION_URL}/image_data.json.zip"
```
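
For context, the crash happens because the Visual Genome loading script calls `self.version.match(...)` on a `Version` object that no longer exposes a `match` method (see the comments above about the change introduced in #5238). Below is a hedged, illustrative sketch of an equivalent version check using the standard `packaging` library — not necessarily how the Hub script was actually fixed, and the latest-version value is assumed:

```python
from packaging import version

_LATEST_IMAGE_METADATA_VERSION = "1.2.0"  # assumed value, for illustration only

def warn_if_not_latest(config_version: str) -> None:
    # Compare parsed versions directly instead of relying on a Version.match() helper.
    if version.parse(config_version) != version.parse(_LATEST_IMAGE_METADATA_VERSION):
        print(
            f"Latest image metadata version is {_LATEST_IMAGE_METADATA_VERSION}. "
            f"Trying to generate a dataset of version: {config_version}."
        )
```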
### Environment info
datasets 2.11.0
python 3.11.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5749/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5749/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5748/comments | https://api.github.com/repos/huggingface/datasets/issues/5748/events | https://github.com/huggingface/datasets/pull/5748 | 1,667,517,024 | PR_kwDODunzps5OSgNH | 5,748 | [BUG FIX] Issue 5739 | {
"login": "ericxsun",
"id": 1772912,
"node_id": "MDQ6VXNlcjE3NzI5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericxsun",
"html_url": "https://github.com/ericxsun",
"followers_url": "https://api.github.com/users/ericxsun/followers",
"following_url": "https://api.github.com/users/ericxsun/following{/other_user}",
"gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions",
"organizations_url": "https://api.github.com/users/ericxsun/orgs",
"repos_url": "https://api.github.com/users/ericxsun/repos",
"events_url": "https://api.github.com/users/ericxsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ericxsun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-04-14T05:07:31 | 2023-04-14T05:07:31 | null | NONE | null | A fix for https://github.com/huggingface/datasets/issues/5739 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5748/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5748",
"html_url": "https://github.com/huggingface/datasets/pull/5748",
"diff_url": "https://github.com/huggingface/datasets/pull/5748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5748.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5747/comments | https://api.github.com/repos/huggingface/datasets/issues/5747/events | https://github.com/huggingface/datasets/pull/5747 | 1,667,270,412 | PR_kwDODunzps5ORtBF | 5,747 | [WIP] Add Dataset.to_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-04-13T23:20:03 | 2023-05-05T12:31:10 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5747/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5747",
"html_url": "https://github.com/huggingface/datasets/pull/5747",
"diff_url": "https://github.com/huggingface/datasets/pull/5747.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5747.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5746/comments | https://api.github.com/repos/huggingface/datasets/issues/5746/events | https://github.com/huggingface/datasets/pull/5746 | 1,667,102,459 | PR_kwDODunzps5ORIUU | 5,746 | Fix link in docs | {
"login": "bbbxyz",
"id": 7485661,
"node_id": "MDQ6VXNlcjc0ODU2NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7485661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bbbxyz",
"html_url": "https://github.com/bbbxyz",
"followers_url": "https://api.github.com/users/bbbxyz/followers",
"following_url": "https://api.github.com/users/bbbxyz/following{/other_user}",
"gists_url": "https://api.github.com/users/bbbxyz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bbbxyz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bbbxyz/subscriptions",
"organizations_url": "https://api.github.com/users/bbbxyz/orgs",
"repos_url": "https://api.github.com/users/bbbxyz/repos",
"events_url": "https://api.github.com/users/bbbxyz/events{/privacy}",
"received_events_url": "https://api.github.com/users/bbbxyz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006461 / 0.011353 (-0.004892) | 0.004671 / 0.011008 (-0.006337) | 0.097329 / 0.038508 (0.058821) | 0.028380 / 0.023109 (0.005270) | 0.369892 / 0.275898 (0.093994) | 0.398244 / 0.323480 (0.074764) | 0.004795 / 0.007986 (-0.003190) | 0.004866 / 0.004328 (0.000538) | 0.075060 / 0.004250 (0.070809) | 0.035678 / 0.037052 (-0.001374) | 0.372197 / 0.258489 (0.113708) | 0.407509 / 0.293841 (0.113668) | 0.031557 / 0.128546 (-0.096989) | 0.011608 / 0.075646 (-0.064038) | 0.325467 / 0.419271 (-0.093805) | 0.042590 / 0.043533 (-0.000943) | 0.373738 / 0.255139 (0.118599) | 0.395793 / 0.283200 (0.112593) | 0.082335 / 0.141683 (-0.059348) | 1.471582 / 1.452155 (0.019427) | 1.535834 / 1.492716 (0.043117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192432 / 0.018006 (0.174426) | 0.404423 / 0.000490 (0.403933) | 0.003252 / 0.000200 (0.003052) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025312 / 0.037411 (-0.012099) | 0.099964 / 0.014526 (0.085438) | 0.108779 / 0.176557 (-0.067777) | 0.170438 / 0.737135 (-0.566697) | 0.110116 / 0.296338 (-0.186223) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420402 / 0.215209 (0.205193) | 4.179142 / 2.077655 (2.101487) | 1.858114 / 1.504120 (0.353994) | 1.674452 / 1.541195 (0.133257) | 1.697839 / 1.468490 
(0.229349) | 0.694707 / 4.584777 (-3.890070) | 3.394321 / 3.745712 (-0.351391) | 1.918437 / 5.269862 (-3.351425) | 1.277954 / 4.565676 (-3.287723) | 0.082357 / 0.424275 (-0.341918) | 0.012206 / 0.007607 (0.004598) | 0.522093 / 0.226044 (0.296049) | 5.239604 / 2.268929 (2.970675) | 2.347764 / 55.444624 (-53.096860) | 1.996864 / 6.876477 (-4.879613) | 2.050820 / 2.142072 (-0.091253) | 0.806110 / 4.805227 (-3.999118) | 0.151061 / 6.500664 (-6.349603) | 0.066438 / 0.075469 (-0.009031) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211233 / 1.841788 (-0.630554) | 14.054422 / 8.074308 (5.980114) | 14.110141 / 10.191392 (3.918749) | 0.129962 / 0.680424 (-0.550462) | 0.017271 / 0.534201 (-0.516930) | 0.386410 / 0.579283 (-0.192873) | 0.392648 / 0.434364 (-0.041716) | 0.444940 / 0.540337 (-0.095398) | 0.533535 / 1.386936 (-0.853401) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006865 / 0.011353 (-0.004488) | 0.004662 / 0.011008 (-0.006346) | 0.077837 / 0.038508 (0.039329) | 0.028258 / 0.023109 (0.005149) | 0.346136 / 0.275898 (0.070238) | 0.380414 / 0.323480 (0.056934) | 0.005039 / 0.007986 (-0.002947) | 0.004967 / 0.004328 (0.000638) | 0.077774 / 0.004250 (0.073523) | 0.037504 / 0.037052 (0.000452) | 0.341550 / 0.258489 (0.083061) | 0.382494 / 0.293841 (0.088653) | 0.031881 / 0.128546 (-0.096665) | 0.011746 / 0.075646 (-0.063901) | 0.087087 / 0.419271 (-0.332185) | 0.043108 / 0.043533 (-0.000425) | 0.344103 / 0.255139 (0.088964) | 0.366613 / 0.283200 (0.083413) | 0.090399 / 0.141683 (-0.051284) | 1.492675 / 1.452155 (0.040520) | 1.588666 / 1.492716 (0.095950) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191859 / 0.018006 (0.173853) | 0.412514 / 0.000490 (0.412025) | 0.001953 / 0.000200 (0.001753) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025159 / 0.037411 (-0.012252) | 0.100125 / 0.014526 (0.085599) | 0.106000 / 0.176557 (-0.070556) | 0.160710 / 0.737135 (-0.576425) | 0.110449 / 0.296338 (-0.185889) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436636 / 0.215209 (0.221427) | 4.364597 / 2.077655 (2.286942) | 2.077492 / 1.504120 (0.573372) | 1.868248 / 1.541195 (0.327053) | 1.911218 / 1.468490 (0.442728) | 0.700306 / 4.584777 (-3.884471) | 3.385428 / 3.745712 (-0.360284) | 2.965384 / 5.269862 (-2.304478) | 1.522093 / 4.565676 (-3.043583) | 0.082805 / 0.424275 (-0.341470) | 0.012432 / 0.007607 (0.004825) | 0.538478 / 0.226044 (0.312433) | 5.383207 / 2.268929 (3.114278) | 2.525177 / 55.444624 (-52.919447) | 2.179632 / 6.876477 (-4.696845) | 2.280768 / 2.142072 (0.138695) | 0.805869 / 4.805227 (-3.999358) | 0.152716 / 6.500664 (-6.347948) | 0.067848 / 0.075469 (-0.007621) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.318899 / 1.841788 (-0.522889) | 14.416310 / 8.074308 (6.342002) | 14.172804 / 10.191392 (3.981412) | 0.141729 / 0.680424 (-0.538695) | 0.016785 / 0.534201 (-0.517416) | 0.378626 / 0.579283 (-0.200657) | 0.387153 / 0.434364 (-0.047211) | 0.439950 / 0.540337 (-0.100388) | 0.523958 / 1.386936 (-0.862978) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7c3a9b057c476c40d157bd7a5d57f49066239df0 \"CML watermark\")\n"
] | 2023-04-13T20:45:19 | 2023-04-14T13:15:38 | 2023-04-14T13:08:42 | CONTRIBUTOR | null | Fixes a broken link in the use_with_pytorch docs | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5746/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5746",
"html_url": "https://github.com/huggingface/datasets/pull/5746",
"diff_url": "https://github.com/huggingface/datasets/pull/5746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5746.patch",
"merged_at": "2023-04-14T13:08:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5745/comments | https://api.github.com/repos/huggingface/datasets/issues/5745/events | https://github.com/huggingface/datasets/pull/5745 | 1,667,086,143 | PR_kwDODunzps5ORE2n | 5,745 | [BUG FIX] Issue 5744 | {
"login": "keyboardAnt",
"id": 15572698,
"node_id": "MDQ6VXNlcjE1NTcyNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keyboardAnt",
"html_url": "https://github.com/keyboardAnt",
"followers_url": "https://api.github.com/users/keyboardAnt/followers",
"following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}",
"gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions",
"organizations_url": "https://api.github.com/users/keyboardAnt/orgs",
"repos_url": "https://api.github.com/users/keyboardAnt/repos",
"events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/keyboardAnt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Have met the same problem with datasets==2.8.0, pandas==2.0.0. It could be solved by installing the latest version of datasets or using datasets==2.8.0, pandas==1.5.3.",
"Pandas 2.0.0 has removed support to passing `mangle_dupe_cols`.\r\n\r\nHowever, our `datasets` library does not use this parameter: it only passes it to pandas if the user passes it to `load_dataset`.\r\n\r\nYou should better:\r\n- Either \"take steps to stop the use of 'mangle_dupe_cols'\" (as it was suggested in the deprecation warning in pandas-1.5.3)\r\n- Or pin pandas (< 2.0.0) in your local requirements file\r\n\r\nPlease note that from `datasets` library, we don't want to force users to use a specific pandas version. We would like to support users as well:\r\n- that use pandas < 1.5.3\r\n- that use pandas >= 2.0.0 and that do not pass the 'mangle_dupe_cols' parameter",
"`datasets` 2.11 doesn't pass `mangle_dupe_cols` unless the user specifies it indeed, so I think we're fine"
] | 2023-04-13T20:29:55 | 2023-04-21T15:22:43 | null | NONE | null | A temporary fix for https://github.com/huggingface/datasets/issues/5744. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5745/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5745",
"html_url": "https://github.com/huggingface/datasets/pull/5745",
"diff_url": "https://github.com/huggingface/datasets/pull/5745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5745.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5744/comments | https://api.github.com/repos/huggingface/datasets/issues/5744/events | https://github.com/huggingface/datasets/issues/5744 | 1,667,076,620 | I_kwDODunzps5jXZIM | 5,744 | [BUG] With Pandas 2.0.0, `load_dataset` raises `TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'` | {
"login": "keyboardAnt",
"id": 15572698,
"node_id": "MDQ6VXNlcjE1NTcyNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/15572698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keyboardAnt",
"html_url": "https://github.com/keyboardAnt",
"followers_url": "https://api.github.com/users/keyboardAnt/followers",
"following_url": "https://api.github.com/users/keyboardAnt/following{/other_user}",
"gists_url": "https://api.github.com/users/keyboardAnt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keyboardAnt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keyboardAnt/subscriptions",
"organizations_url": "https://api.github.com/users/keyboardAnt/orgs",
"repos_url": "https://api.github.com/users/keyboardAnt/repos",
"events_url": "https://api.github.com/users/keyboardAnt/events{/privacy}",
"received_events_url": "https://api.github.com/users/keyboardAnt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @keyboardAnt.\r\n\r\nWe haven't noticed any crash in our CI tests. Could you please indicate specifically the `load_dataset` command that crashes in your side, so that we can reproduce it?",
"This has been fixed in `datasets` 2.11"
] | 2023-04-13T20:21:28 | 2023-07-06T17:01:59 | 2023-07-06T17:01:59 | NONE | null | The `load_dataset` function with Pandas `1.5.3` has no issue (just a FutureWarning) but crashes with Pandas `2.0.0`.
For your convenience, I opened a draft Pull Request to fix it quickly: https://github.com/huggingface/datasets/pull/5745
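
As noted in the discussion on the linked draft PR (#5745, above), `datasets` only forwards `mangle_dupe_cols` to `pandas.read_csv` when the user passes it, and pandas 2.0.0 removed that keyword. The exact failing call was not included in this report, so the sketch below is illustrative only (the file name and keyword usage are assumptions); the workarounds are to stop passing the deprecated keyword or to pin `pandas<2.0.0`:

```python
from datasets import load_dataset

# Illustrative: under pandas>=2.0.0 this raises
#   TypeError: read_csv() got an unexpected keyword argument 'mangle_dupe_cols'
# ds = load_dataset("csv", data_files="data.csv", mangle_dupe_cols=True)

# Workaround 1: simply omit the deprecated keyword.
ds = load_dataset("csv", data_files="data.csv")

# Workaround 2: pin the dependency instead, e.g. in requirements.txt:
#   pandas<2.0.0
```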
---
* The FutureWarning mentioned above:
```
FutureWarning: the 'mangle_dupe_cols' keyword is deprecated and will be removed in a future version. Please take steps to stop the use of 'mangle_dupe_cols'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5744/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5744/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5743/comments | https://api.github.com/repos/huggingface/datasets/issues/5743/events | https://github.com/huggingface/datasets/issues/5743 | 1,666,843,832 | I_kwDODunzps5jWgS4 | 5,743 | dataclass.py in virtual environment is overriding the stdlib module "dataclasses" | {
"login": "syedabdullahhassan",
"id": 71216295,
"node_id": "MDQ6VXNlcjcxMjE2Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/71216295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/syedabdullahhassan",
"html_url": "https://github.com/syedabdullahhassan",
"followers_url": "https://api.github.com/users/syedabdullahhassan/followers",
"following_url": "https://api.github.com/users/syedabdullahhassan/following{/other_user}",
"gists_url": "https://api.github.com/users/syedabdullahhassan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/syedabdullahhassan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/syedabdullahhassan/subscriptions",
"organizations_url": "https://api.github.com/users/syedabdullahhassan/orgs",
"repos_url": "https://api.github.com/users/syedabdullahhassan/repos",
"events_url": "https://api.github.com/users/syedabdullahhassan/events{/privacy}",
"received_events_url": "https://api.github.com/users/syedabdullahhassan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We no longer depend on `dataclasses` (for almost a year), so I don't think our package is the problematic one. \r\n\r\nI think it makes more sense to raise this issue in the `dataclasses` repo: https://github.com/ericvsmith/dataclasses."
] | 2023-04-13T17:28:33 | 2023-04-17T12:23:18 | 2023-04-17T12:23:18 | NONE | null | ### Describe the bug
"e:\Krish_naik\FSDSRegression\venv\Lib\dataclasses.py" is overriding the stdlib module "dataclasses"
### Steps to reproduce the bug
module issue
### Expected behavior
overriding the stdlib module "dataclasses"
### Environment info
VS code | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5743/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5742/comments | https://api.github.com/repos/huggingface/datasets/issues/5742/events | https://github.com/huggingface/datasets/pull/5742 | 1,666,209,738 | PR_kwDODunzps5OOH-W | 5,742 | Warning specifying future change in to_tf_dataset behaviour | {
"login": "amyeroberts",
"id": 22614925,
"node_id": "MDQ6VXNlcjIyNjE0OTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/22614925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amyeroberts",
"html_url": "https://github.com/amyeroberts",
"followers_url": "https://api.github.com/users/amyeroberts/followers",
"following_url": "https://api.github.com/users/amyeroberts/following{/other_user}",
"gists_url": "https://api.github.com/users/amyeroberts/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amyeroberts/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amyeroberts/subscriptions",
"organizations_url": "https://api.github.com/users/amyeroberts/orgs",
"repos_url": "https://api.github.com/users/amyeroberts/repos",
"events_url": "https://api.github.com/users/amyeroberts/events{/privacy}",
"received_events_url": "https://api.github.com/users/amyeroberts/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006693 / 0.011353 (-0.004660) | 0.004586 / 0.011008 (-0.006422) | 0.097238 / 0.038508 (0.058730) | 0.027912 / 0.023109 (0.004802) | 0.347339 / 0.275898 (0.071441) | 0.393847 / 0.323480 (0.070368) | 0.005105 / 0.007986 (-0.002880) | 0.004750 / 0.004328 (0.000422) | 0.074671 / 0.004250 (0.070421) | 0.037912 / 0.037052 (0.000860) | 0.368973 / 0.258489 (0.110483) | 0.403983 / 0.293841 (0.110142) | 0.030817 / 0.128546 (-0.097730) | 0.011813 / 0.075646 (-0.063833) | 0.324470 / 0.419271 (-0.094802) | 0.044232 / 0.043533 (0.000699) | 0.347623 / 0.255139 (0.092484) | 0.382458 / 0.283200 (0.099259) | 0.086603 / 0.141683 (-0.055080) | 1.485778 / 1.452155 (0.033623) | 1.549776 / 1.492716 (0.057059) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200154 / 0.018006 (0.182147) | 0.440645 / 0.000490 (0.440155) | 0.003664 / 0.000200 (0.003464) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023635 / 0.037411 (-0.013776) | 0.094969 / 0.014526 (0.080443) | 0.103630 / 0.176557 (-0.072927) | 0.168655 / 0.737135 (-0.568480) | 0.105850 / 0.296338 (-0.190488) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425224 / 0.215209 (0.210015) | 4.236618 / 2.077655 (2.158963) | 1.917091 / 1.504120 (0.412971) | 1.746984 / 1.541195 (0.205789) | 1.817766 / 1.468490 
(0.349276) | 0.700989 / 4.584777 (-3.883788) | 3.412577 / 3.745712 (-0.333135) | 3.049311 / 5.269862 (-2.220551) | 1.607692 / 4.565676 (-2.957984) | 0.083410 / 0.424275 (-0.340865) | 0.012601 / 0.007607 (0.004994) | 0.528244 / 0.226044 (0.302200) | 5.284134 / 2.268929 (3.015206) | 2.391885 / 55.444624 (-53.052740) | 2.020018 / 6.876477 (-4.856459) | 2.105908 / 2.142072 (-0.036164) | 0.801262 / 4.805227 (-4.003965) | 0.151467 / 6.500664 (-6.349197) | 0.066529 / 0.075469 (-0.008940) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.203894 / 1.841788 (-0.637894) | 13.827561 / 8.074308 (5.753253) | 14.136730 / 10.191392 (3.945338) | 0.143829 / 0.680424 (-0.536595) | 0.016410 / 0.534201 (-0.517791) | 0.378194 / 0.579283 (-0.201089) | 0.391235 / 0.434364 (-0.043129) | 0.439261 / 0.540337 (-0.101076) | 0.527181 / 1.386936 (-0.859755) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006639 / 0.011353 (-0.004714) | 0.004469 / 0.011008 (-0.006540) | 0.076495 / 0.038508 (0.037987) | 0.027880 / 0.023109 (0.004771) | 0.342807 / 0.275898 (0.066909) | 0.374258 / 0.323480 (0.050778) | 0.005543 / 0.007986 (-0.002443) | 0.003362 / 0.004328 (-0.000966) | 0.075064 / 0.004250 (0.070813) | 0.039209 / 0.037052 (0.002156) | 0.342490 / 0.258489 (0.084001) | 0.382135 / 0.293841 (0.088294) | 0.030356 / 0.128546 (-0.098191) | 0.011762 / 0.075646 (-0.063884) | 0.086031 / 0.419271 (-0.333241) | 0.041991 / 0.043533 (-0.001542) | 0.340323 / 0.255139 (0.085184) | 0.364160 / 0.283200 (0.080961) | 0.088483 / 0.141683 (-0.053200) | 1.502836 / 1.452155 (0.050681) | 1.570438 / 1.492716 (0.077722) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218486 / 0.018006 (0.200480) | 0.405251 / 0.000490 (0.404761) | 0.000398 / 0.000200 (0.000198) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025738 / 0.037411 (-0.011673) | 0.100390 / 0.014526 (0.085864) | 0.109913 / 0.176557 (-0.066644) | 0.161310 / 0.737135 (-0.575826) | 0.113269 / 0.296338 (-0.183069) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438083 / 0.215209 (0.222874) | 4.377742 / 2.077655 (2.300087) | 2.069949 / 1.504120 (0.565829) | 1.857807 / 1.541195 (0.316613) | 1.881315 / 1.468490 (0.412825) | 0.695373 / 4.584777 (-3.889404) | 3.440287 / 3.745712 (-0.305425) | 1.842888 / 5.269862 (-3.426973) | 1.146655 / 4.565676 (-3.419022) | 0.083386 / 0.424275 (-0.340889) | 0.012290 / 0.007607 (0.004683) | 0.545672 / 0.226044 (0.319628) | 5.469568 / 2.268929 (3.200639) | 2.511886 / 55.444624 (-52.932739) | 2.184210 / 6.876477 (-4.692267) | 2.329822 / 2.142072 (0.187749) | 0.804114 / 4.805227 (-4.001114) | 0.151651 / 6.500664 (-6.349013) | 0.067269 / 0.075469 (-0.008200) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272564 / 1.841788 (-0.569223) | 14.180708 / 8.074308 (6.106400) | 14.181657 / 10.191392 (3.990265) | 0.131443 / 0.680424 (-0.548981) | 0.016513 / 0.534201 (-0.517688) | 0.383786 / 0.579283 (-0.195497) | 0.397678 / 0.434364 (-0.036686) | 0.447003 / 0.540337 (-0.093334) | 0.539453 / 1.386936 (-0.847483) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#649d5a3315f9e7666713b6affe318ee00c7163a0 \"CML watermark\")\n"
] | 2023-04-13T11:10:00 | 2023-04-21T13:18:14 | 2023-04-21T13:11:09 | CONTRIBUTOR | null | Warning specifying future changes happening to `to_tf_dataset` behaviour when #5602 is merged in | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5742/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5742",
"html_url": "https://github.com/huggingface/datasets/pull/5742",
"diff_url": "https://github.com/huggingface/datasets/pull/5742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5742.patch",
"merged_at": "2023-04-21T13:11:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5741/comments | https://api.github.com/repos/huggingface/datasets/issues/5741/events | https://github.com/huggingface/datasets/pull/5741 | 1,665,860,919 | PR_kwDODunzps5OM9nZ | 5,741 | Fix CI warnings | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 
(0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9615e5af75b190c4e7b66792f9ba444f352765a0 \"CML watermark\")\n"
] | 2023-04-13T07:17:02 | 2023-04-13T09:48:10 | 2023-04-13T09:40:50 | MEMBER | null | Fix warnings in our CI tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5741/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5741",
"html_url": "https://github.com/huggingface/datasets/pull/5741",
"diff_url": "https://github.com/huggingface/datasets/pull/5741.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5741.patch",
"merged_at": "2023-04-13T09:40:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5740/comments | https://api.github.com/repos/huggingface/datasets/issues/5740/events | https://github.com/huggingface/datasets/pull/5740 | 1,664,132,130 | PR_kwDODunzps5OHI08 | 5,740 | Fix CI mock filesystem fixtures | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007003 / 0.011353 (-0.004350) | 0.004854 / 0.011008 (-0.006154) | 0.096982 / 0.038508 (0.058474) | 0.033218 / 0.023109 (0.010109) | 0.314088 / 0.275898 (0.038190) | 0.351315 / 0.323480 (0.027835) | 0.005679 / 0.007986 (-0.002307) | 0.005404 / 0.004328 (0.001075) | 0.071773 / 0.004250 (0.067522) | 0.044593 / 0.037052 (0.007540) | 0.323643 / 0.258489 (0.065154) | 0.357172 / 0.293841 (0.063331) | 0.036782 / 0.128546 (-0.091764) | 0.012146 / 0.075646 (-0.063501) | 0.334874 / 0.419271 (-0.084397) | 0.051475 / 0.043533 (0.007942) | 0.305949 / 0.255139 (0.050810) | 0.339326 / 0.283200 (0.056126) | 0.101509 / 0.141683 (-0.040174) | 1.458254 / 1.452155 (0.006099) | 1.535252 / 1.492716 (0.042535) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.264837 / 0.018006 (0.246831) | 0.441444 / 0.000490 (0.440955) | 0.003331 / 0.000200 (0.003131) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026529 / 0.037411 (-0.010882) | 0.105924 / 0.014526 (0.091398) | 0.117191 / 0.176557 (-0.059365) | 0.176606 / 0.737135 (-0.560529) | 0.123452 / 0.296338 (-0.172887) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412351 / 0.215209 (0.197142) | 4.135468 / 2.077655 (2.057813) | 1.912820 / 1.504120 (0.408700) | 1.738993 / 1.541195 (0.197798) | 1.754228 / 1.468490 
(0.285738) | 0.692239 / 4.584777 (-3.892538) | 3.765672 / 3.745712 (0.019959) | 2.081141 / 5.269862 (-3.188720) | 1.425153 / 4.565676 (-3.140523) | 0.085055 / 0.424275 (-0.339220) | 0.011918 / 0.007607 (0.004311) | 0.517573 / 0.226044 (0.291529) | 5.179809 / 2.268929 (2.910881) | 2.471620 / 55.444624 (-52.973005) | 2.140634 / 6.876477 (-4.735843) | 2.200150 / 2.142072 (0.058077) | 0.831662 / 4.805227 (-3.973566) | 0.168828 / 6.500664 (-6.331836) | 0.062755 / 0.075469 (-0.012714) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196890 / 1.841788 (-0.644898) | 14.826423 / 8.074308 (6.752114) | 14.020782 / 10.191392 (3.829390) | 0.161275 / 0.680424 (-0.519149) | 0.017467 / 0.534201 (-0.516734) | 0.422278 / 0.579283 (-0.157005) | 0.424053 / 0.434364 (-0.010311) | 0.490768 / 0.540337 (-0.049570) | 0.584490 / 1.386936 (-0.802446) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007102 / 0.011353 (-0.004250) | 0.005145 / 0.011008 (-0.005863) | 0.073823 / 0.038508 (0.035315) | 0.032947 / 0.023109 (0.009838) | 0.336978 / 0.275898 (0.061080) | 0.368961 / 0.323480 (0.045481) | 0.006052 / 0.007986 (-0.001934) | 0.003970 / 0.004328 (-0.000358) | 0.072925 / 0.004250 (0.068674) | 0.044502 / 0.037052 (0.007450) | 0.340849 / 0.258489 (0.082360) | 0.381487 / 0.293841 (0.087646) | 0.037207 / 0.128546 (-0.091339) | 0.012095 / 0.075646 (-0.063551) | 0.085206 / 0.419271 (-0.334065) | 0.056236 / 0.043533 (0.012703) | 0.334048 / 0.255139 (0.078909) | 0.360442 / 0.283200 (0.077242) | 0.104402 / 0.141683 (-0.037281) | 1.446907 / 1.452155 (-0.005248) | 1.542430 / 1.492716 (0.049713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238720 / 0.018006 (0.220714) | 0.445857 / 0.000490 (0.445367) | 0.009280 / 0.000200 (0.009080) | 0.000150 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028414 / 0.037411 (-0.008998) | 0.110506 / 0.014526 (0.095981) | 0.124593 / 0.176557 (-0.051964) | 0.170951 / 0.737135 (-0.566184) | 0.128033 / 0.296338 (-0.168305) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426206 / 0.215209 (0.210997) | 4.267289 / 2.077655 (2.189634) | 2.026880 / 1.504120 (0.522760) | 1.844052 / 1.541195 (0.302858) | 1.897697 / 1.468490 (0.429207) | 0.713545 / 4.584777 (-3.871232) | 3.815052 / 3.745712 (0.069339) | 3.217091 / 5.269862 (-2.052770) | 1.790546 / 4.565676 (-2.775130) | 0.087501 / 0.424275 (-0.336774) | 0.012136 / 0.007607 (0.004529) | 0.534495 / 0.226044 (0.308451) | 5.325913 / 2.268929 (3.056984) | 2.484309 / 55.444624 (-52.960315) | 2.149721 / 6.876477 (-4.726756) | 2.158764 / 2.142072 (0.016692) | 0.855273 / 4.805227 (-3.949954) | 0.170374 / 6.500664 (-6.330290) | 0.064053 / 0.075469 (-0.011416) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253171 / 1.841788 (-0.588617) | 15.254562 / 8.074308 (7.180254) | 14.242119 / 10.191392 (4.050727) | 0.159298 / 0.680424 (-0.521126) | 0.017504 / 0.534201 (-0.516696) | 0.419710 / 0.579283 (-0.159574) | 0.417879 / 0.434364 (-0.016485) | 0.486328 / 0.540337 (-0.054009) | 0.578933 / 1.386936 (-0.808003) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bc38663c8e2c2b0b246791c3ed8bddbff163dd64 \"CML watermark\")\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008476 / 0.011353 (-0.002877) | 0.005745 / 0.011008 (-0.005263) | 0.115307 / 0.038508 (0.076799) | 0.039356 / 0.023109 (0.016247) | 0.367155 / 0.275898 (0.091257) | 0.422147 / 0.323480 (0.098667) | 0.006817 / 0.007986 (-0.001168) | 0.004652 / 0.004328 (0.000323) | 0.084045 / 0.004250 (0.079795) | 0.055483 / 0.037052 (0.018431) | 0.364249 / 0.258489 (0.105760) | 0.415975 / 0.293841 (0.122134) | 0.041322 / 0.128546 (-0.087224) | 0.014178 / 0.075646 (-0.061469) | 0.392658 / 0.419271 (-0.026614) | 0.060156 / 0.043533 (0.016623) | 0.373938 / 0.255139 (0.118799) | 0.397494 / 0.283200 (0.114294) | 0.113811 / 0.141683 (-0.027872) | 1.688581 / 1.452155 (0.236427) | 1.790374 / 1.492716 (0.297658) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222203 / 0.018006 (0.204196) | 0.471109 / 0.000490 (0.470619) | 0.007071 / 0.000200 (0.006871) | 0.000156 / 0.000054 (0.000102) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032112 / 0.037411 (-0.005299) | 0.118726 / 0.014526 (0.104200) | 0.134918 / 0.176557 (-0.041639) | 0.207766 / 0.737135 (-0.529369) | 0.139756 / 0.296338 (-0.156582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.479858 / 0.215209 (0.264649) | 4.798428 / 2.077655 (2.720773) | 2.221573 / 1.504120 (0.717453) | 1.964956 / 1.541195 (0.423761) | 2.021763 / 1.468490 
(0.553273) | 0.820401 / 4.584777 (-3.764376) | 4.533887 / 3.745712 (0.788175) | 4.121332 / 5.269862 (-1.148529) | 2.195807 / 4.565676 (-2.369869) | 0.103133 / 0.424275 (-0.321142) | 0.014620 / 0.007607 (0.007013) | 0.605012 / 0.226044 (0.378967) | 5.966623 / 2.268929 (3.697694) | 2.844118 / 55.444624 (-52.600506) | 2.463569 / 6.876477 (-4.412907) | 2.597177 / 2.142072 (0.455105) | 0.983201 / 4.805227 (-3.822026) | 0.199500 / 6.500664 (-6.301164) | 0.078387 / 0.075469 (0.002918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.401083 / 1.841788 (-0.440705) | 17.258725 / 8.074308 (9.184417) | 16.825992 / 10.191392 (6.634600) | 0.216762 / 0.680424 (-0.463662) | 0.021135 / 0.534201 (-0.513066) | 0.513688 / 0.579283 (-0.065595) | 0.488892 / 0.434364 (0.054529) | 0.566745 / 0.540337 (0.026408) | 0.688958 / 1.386936 (-0.697978) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007948 / 0.011353 (-0.003405) | 0.005981 / 0.011008 (-0.005027) | 0.084474 / 0.038508 (0.045966) | 0.037952 / 0.023109 (0.014843) | 0.383359 / 0.275898 (0.107461) | 0.409324 / 0.323480 (0.085844) | 0.006641 / 0.007986 (-0.001344) | 0.004785 / 0.004328 (0.000456) | 0.083214 / 0.004250 (0.078964) | 0.053177 / 0.037052 (0.016125) | 0.393147 / 0.258489 (0.134658) | 0.438496 / 0.293841 (0.144655) | 0.042090 / 0.128546 (-0.086456) | 0.013373 / 0.075646 (-0.062273) | 0.097585 / 0.419271 (-0.321686) | 0.056359 / 0.043533 (0.012826) | 0.378113 / 0.255139 (0.122974) | 0.403874 / 0.283200 (0.120674) | 0.123503 / 0.141683 (-0.018180) | 1.639557 / 1.452155 (0.187403) | 1.759787 / 1.492716 (0.267071) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242534 / 0.018006 (0.224528) | 0.459040 / 0.000490 (0.458550) | 0.000454 / 0.000200 (0.000254) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031747 / 0.037411 (-0.005664) | 0.125823 / 0.014526 (0.111297) | 0.138985 / 0.176557 (-0.037571) | 0.194371 / 0.737135 (-0.542764) | 0.148905 / 0.296338 (-0.147433) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.508201 / 0.215209 (0.292992) | 5.007519 / 2.077655 (2.929865) | 2.412956 / 1.504120 (0.908836) | 2.143378 / 1.541195 (0.602183) | 2.192966 / 1.468490 (0.724476) | 0.828497 / 4.584777 (-3.756280) | 4.496457 / 3.745712 (0.750745) | 2.397546 / 5.269862 (-2.872315) | 1.522889 / 4.565676 (-3.042787) | 0.099904 / 0.424275 (-0.324371) | 0.014561 / 0.007607 (0.006954) | 0.627417 / 0.226044 (0.401373) | 6.296441 / 2.268929 (4.027512) | 2.962858 / 55.444624 (-52.481767) | 2.543083 / 6.876477 (-4.333394) | 2.711884 / 2.142072 (0.569811) | 0.997969 / 4.805227 (-3.807259) | 0.200283 / 6.500664 (-6.300382) | 0.075934 / 0.075469 (0.000465) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.541707 / 1.841788 (-0.300081) | 17.791559 / 8.074308 (9.717251) | 16.782877 / 10.191392 (6.591485) | 0.171954 / 0.680424 (-0.508470) | 0.020506 / 0.534201 (-0.513695) | 0.504189 / 0.579283 (-0.075094) | 0.501655 / 0.434364 (0.067291) | 0.583120 / 0.540337 (0.042782) | 0.694931 / 1.386936 (-0.692005) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#53355f308f4ffb9b4071f5d420b5c6767799ef1c \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005057 / 0.011008 (-0.005951) | 0.099147 / 0.038508 (0.060639) | 0.035358 / 0.023109 (0.012249) | 0.303442 / 0.275898 (0.027544) | 0.336898 / 0.323480 (0.013418) | 0.006216 / 0.007986 (-0.001770) | 0.004085 / 0.004328 (-0.000244) | 0.074567 / 0.004250 (0.070317) | 0.050917 / 0.037052 (0.013865) | 0.301786 / 0.258489 (0.043297) | 0.341362 / 0.293841 (0.047521) | 0.037019 / 0.128546 (-0.091528) | 0.011977 / 0.075646 (-0.063669) | 0.334688 / 0.419271 (-0.084583) | 0.051326 / 0.043533 (0.007793) | 0.299878 / 0.255139 (0.044739) | 0.325571 / 0.283200 (0.042371) | 0.110744 / 0.141683 (-0.030939) | 1.480898 / 1.452155 (0.028743) | 1.566917 / 1.492716 (0.074201) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.253249 / 0.018006 (0.235242) | 0.558576 / 0.000490 (0.558086) | 0.003838 / 0.000200 (0.003638) | 0.000085 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028731 / 0.037411 (-0.008681) | 0.110643 / 0.014526 (0.096117) | 0.119560 / 0.176557 (-0.056996) | 0.178010 / 0.737135 (-0.559126) | 0.130286 / 0.296338 (-0.166053) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400190 / 0.215209 (0.184981) | 3.999326 / 2.077655 (1.921672) | 1.797332 / 1.504120 (0.293212) | 1.610808 / 1.541195 (0.069613) | 1.679949 / 1.468490 
(0.211459) | 0.696539 / 4.584777 (-3.888238) | 3.784766 / 3.745712 (0.039054) | 2.205008 / 5.269862 (-3.064854) | 1.501697 / 4.565676 (-3.063979) | 0.085553 / 0.424275 (-0.338723) | 0.012223 / 0.007607 (0.004616) | 0.494858 / 0.226044 (0.268813) | 4.968535 / 2.268929 (2.699606) | 2.258759 / 55.444624 (-53.185865) | 1.926236 / 6.876477 (-4.950241) | 2.072155 / 2.142072 (-0.069917) | 0.838354 / 4.805227 (-3.966873) | 0.168810 / 6.500664 (-6.331854) | 0.064347 / 0.075469 (-0.011122) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.166696 / 1.841788 (-0.675091) | 14.721287 / 8.074308 (6.646979) | 14.319272 / 10.191392 (4.127880) | 0.144534 / 0.680424 (-0.535890) | 0.017502 / 0.534201 (-0.516699) | 0.422682 / 0.579283 (-0.156601) | 0.424426 / 0.434364 (-0.009938) | 0.493561 / 0.540337 (-0.046777) | 0.586765 / 1.386936 (-0.800171) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007764 / 0.011353 (-0.003589) | 0.005516 / 0.011008 (-0.005492) | 0.074745 / 0.038508 (0.036237) | 0.034364 / 0.023109 (0.011255) | 0.344318 / 0.275898 (0.068420) | 0.374779 / 0.323480 (0.051299) | 0.005904 / 0.007986 (-0.002082) | 0.004323 / 0.004328 (-0.000005) | 0.073191 / 0.004250 (0.068941) | 0.051549 / 0.037052 (0.014496) | 0.341792 / 0.258489 (0.083303) | 0.387576 / 0.293841 (0.093735) | 0.037483 / 0.128546 (-0.091063) | 0.012410 / 0.075646 (-0.063237) | 0.086480 / 0.419271 (-0.332791) | 0.050035 / 0.043533 (0.006502) | 0.335475 / 0.255139 (0.080336) | 0.361436 / 0.283200 (0.078236) | 0.106890 / 0.141683 (-0.034792) | 1.464032 / 1.452155 (0.011877) | 1.563490 / 1.492716 (0.070774) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268765 / 0.018006 (0.250758) | 0.563811 / 0.000490 (0.563321) | 0.004904 / 0.000200 (0.004704) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029885 / 0.037411 (-0.007526) | 0.113885 / 0.014526 (0.099359) | 0.124283 / 0.176557 (-0.052274) | 0.173619 / 0.737135 (-0.563517) | 0.131781 / 0.296338 (-0.164557) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420296 / 0.215209 (0.205087) | 4.167656 / 2.077655 (2.090001) | 1.982356 / 1.504120 (0.478237) | 1.792181 / 1.541195 (0.250986) | 1.871459 / 1.468490 (0.402969) | 0.707066 / 4.584777 (-3.877711) | 3.835922 / 3.745712 (0.090210) | 3.506796 / 5.269862 (-1.763066) | 1.857172 / 4.565676 (-2.708505) | 0.086219 / 0.424275 (-0.338056) | 0.012404 / 0.007607 (0.004796) | 0.512393 / 0.226044 (0.286348) | 5.111623 / 2.268929 (2.842695) | 2.493523 / 55.444624 (-52.951101) | 2.188220 / 6.876477 (-4.688257) | 2.319096 / 2.142072 (0.177024) | 0.844084 / 4.805227 (-3.961144) | 0.171130 / 6.500664 (-6.329534) | 0.065913 / 0.075469 (-0.009556) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.284768 / 1.841788 (-0.557020) | 15.334610 / 8.074308 (7.260301) | 14.724436 / 10.191392 (4.533044) | 0.188425 / 0.680424 (-0.491999) | 0.017984 / 0.534201 (-0.516217) | 0.428150 / 0.579283 (-0.151133) | 0.429013 / 0.434364 (-0.005351) | 0.500818 / 0.540337 (-0.039519) | 0.592879 / 1.386936 (-0.794057) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ee68da958c2fab3a26d9f0efb1e207ecbcf7ce15 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006870 / 0.011353 (-0.004483) | 0.004702 / 0.011008 (-0.006306) | 0.099258 / 0.038508 (0.060750) | 0.029008 / 0.023109 (0.005899) | 0.330599 / 0.275898 (0.054701) | 0.361163 / 0.323480 (0.037683) | 0.005020 / 0.007986 (-0.002965) | 0.003474 / 0.004328 (-0.000855) | 0.075902 / 0.004250 (0.071651) | 0.037462 / 0.037052 (0.000410) | 0.336213 / 0.258489 (0.077724) | 0.370645 / 0.293841 (0.076804) | 0.032435 / 0.128546 (-0.096111) | 0.011686 / 0.075646 (-0.063960) | 0.326040 / 0.419271 (-0.093232) | 0.043750 / 0.043533 (0.000217) | 0.332629 / 0.255139 (0.077490) | 0.353302 / 0.283200 (0.070102) | 0.090421 / 0.141683 (-0.051262) | 1.470097 / 1.452155 (0.017942) | 1.544908 / 1.492716 (0.052191) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213418 / 0.018006 (0.195411) | 0.434808 / 0.000490 (0.434319) | 0.005949 / 0.000200 (0.005749) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023085 / 0.037411 (-0.014327) | 0.098222 / 0.014526 (0.083696) | 0.104543 / 0.176557 (-0.072013) | 0.165423 / 0.737135 (-0.571713) | 0.108732 / 0.296338 (-0.187606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433933 / 0.215209 (0.218724) | 4.334358 / 2.077655 (2.256704) | 2.013984 / 1.504120 (0.509864) | 1.862981 / 1.541195 (0.321787) | 1.873936 / 1.468490 
(0.405446) | 0.699857 / 4.584777 (-3.884920) | 3.417815 / 3.745712 (-0.327897) | 1.946403 / 5.269862 (-3.323459) | 1.308683 / 4.565676 (-3.256994) | 0.083297 / 0.424275 (-0.340978) | 0.012610 / 0.007607 (0.005003) | 0.540877 / 0.226044 (0.314832) | 5.408293 / 2.268929 (3.139365) | 2.529574 / 55.444624 (-52.915050) | 2.201047 / 6.876477 (-4.675429) | 2.392966 / 2.142072 (0.250894) | 0.812719 / 4.805227 (-3.992509) | 0.154013 / 6.500664 (-6.346651) | 0.067614 / 0.075469 (-0.007855) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228150 / 1.841788 (-0.613638) | 14.037090 / 8.074308 (5.962782) | 14.259416 / 10.191392 (4.068024) | 0.155554 / 0.680424 (-0.524870) | 0.016521 / 0.534201 (-0.517680) | 0.379615 / 0.579283 (-0.199668) | 0.421352 / 0.434364 (-0.013012) | 0.446512 / 0.540337 (-0.093825) | 0.531802 / 1.386936 (-0.855134) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006629 / 0.011353 (-0.004724) | 0.004432 / 0.011008 (-0.006577) | 0.076662 / 0.038508 (0.038154) | 0.027674 / 0.023109 (0.004565) | 0.341667 / 0.275898 (0.065769) | 0.376493 / 0.323480 (0.053014) | 0.005076 / 0.007986 (-0.002910) | 0.004655 / 0.004328 (0.000326) | 0.075698 / 0.004250 (0.071448) | 0.036905 / 0.037052 (-0.000147) | 0.342394 / 0.258489 (0.083905) | 0.383330 / 0.293841 (0.089489) | 0.031729 / 0.128546 (-0.096817) | 0.011582 / 0.075646 (-0.064064) | 0.085721 / 0.419271 (-0.333551) | 0.042012 / 0.043533 (-0.001521) | 0.342063 / 0.255139 (0.086924) | 0.367335 / 0.283200 (0.084136) | 0.089641 / 0.141683 (-0.052042) | 1.520353 / 1.452155 (0.068198) | 1.643653 / 1.492716 (0.150937) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178995 / 0.018006 (0.160989) | 0.436544 / 0.000490 (0.436055) | 0.002311 / 0.000200 (0.002111) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025386 / 0.037411 (-0.012026) | 0.099717 / 0.014526 (0.085192) | 0.110809 / 0.176557 (-0.065747) | 0.162931 / 0.737135 (-0.574204) | 0.110430 / 0.296338 (-0.185909) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438592 / 0.215209 (0.223382) | 4.372560 / 2.077655 (2.294905) | 2.069686 / 1.504120 (0.565567) | 1.860576 / 1.541195 (0.319382) | 1.898161 / 1.468490 (0.429671) | 0.698353 / 4.584777 (-3.886424) | 3.462440 / 3.745712 (-0.283272) | 1.868602 / 5.269862 (-3.401260) | 1.160498 / 4.565676 (-3.405179) | 0.082869 / 0.424275 (-0.341406) | 0.012690 / 0.007607 (0.005083) | 0.533278 / 0.226044 (0.307233) | 5.386214 / 2.268929 (3.117285) | 2.519243 / 55.444624 (-52.925382) | 2.171109 / 6.876477 (-4.705368) | 2.272617 / 2.142072 (0.130544) | 0.805843 / 4.805227 (-3.999384) | 0.152275 / 6.500664 (-6.348389) | 0.068038 / 0.075469 (-0.007431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291967 / 1.841788 (-0.549821) | 14.386474 / 8.074308 (6.312166) | 14.180693 / 10.191392 (3.989301) | 0.131714 / 0.680424 (-0.548710) | 0.016596 / 0.534201 (-0.517605) | 0.384293 / 0.579283 (-0.194990) | 0.404051 / 0.434364 (-0.030313) | 0.452167 / 0.540337 (-0.088170) | 0.542718 / 1.386936 (-0.844218) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f9c770bb1a43fa7fe390286d7535266d3964d067 \"CML watermark\")\n"
] | 2023-04-12T08:52:35 | 2023-04-13T11:01:24 | 2023-04-13T10:54:13 | MEMBER | null | This PR fixes the fixtures of our CI mock filesystems.
Before, we had to pass `clobber=True` to `fsspec.register_implementation` to overwrite the previously added "mock" filesystem, which was still present. That meant the mock filesystem fixture was not working properly, because the previously added "mock" filesystem should have been deleted by the fixture.
This PR fixes the mock filesystem fixtures, so that the "mock" filesystem is properly deleted from the inner `fsspec` registry.
Tests were added to check the correct behavior of the mock filesystem fixtures.
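For illustration, here is a minimal sketch of the fixture shape described above — an assumed shape for this write-up, not the actual fixture code from this PR:
```
# Illustration only: an assumed fixture shape, not the actual fixture from this PR.
import fsspec
import pytest


class MockFileSystem(fsspec.AbstractFileSystem):
    protocol = "mock"


@pytest.fixture
def mockfs():
    # Old workaround: overwrite whatever "mock" entry a previous test left behind.
    fsspec.register_implementation("mock", MockFileSystem, clobber=True)
    yield MockFileSystem()
    # This PR's fix instead removes the "mock" entry from fsspec's inner registry on
    # teardown, so the next test starts clean and `clobber=True` is no longer needed.
```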
Related to:
- #5733 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5740/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5740",
"html_url": "https://github.com/huggingface/datasets/pull/5740",
"diff_url": "https://github.com/huggingface/datasets/pull/5740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5740.patch",
"merged_at": "2023-04-13T10:54:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5739/comments | https://api.github.com/repos/huggingface/datasets/issues/5739/events | https://github.com/huggingface/datasets/issues/5739 | 1,663,762,901 | I_kwDODunzps5jKwHV | 5,739 | weird result during dataset split when data path starts with `/data` | {
"login": "ericxsun",
"id": 1772912,
"node_id": "MDQ6VXNlcjE3NzI5MTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1772912?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ericxsun",
"html_url": "https://github.com/ericxsun",
"followers_url": "https://api.github.com/users/ericxsun/followers",
"following_url": "https://api.github.com/users/ericxsun/following{/other_user}",
"gists_url": "https://api.github.com/users/ericxsun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ericxsun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ericxsun/subscriptions",
"organizations_url": "https://api.github.com/users/ericxsun/orgs",
"repos_url": "https://api.github.com/users/ericxsun/repos",
"events_url": "https://api.github.com/users/ericxsun/events{/privacy}",
"received_events_url": "https://api.github.com/users/ericxsun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Same problem.",
"hi! \r\nI think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. \r\n@ericxsun Do you want to open a PR to fix the regex? As you already found the solution :) ",
"> hi! I think you can run python from `/data/train/raw/` directory and load dataset as `load_dataset(\"code_contests\")` to mitigate this issue as a workaround. @ericxsun Do you want to open a PR to fix the regex? As you already found the solution :)\r\n\r\nSure, please see https://github.com/huggingface/datasets/pull/5748 @polinaeterna ",
"I think `string_to_dict` is ok, and that the issue is that it gets `'/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'` as input instead of `'data/test-00000-of-00001-9c49eeff30aacaa8.parquet'`. The path should be relative to the directory being loaded by `load_dataset`"
] | 2023-04-12T04:51:35 | 2023-04-21T14:20:59 | null | NONE | null | ### Describe the bug
The regex defined here https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158
will cause a weird result during dataset split resolution when the data path starts with `/data`.
### Steps to reproduce the bug
1. clone dataset into local path
```
cd /data/train/raw/
git lfs clone https://huggingface.co/datasets/deepmind/code_contests.git
ls /data/train/raw/code_contests
# README.md data dataset_infos.json
ls /data/train/raw/code_contests/data
# test-00000-of-00001-9c49eeff30aacaa8.parquet
# train-[0-9]+-of-[0-9]+-xx.parquet
# valid-00000-of-00001-5e672c5751f060d3.parquet
```
2. loading data from local
```
from datasets import load_dataset
dataset = load_dataset('/data/train/raw/code_contests')
FileNotFoundError: Unable to resolve any data file that matches '['data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*']' at /data/train/raw/code_contests with any supported extension
```
weird path `data/train/raw/code_contests/data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*`
While diving deep into `LocalDatasetModuleFactoryWithoutScript` defined in [load.py](https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/load.py#L627) and `_get_data_files_patterns` (https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/data_files.py#L228), I found that the weird behavior is caused by `string_to_dict`.
3. check `string_to_dict`
```
p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'
string_to_dict(p, split_pattern)
# {'split': 'train/raw/code_contests/data/test'}
p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
string_to_dict(p, split_pattern)
{'split': 'test'}
```
Going deeper into `string_to_dict`: https://github.com/huggingface/datasets/blob/f2607935c4e45c70c44fcb698db0363ca7ba83d4/src/datasets/utils/py_utils.py#L158.
4. test the regex:
<img width="680" alt="image" src="https://user-images.githubusercontent.com/1772912/231351129-75179f01-fb9f-4f12-8fa9-0dfcc3d5f3bd.png">
<img width="679" alt="image" src="https://user-images.githubusercontent.com/1772912/231351025-009f3d83-2cf3-4e15-9ed4-6b9663dcb2ee.png">
### Expected behavior
statement in `steps to reproduce the bug`
3. check `string_to_dict`
```
p = '/data/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
split_pattern = 'data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*'
string_to_dict(p, split_pattern)
# {'split': 'train/raw/code_contests/data/test'}
p = '/data2/train/raw/code_contests/data/test-00000-of-00001-9c49eeff30aacaa8.parquet'
string_to_dict(p, split_pattern)
{'split': 'test'}
```
### Environment info
- linux(debian)
- python 3.7
- datasets 2.8.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5739/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5738/comments | https://api.github.com/repos/huggingface/datasets/issues/5738/events | https://github.com/huggingface/datasets/issues/5738 | 1,663,477,690 | I_kwDODunzps5jJqe6 | 5,738 | load_dataset("text","dataset.txt") loads the wrong dataset! | {
"login": "Tylersuard",
"id": 41713505,
"node_id": "MDQ6VXNlcjQxNzEzNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/41713505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tylersuard",
"html_url": "https://github.com/Tylersuard",
"followers_url": "https://api.github.com/users/Tylersuard/followers",
"following_url": "https://api.github.com/users/Tylersuard/following{/other_user}",
"gists_url": "https://api.github.com/users/Tylersuard/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Tylersuard/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tylersuard/subscriptions",
"organizations_url": "https://api.github.com/users/Tylersuard/orgs",
"repos_url": "https://api.github.com/users/Tylersuard/repos",
"events_url": "https://api.github.com/users/Tylersuard/events{/privacy}",
"received_events_url": "https://api.github.com/users/Tylersuard/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You need to provide a text file as `data_files`, not as a configuration:\r\n\r\n```python\r\nmy_dataset = load_dataset(\"text\", data_files=\"TextFile.txt\")\r\n```\r\n\r\nOtherwise, since `data_files` is `None`, it picks up Colab's sample datasets from the `content` dir."
] | 2023-04-12T01:07:46 | 2023-04-19T12:08:27 | 2023-04-19T12:08:27 | NONE | null | ### Describe the bug
I am trying to load my own custom text dataset using the `load_dataset` function. My dataset is a bunch of ordered text, think along the lines of Shakespeare plays. However, after I load the dataset and inspect it, the dataset is a table with a bunch of latitude and longitude values! What in the world??
### Steps to reproduce the bug
```python
my_dataset = load_dataset("text", "TextFile.txt")
my_dataset
```
### Expected behavior
I expected the dataset to contain the actual data from the text document that I used.
### Environment info
Google Colab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5738/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5737/comments | https://api.github.com/repos/huggingface/datasets/issues/5737/events | https://github.com/huggingface/datasets/issues/5737 | 1,662,919,811 | I_kwDODunzps5jHiSD | 5,737 | ClassLabel Error | {
"login": "mrcaelumn",
"id": 10896776,
"node_id": "MDQ6VXNlcjEwODk2Nzc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10896776?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrcaelumn",
"html_url": "https://github.com/mrcaelumn",
"followers_url": "https://api.github.com/users/mrcaelumn/followers",
"following_url": "https://api.github.com/users/mrcaelumn/following{/other_user}",
"gists_url": "https://api.github.com/users/mrcaelumn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mrcaelumn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrcaelumn/subscriptions",
"organizations_url": "https://api.github.com/users/mrcaelumn/orgs",
"repos_url": "https://api.github.com/users/mrcaelumn/repos",
"events_url": "https://api.github.com/users/mrcaelumn/events{/privacy}",
"received_events_url": "https://api.github.com/users/mrcaelumn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}\r\n```",
"thank you @stevhliu, its worked. "
] | 2023-04-11T17:14:13 | 2023-04-13T16:49:57 | 2023-04-13T16:49:57 | NONE | null | ### Describe the bug
I am still getting the error "call() takes 1 positional argument but 2 were given", even after ensuring that the value being passed to the label object is a single value and that the `ClassLabel` object has been created with the correct number of label classes.
### Steps to reproduce the bug
```python
from datasets import ClassLabel, Dataset

# 1. Create the ClassLabel object with 3 label values and their corresponding names
label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])

# 2. Define a dictionary with text and label fields
data = {
    'text': ['text_1', 'text_2', 'text_3'],
    'label': [1, 2, 3],
}

# 3. Create a Hugging Face dataset from the dictionary
dataset = Dataset.from_dict(data)
print(dataset.features)

# 4. Map the label values to their corresponding label names using the label object
dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])})

# 5. Print the resulting dataset
print(dataset)
```
### Expected behavior
I expect my label feature type to be `ClassLabel` instead of `int`.
### Environment info
python 3.9
google colab | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5737/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5736/comments | https://api.github.com/repos/huggingface/datasets/issues/5736/events | https://github.com/huggingface/datasets/issues/5736 | 1,662,286,061 | I_kwDODunzps5jFHjt | 5,736 | FORCE_REDOWNLOAD raises "Directory not empty" exception on second run | {
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/followers",
"following_url": "https://api.github.com/users/rcasero/following{/other_user}",
"gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcasero/subscriptions",
"organizations_url": "https://api.github.com/users/rcasero/orgs",
"repos_url": "https://api.github.com/users/rcasero/repos",
"events_url": "https://api.github.com/users/rcasero/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcasero/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! I couldn't reproduce your issue :/\r\n\r\nIt seems that `shutil.rmtree` failed. It is supposed to work even if the directory is not empty, but you still end up with `OSError: [Errno 39] Directory not empty:`. Can you make sure another process is not using this directory at the same time ?"
] | 2023-04-11T11:29:15 | 2023-04-21T15:27:40 | null | NONE | null | ### Describe the bug
Running `load_dataset(..., download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)` twice raises a `Directory not empty` exception on the second run.
### Steps to reproduce the bug
I cannot test this on datasets v2.11.0 due to #5711, but this happens in v2.10.1.
1. Set up a script `my_dataset.py` to generate and load an offline dataset.
2. Load it with
```python
ds = datasets.load_dataset(path='/path/to/my_dataset.py',
 name='toy',
 data_dir='/path/to/my_dataset.py',
cache_dir=cache_dir,
download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
)
```
It loads fine
```
Dataset my_dataset downloaded and prepared to /path/to/cache/toy-..e05e/1.0.0/...5b4c. Subsequent calls will reuse this data.
```
3. Try to load it again with the same snippet and the splits are generated, but at the end of the loading process it raises the error
```
2023-04-11 12:10:19,965: DEBUG: open file: /path/to/cache/toy-..e05e/1.0.0/...5b4c.incomplete/dataset_info.json
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/load.py", line 1782, in load_dataset
builder_instance.download_and_prepare(
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 852, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/path/to/conda/environment/lib/python3.10/contextlib.py", line 142, in __exit__
next(self.gen)
File "/path/to/conda/environment/lib/python3.10/site-packages/datasets/builder.py", line 826, in incomplete_dir
shutil.rmtree(dirname)
File "/path/to/conda/environment/lib/python3.10/shutil.py", line 730, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/path/to/conda/environment/lib/python3.10/shutil.py", line 728, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/path/to/cache/toy-..e05e/1.0.0/...5b4c'
```
### Expected behavior
Regenerate the dataset from scratch and reload it.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5736/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5735/comments | https://api.github.com/repos/huggingface/datasets/issues/5735/events | https://github.com/huggingface/datasets/pull/5735 | 1,662,150,903 | PR_kwDODunzps5OAY3A | 5,735 | Implement sharding on merged iterable datasets | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! What if one of the sub-iterables only has one shard ? In that case I don't think we'd end up with a correctly interleaved dataset, since only rank 0 would yield examples from this sub-iterable",
"Hi ! \r\nI just tested this out with the code below and it seems to be ok. Both datasets are alternating and we get all the examples with no duplicates.\r\n\r\nOn thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).\r\n\r\n ```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=1)\r\n\r\n ds_merged = interleave_datasets([ds1, ds2], stopping_strategy=\"all_exhausted\")\r\n\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v'}]\r\n1 [{'input': 'test: Works with RTL and N'}]\r\n2 [{'input': \"train: Great It's not fully\"}]\r\n3 [{'input': 'test: Works with RTL SDR W'}]\r\n4 [{'input': 'train: Works on a Nexus 6p '}]\r\n5 [{'input': 'test: Awsome App! Easy to '}]\r\n6 [{'input': 'train: The bandwidth seemed'}]\r\n7 [{'input': \"test: I'll forgo the refun\"}]\r\n8 [{'input': 'train: Works well with my H'}]\r\n9 [{'input': 'test: looks like a great p'}]\r\n```",
"<s> Could you try with `num_workers>1` ? </s>\r\n\r\nedit: Oh I see\r\n\r\n> On thing to keep in mind is that the max amount of workers is equal to the lowest amount of shard amongst the datasets to be merged (1 in this example).",
"Great ! It's ok to have the max amount of workers is equal to the lowest amount of shard :)\r\n\r\nSo in the case of `num_workers>min(n_shards_per_dataset)` maybe some workers should turn off, and a warning can probably be shown. This is already the case if you use a single dataset with a single shard and `num_workers>1`.\r\n\r\n\r\nRight now it seems to raise an error:\r\n\r\n```python\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 979, in __iter__\r\n yield from self._iter_pytorch(ex_iterable)\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 912, in _iter_pytorch\r\n for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in shard_data_sources\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 259, in <listcomp>\r\n [iterable.shard_data_sources(worker_id, num_workers) for iterable in self.ex_iterables],\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/iterable_dataset.py\", line 125, in shard_data_sources\r\n requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])\r\n File \"/Users/quentinlhoest/hf/datasets/src/datasets/utils/sharding.py\", line 76, in _merge_gen_kwargs\r\n for key in gen_kwargs_list[0]\r\nIndexError: list index out of range\r\n```",
"Good point. I have fixed the n_shards property of merged iterable datasets so that this warning is raised properly",
"Hey @lhoestq, what do you think of the last modifications ? ",
"Hello! No problem :)\r\n\r\n- About HorizontallyConcatenatedMultiSourcesExamplesIterable, I've haven't been able to create a bug with sharding. So either I missed something or it's working somehow:\r\n\r\n```python\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, interleave_datasets, concatenate_datasets\r\n\r\n\r\ndef process_dataset_train(batch):\r\n return {\"input\": f'train: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef process_dataset_test(batch):\r\n return {\"input\": f'test: {batch[\"review\"][:20]}'}\r\n\r\n\r\ndef identity_collator(x):\r\n return x\r\n\r\n\r\nif __name__ == \"__main__\":\r\n ds = load_dataset(\"lhoestq/demo1\")\r\n ds[\"train\"] = ds[\"train\"].map(process_dataset_train, remove_columns=ds[\"train\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].map(process_dataset_test, remove_columns=ds[\"test\"].column_names)\r\n ds[\"test\"] = ds[\"test\"].rename_columns({\"input\": \"input2\"})\r\n\r\n ds1 = ds[\"train\"].to_iterable_dataset(num_shards=5)\r\n ds2 = ds[\"test\"].to_iterable_dataset(num_shards=3)\r\n\r\n ds_merged = concatenate_datasets([ds1, ds2], axis=1)\r\n\r\n #n_shards is always 1 for HorizontallyConcatenatedMultiSourcesExamplesIterable\r\n dataloader = DataLoader(ds_merged, collate_fn=identity_collator, num_workers=1, batch_size=1)\r\n\r\n for i, element in enumerate(dataloader):\r\n print(i, element)\r\n```\r\n\r\n```\r\n0 [{'input': 'train: Great app! The new v', 'input2': 'test: Works with RTL and N'}]\r\n1 [{'input': \"train: Great It's not fully\", 'input2': 'test: Works with RTL SDR W'}]\r\n2 [{'input': 'train: Works on a Nexus 6p ', 'input2': 'test: Awsome App! Easy to '}]\r\n3 [{'input': 'train: The bandwidth seemed', 'input2': \"test: I'll forgo the refun\"}]\r\n4 [{'input': 'train: Works well with my H', 'input2': 'test: looks like a great p'}]\r\n```\r\n\r\n- I've added a test but I'm not completely happy with it. My issue is that multiprocessing makes interleaving not completely deterministic as samples are yielded whenever ready by each process, if I'm correct.\r\nAs a result I opted to check for the amount of samples yielded and make that they are all unique, which should be equivalent.\r\nBut now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nWhat are your thoughts about this ?",
"Ah indeed it works because it's set to be only 1 shard - my bad :)",
"> But now my issue is that the \"first_exhausted\" method breaks the loop when one of the datasets of one of the shards is empty which means that all shards stop yielding and we could be missing up to n_workers samples. I don't know if this is the behaviour expected, but I had to modify the test to accomodate this.\r\n\r\nThis looks reasonable, maybe this can be documented in the `interleave_datasets` docstring ?\r\n```\r\nNote for iterable datasets:\r\n\r\nIn a distributed setup or in PyTorch DataLoader workers, the stopping strategy is applied per process.\r\nTherefore the \"first_exhausted\" strategy on an sharded iterable dataset can generate less samples in total (up to 1 missing sample per subdataset per worker).\r\n```",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006441 / 0.011353 (-0.004912) | 0.004551 / 0.011008 (-0.006457) | 0.099144 / 0.038508 (0.060636) | 0.028163 / 0.023109 (0.005054) | 0.386342 / 0.275898 (0.110444) | 0.398347 / 0.323480 (0.074867) | 0.004836 / 0.007986 (-0.003150) | 0.004724 / 0.004328 (0.000395) | 0.076277 / 0.004250 (0.072027) | 0.036305 / 0.037052 (-0.000747) | 0.377179 / 0.258489 (0.118690) | 0.410694 / 0.293841 (0.116853) | 0.030196 / 0.128546 (-0.098351) | 0.011436 / 0.075646 (-0.064211) | 0.325911 / 0.419271 (-0.093360) | 0.043709 / 0.043533 (0.000177) | 0.375801 / 0.255139 (0.120662) | 0.396511 / 0.283200 (0.113311) | 0.088346 / 0.141683 (-0.053337) | 1.483427 / 1.452155 (0.031272) | 1.553708 / 1.492716 (0.060992) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.190974 / 0.018006 (0.172968) | 0.451309 / 0.000490 (0.450819) | 0.004045 / 0.000200 (0.003845) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023814 / 0.037411 (-0.013597) | 0.096922 / 0.014526 (0.082396) | 0.101506 / 0.176557 (-0.075050) | 0.164694 / 0.737135 (-0.572441) | 0.106899 / 0.296338 (-0.189439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432164 / 0.215209 (0.216954) | 4.308076 / 2.077655 (2.230421) | 2.092434 / 1.504120 (0.588314) | 1.937405 / 1.541195 (0.396210) | 1.988030 / 1.468490 
(0.519540) | 0.695476 / 4.584777 (-3.889301) | 3.436413 / 3.745712 (-0.309299) | 2.892954 / 5.269862 (-2.376908) | 1.519906 / 4.565676 (-3.045771) | 0.082579 / 0.424275 (-0.341696) | 0.012233 / 0.007607 (0.004626) | 0.531329 / 0.226044 (0.305284) | 5.365272 / 2.268929 (3.096344) | 2.391452 / 55.444624 (-53.053172) | 2.051116 / 6.876477 (-4.825361) | 2.140663 / 2.142072 (-0.001410) | 0.807262 / 4.805227 (-3.997966) | 0.151290 / 6.500664 (-6.349374) | 0.066137 / 0.075469 (-0.009333) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.193106 / 1.841788 (-0.648682) | 13.577240 / 8.074308 (5.502932) | 14.280126 / 10.191392 (4.088734) | 0.142538 / 0.680424 (-0.537886) | 0.016641 / 0.534201 (-0.517560) | 0.386318 / 0.579283 (-0.192965) | 0.385991 / 0.434364 (-0.048373) | 0.440712 / 0.540337 (-0.099625) | 0.524189 / 1.386936 (-0.862747) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006628 / 0.011353 (-0.004725) | 0.004664 / 0.011008 (-0.006344) | 0.077254 / 0.038508 (0.038746) | 0.028369 / 0.023109 (0.005259) | 0.343076 / 0.275898 (0.067178) | 0.376491 / 0.323480 (0.053011) | 0.005298 / 0.007986 (-0.002687) | 0.004853 / 0.004328 (0.000524) | 0.075927 / 0.004250 (0.071677) | 0.039951 / 0.037052 (0.002899) | 0.346225 / 0.258489 (0.087736) | 0.382367 / 0.293841 (0.088526) | 0.031133 / 0.128546 (-0.097413) | 0.011666 / 0.075646 (-0.063981) | 0.086383 / 0.419271 (-0.332889) | 0.042885 / 0.043533 (-0.000647) | 0.343885 / 0.255139 (0.088746) | 0.366840 / 0.283200 (0.083640) | 0.095942 / 0.141683 (-0.045741) | 1.528972 / 1.452155 (0.076817) | 1.586392 / 1.492716 (0.093676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223952 / 0.018006 (0.205946) | 0.410767 / 0.000490 (0.410277) | 0.001014 / 0.000200 (0.000814) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024210 / 0.037411 (-0.013201) | 0.100308 / 0.014526 (0.085782) | 0.106899 / 0.176557 (-0.069658) | 0.156514 / 0.737135 (-0.580621) | 0.109548 / 0.296338 (-0.186790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434763 / 0.215209 (0.219554) | 4.348485 / 2.077655 (2.270831) | 2.064255 / 1.504120 (0.560135) | 1.864394 / 1.541195 (0.323199) | 1.899732 / 1.468490 (0.431242) | 0.694147 / 4.584777 (-3.890630) | 3.357898 / 3.745712 (-0.387815) | 2.909155 / 5.269862 (-2.360707) | 1.424790 / 4.565676 (-3.140886) | 0.082597 / 0.424275 (-0.341678) | 0.012442 / 0.007607 (0.004835) | 0.538758 / 0.226044 (0.312713) | 5.390288 / 2.268929 (3.121359) | 2.532016 / 55.444624 (-52.912609) | 2.185724 / 6.876477 (-4.690753) | 2.274176 / 2.142072 (0.132104) | 0.804785 / 4.805227 (-4.000442) | 0.152649 / 6.500664 (-6.348015) | 0.067707 / 0.075469 (-0.007762) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.285219 / 1.841788 (-0.556568) | 13.958098 / 8.074308 (5.883790) | 14.043653 / 10.191392 (3.852261) | 0.144526 / 0.680424 (-0.535898) | 0.016813 / 0.534201 (-0.517388) | 0.390286 / 0.579283 (-0.188997) | 0.389184 / 0.434364 (-0.045180) | 0.470810 / 0.540337 (-0.069527) | 0.562391 / 1.386936 (-0.824545) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4bb172c9772858c188f85ffc9a51f8cb1da292a0 \"CML watermark\")\n"
] | 2023-04-11T10:02:25 | 2023-04-27T16:39:04 | 2023-04-27T16:32:09 | CONTRIBUTOR | null | This PR allows sharding of merged iterable datasets.
Merged iterable datasets, created for instance with the `interleave_datasets` function, are composed of multiple sub-iterables, one for each dataset that has been merged.
With this PR, sharding a merged iterable dataset results in multiple merged shards, each composed of sharded sub-iterables, ensuring that there is no duplication of data.
As a result, it is now possible to set any number of workers in the dataloader, as long as it is lower than or equal to the lowest number of shards amongst the datasets. Before, it had to be set to 0.
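A minimal usage sketch, adapted from the examples in the discussion above (the shard counts and the `lhoestq/demo1` dataset are only illustrative):
```python
from torch.utils.data import DataLoader
from datasets import load_dataset, interleave_datasets

ds = load_dataset("lhoestq/demo1")
ds1 = ds["train"].to_iterable_dataset(num_shards=2)
ds2 = ds["test"].to_iterable_dataset(num_shards=2)

merged = interleave_datasets([ds1, ds2], stopping_strategy="all_exhausted")

# Before this PR num_workers had to be 0 for a merged iterable dataset;
# now any value up to the smallest shard count (here 2) can be used.
dataloader = DataLoader(merged, batch_size=1, num_workers=2, collate_fn=lambda x: x)
for batch in dataloader:
    print(batch)
```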
I previously talked about this issue on the forum [here](https://discuss.huggingface.co/t/interleaving-iterable-dataset-with-num-workers-0/35801) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5735/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5735/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5735",
"html_url": "https://github.com/huggingface/datasets/pull/5735",
"diff_url": "https://github.com/huggingface/datasets/pull/5735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5735.patch",
"merged_at": "2023-04-27T16:32:09"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5734/comments | https://api.github.com/repos/huggingface/datasets/issues/5734/events | https://github.com/huggingface/datasets/issues/5734 | 1,662,058,028 | I_kwDODunzps5jEP4s | 5,734 | Remove temporary pin of fsspec | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-11T09:04:17 | 2023-04-11T11:04:52 | 2023-04-11T11:04:52 | MEMBER | null | Once root cause is found and fixed, remove the temporary pin introduced by:
- #5731 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5734/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5733/comments | https://api.github.com/repos/huggingface/datasets/issues/5733/events | https://github.com/huggingface/datasets/pull/5733 | 1,662,039,191 | PR_kwDODunzps5OAA04 | 5,733 | Unpin fsspec | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006240 / 0.011353 (-0.005113) | 0.004392 / 0.011008 (-0.006616) | 0.097276 / 0.038508 (0.058768) | 0.027262 / 0.023109 (0.004153) | 0.303203 / 0.275898 (0.027305) | 0.331878 / 0.323480 (0.008398) | 0.004706 / 0.007986 (-0.003279) | 0.004428 / 0.004328 (0.000100) | 0.074666 / 0.004250 (0.070416) | 0.036154 / 0.037052 (-0.000899) | 0.302997 / 0.258489 (0.044508) | 0.340350 / 0.293841 (0.046509) | 0.031011 / 0.128546 (-0.097535) | 0.011616 / 0.075646 (-0.064031) | 0.323671 / 0.419271 (-0.095601) | 0.042062 / 0.043533 (-0.001471) | 0.311381 / 0.255139 (0.056242) | 0.324697 / 0.283200 (0.041498) | 0.084248 / 0.141683 (-0.057435) | 1.471651 / 1.452155 (0.019496) | 1.533414 / 1.492716 (0.040697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193555 / 0.018006 (0.175549) | 0.393452 / 0.000490 (0.392962) | 0.002348 / 0.000200 (0.002148) | 0.000071 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022523 / 0.037411 (-0.014889) | 0.096552 / 0.014526 (0.082026) | 0.101746 / 0.176557 (-0.074810) | 0.163145 / 0.737135 (-0.573990) | 0.106417 / 0.296338 (-0.189921) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448589 / 0.215209 (0.233380) | 4.467803 / 2.077655 (2.390148) | 2.178745 / 1.504120 (0.674625) | 1.983339 / 1.541195 (0.442145) | 2.056554 / 1.468490 
(0.588064) | 0.697571 / 4.584777 (-3.887206) | 3.363967 / 3.745712 (-0.381745) | 1.872526 / 5.269862 (-3.397336) | 1.258245 / 4.565676 (-3.307432) | 0.082954 / 0.424275 (-0.341321) | 0.012306 / 0.007607 (0.004699) | 0.545096 / 0.226044 (0.319052) | 5.468706 / 2.268929 (3.199777) | 2.645333 / 55.444624 (-52.799292) | 2.287659 / 6.876477 (-4.588818) | 2.346768 / 2.142072 (0.204696) | 0.803730 / 4.805227 (-4.001497) | 0.151037 / 6.500664 (-6.349627) | 0.066404 / 0.075469 (-0.009065) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.192982 / 1.841788 (-0.648806) | 13.631225 / 8.074308 (5.556917) | 13.830053 / 10.191392 (3.638661) | 0.141901 / 0.680424 (-0.538523) | 0.016500 / 0.534201 (-0.517701) | 0.373268 / 0.579283 (-0.206015) | 0.380123 / 0.434364 (-0.054241) | 0.430786 / 0.540337 (-0.109551) | 0.512669 / 1.386936 (-0.874267) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006161 / 0.011353 (-0.005192) | 0.004399 / 0.011008 (-0.006609) | 0.076210 / 0.038508 (0.037702) | 0.026791 / 0.023109 (0.003681) | 0.341523 / 0.275898 (0.065625) | 0.370400 / 0.323480 (0.046920) | 0.004495 / 0.007986 (-0.003491) | 0.003204 / 0.004328 (-0.001125) | 0.075444 / 0.004250 (0.071194) | 0.035914 / 0.037052 (-0.001138) | 0.343806 / 0.258489 (0.085317) | 0.384320 / 0.293841 (0.090479) | 0.031438 / 0.128546 (-0.097109) | 0.011253 / 0.075646 (-0.064393) | 0.085364 / 0.419271 (-0.333908) | 0.041407 / 0.043533 (-0.002126) | 0.338831 / 0.255139 (0.083692) | 0.364357 / 0.283200 (0.081158) | 0.087417 / 0.141683 (-0.054266) | 1.520624 / 1.452155 (0.068470) | 1.572432 / 1.492716 (0.079716) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232403 / 0.018006 (0.214396) | 0.388187 / 0.000490 (0.387698) | 0.001158 / 0.000200 (0.000958) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024596 / 0.037411 (-0.012816) | 0.101203 / 0.014526 (0.086677) | 0.105243 / 0.176557 (-0.071314) | 0.158215 / 0.737135 (-0.578920) | 0.110277 / 0.296338 (-0.186061) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435661 / 0.215209 (0.220452) | 4.350151 / 2.077655 (2.272496) | 2.072372 / 1.504120 (0.568252) | 1.870675 / 1.541195 (0.329480) | 1.910883 / 1.468490 (0.442393) | 0.697384 / 4.584777 (-3.887393) | 3.399377 / 3.745712 (-0.346335) | 2.685008 / 5.269862 (-2.584854) | 1.476843 / 4.565676 (-3.088834) | 0.083177 / 0.424275 (-0.341098) | 0.012413 / 0.007607 (0.004806) | 0.542543 / 0.226044 (0.316498) | 5.431422 / 2.268929 (3.162494) | 2.506419 / 55.444624 (-52.938206) | 2.166342 / 6.876477 (-4.710135) | 2.164421 / 2.142072 (0.022348) | 0.800609 / 4.805227 (-4.004618) | 0.150527 / 6.500664 (-6.350137) | 0.065780 / 0.075469 (-0.009689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293409 / 1.841788 (-0.548379) | 13.814898 / 8.074308 (5.740590) | 13.940416 / 10.191392 (3.749024) | 0.149377 / 0.680424 (-0.531047) | 0.016462 / 0.534201 (-0.517739) | 0.393748 / 0.579283 (-0.185535) | 0.384327 / 0.434364 (-0.050037) | 0.489900 / 0.540337 (-0.050437) | 0.574608 / 1.386936 (-0.812328) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f2607935c4e45c70c44fcb698db0363ca7ba83d4 \"CML watermark\")\n"
] | 2023-04-11T08:52:12 | 2023-04-11T11:11:45 | 2023-04-11T11:04:51 | MEMBER | null | In `fsspec==2023.4.0`, the default value for `clobber` when registering an implementation was changed from True to False. See:
- https://github.com/fsspec/filesystem_spec/pull/1237
This PR recovers previous behavior by passing clobber True when registering mock implementations.
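For illustration, a minimal sketch of the idea (the `MockFileSystem` class here is hypothetical):
```python
import fsspec
from fsspec.implementations.memory import MemoryFileSystem


class MockFileSystem(MemoryFileSystem):
    protocol = "mock"


# With fsspec>=2023.4.0, registering a protocol that is already present raises an
# error unless clobber=True is passed explicitly (the old default behavior).
fsspec.register_implementation("mock", MockFileSystem, clobber=True)
```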
This PR also removes the temporary pin introduced by:
- #5731
Fix #5734. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5733/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5733",
"html_url": "https://github.com/huggingface/datasets/pull/5733",
"diff_url": "https://github.com/huggingface/datasets/pull/5733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5733.patch",
"merged_at": "2023-04-11T11:04:51"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5732/comments | https://api.github.com/repos/huggingface/datasets/issues/5732/events | https://github.com/huggingface/datasets/issues/5732 | 1,662,020,571 | I_kwDODunzps5jEGvb | 5,732 | Enwik8 should support the standard split | {
"login": "lucaslingle",
"id": 10287371,
"node_id": "MDQ6VXNlcjEwMjg3Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaslingle",
"html_url": "https://github.com/lucaslingle",
"followers_url": "https://api.github.com/users/lucaslingle/followers",
"following_url": "https://api.github.com/users/lucaslingle/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions",
"organizations_url": "https://api.github.com/users/lucaslingle/orgs",
"repos_url": "https://api.github.com/users/lucaslingle/repos",
"events_url": "https://api.github.com/users/lucaslingle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaslingle/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "lucaslingle",
"id": 10287371,
"node_id": "MDQ6VXNlcjEwMjg3Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaslingle",
"html_url": "https://github.com/lucaslingle",
"followers_url": "https://api.github.com/users/lucaslingle/followers",
"following_url": "https://api.github.com/users/lucaslingle/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions",
"organizations_url": "https://api.github.com/users/lucaslingle/orgs",
"repos_url": "https://api.github.com/users/lucaslingle/repos",
"events_url": "https://api.github.com/users/lucaslingle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaslingle/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lucaslingle",
"id": 10287371,
"node_id": "MDQ6VXNlcjEwMjg3Mzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/10287371?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucaslingle",
"html_url": "https://github.com/lucaslingle",
"followers_url": "https://api.github.com/users/lucaslingle/followers",
"following_url": "https://api.github.com/users/lucaslingle/following{/other_user}",
"gists_url": "https://api.github.com/users/lucaslingle/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lucaslingle/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lucaslingle/subscriptions",
"organizations_url": "https://api.github.com/users/lucaslingle/orgs",
"repos_url": "https://api.github.com/users/lucaslingle/repos",
"events_url": "https://api.github.com/users/lucaslingle/events{/privacy}",
"received_events_url": "https://api.github.com/users/lucaslingle/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign",
"The Enwik8 pipeline is not present in this codebase, and is hosted elsewhere. I have opened a PR [there](https://huggingface.co/datasets/enwik8/discussions/4) instead. "
] | 2023-04-11T08:38:53 | 2023-04-11T09:28:17 | 2023-04-11T09:28:16 | NONE | null | ### Feature request
The HuggingFace Datasets library currently supports two BuilderConfigs for Enwik8. One config yields individual lines as examples, while the other config yields the entire dataset as a single example. Both support only a monolithic split: it is all grouped as "train".
The HuggingFace Datasets library should include a BuilderConfig for Enwik8 with train, validation, and test sets derived from the first 90 million bytes, next 5 million bytes, and last 5 million bytes, respectively. This Enwik8 split is standard practice in LM papers, as elaborated and motivated below.
### Motivation
Enwik8 is commonly split into 90M, 5M, 5M consecutive bytes. This is done in the Transformer-XL [codebase](https://github.com/kimiyoung/transformer-xl/blob/44781ed21dbaec88b280f74d9ae2877f52b492a5/getdata.sh#L34), and is additionally mentioned in the Sparse Transformers [paper](https://arxiv.org/abs/1904.10509) and the Compressive Transformers [paper](https://arxiv.org/abs/1911.05507). This split is pretty much universal among language modeling papers.
One may obtain the splits by manual wrangling, using the data yielded by the ```enwik8-raw``` BuilderConfig. However, this undermines the seamless functionality of the library: one must slice the single raw example, extract it into three tensors, and wrap each in a separate dataset.
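For illustration, a rough sketch of that manual wrangling (assuming the raw config exposes the whole corpus as a single `"text"` example; the canonical split is defined over bytes, so slicing the decoded string is only an approximation):
```python
from datasets import Dataset, DatasetDict, load_dataset

raw = load_dataset("enwik8", "enwik8-raw", split="train")
text = raw[0]["text"]  # the entire corpus as one example

# Standard 90M / 5M / 5M split used in the LM literature.
train_text = text[:90_000_000]
validation_text = text[90_000_000:95_000_000]
test_text = text[95_000_000:]

enwik8_splits = DatasetDict(
    {
        "train": Dataset.from_dict({"text": [train_text]}),
        "validation": Dataset.from_dict({"text": [validation_text]}),
        "test": Dataset.from_dict({"text": [test_text]}),
    }
)
```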
This becomes even more of a nuisance if using the current Enwik8 HuggingFace dataset as a TfdsDataSource with [SeqIO](https://github.com/google/seqio), where a pipeline of preprocessors is typically included in a SeqIO Task definition, to be applied immediately after loading the data with TFDS.
### Your contribution
Supporting this functionality in HuggingFace Datasets will only require an additional BuilderConfig for Enwik8 and a few additional lines of code. I will submit a PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5732/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5731/comments | https://api.github.com/repos/huggingface/datasets/issues/5731/events | https://github.com/huggingface/datasets/pull/5731 | 1,662,012,913 | PR_kwDODunzps5N_7Un | 5,731 | Temporarily pin fsspec | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009735 / 0.011353 (-0.001618) | 0.010410 / 0.011008 (-0.000598) | 0.134986 / 0.038508 (0.096478) | 0.038392 / 0.023109 (0.015283) | 0.414451 / 0.275898 (0.138553) | 0.447775 / 0.323480 (0.124295) | 0.007223 / 0.007986 (-0.000763) | 0.006373 / 0.004328 (0.002045) | 0.102631 / 0.004250 (0.098381) | 0.048516 / 0.037052 (0.011464) | 0.410179 / 0.258489 (0.151690) | 0.467773 / 0.293841 (0.173932) | 0.053163 / 0.128546 (-0.075384) | 0.019801 / 0.075646 (-0.055845) | 0.452708 / 0.419271 (0.033436) | 0.068691 / 0.043533 (0.025159) | 0.405482 / 0.255139 (0.150343) | 0.457669 / 0.283200 (0.174470) | 0.113464 / 0.141683 (-0.028219) | 1.918143 / 1.452155 (0.465988) | 2.033123 / 1.492716 (0.540407) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274564 / 0.018006 (0.256557) | 0.608855 / 0.000490 (0.608366) | 0.006266 / 0.000200 (0.006066) | 0.000105 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033704 / 0.037411 (-0.003708) | 0.130982 / 0.014526 (0.116456) | 0.143862 / 0.176557 (-0.032694) | 0.212622 / 0.737135 (-0.524513) | 0.148899 / 0.296338 (-0.147439) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.670968 / 0.215209 (0.455759) | 6.602911 / 2.077655 (4.525256) | 2.644290 / 1.504120 (1.140171) | 2.268593 / 1.541195 (0.727399) | 2.325393 / 1.468490 
(0.856903) | 1.388156 / 4.584777 (-3.196621) | 5.958569 / 3.745712 (2.212857) | 3.310756 / 5.269862 (-1.959106) | 2.390953 / 4.565676 (-2.174724) | 0.147416 / 0.424275 (-0.276859) | 0.015201 / 0.007607 (0.007594) | 0.794109 / 0.226044 (0.568064) | 7.984855 / 2.268929 (5.715926) | 3.382275 / 55.444624 (-52.062349) | 2.676102 / 6.876477 (-4.200375) | 2.846743 / 2.142072 (0.704671) | 1.467523 / 4.805227 (-3.337704) | 0.283184 / 6.500664 (-6.217480) | 0.088655 / 0.075469 (0.013186) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632765 / 1.841788 (-0.209022) | 19.102473 / 8.074308 (11.028165) | 25.632535 / 10.191392 (15.441143) | 0.255628 / 0.680424 (-0.424795) | 0.034655 / 0.534201 (-0.499546) | 0.564593 / 0.579283 (-0.014690) | 0.668339 / 0.434364 (0.233975) | 0.648414 / 0.540337 (0.108076) | 0.766735 / 1.386936 (-0.620201) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009658 / 0.011353 (-0.001695) | 0.006690 / 0.011008 (-0.004318) | 0.099151 / 0.038508 (0.060643) | 0.037092 / 0.023109 (0.013983) | 0.470354 / 0.275898 (0.194456) | 0.525863 / 0.323480 (0.202383) | 0.007593 / 0.007986 (-0.000393) | 0.006637 / 0.004328 (0.002308) | 0.098782 / 0.004250 (0.094532) | 0.058524 / 0.037052 (0.021471) | 0.502569 / 0.258489 (0.244080) | 0.526410 / 0.293841 (0.232569) | 0.059486 / 0.128546 (-0.069060) | 0.019742 / 0.075646 (-0.055904) | 0.119715 / 0.419271 (-0.299556) | 0.065269 / 0.043533 (0.021736) | 0.483327 / 0.255139 (0.228188) | 0.506148 / 0.283200 (0.222948) | 0.123178 / 0.141683 (-0.018505) | 1.916624 / 1.452155 (0.464470) | 2.051410 / 1.492716 (0.558694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.286481 / 0.018006 (0.268475) | 0.597300 / 0.000490 (0.596810) | 0.008906 / 0.000200 (0.008706) | 0.000128 / 0.000054 (0.000074) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031406 / 0.037411 (-0.006005) | 0.146748 / 0.014526 (0.132222) | 0.152898 / 0.176557 (-0.023658) | 0.212535 / 0.737135 (-0.524600) | 0.155577 / 0.296338 (-0.140761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.660989 / 0.215209 (0.445780) | 6.688530 / 2.077655 (4.610875) | 3.039278 / 1.504120 (1.535159) | 2.660357 / 1.541195 (1.119162) | 2.696912 / 1.468490 (1.228422) | 1.259760 / 4.584777 (-3.325017) | 5.922452 / 3.745712 (2.176740) | 5.304200 / 5.269862 (0.034338) | 2.823928 / 4.565676 (-1.741748) | 0.148118 / 0.424275 (-0.276157) | 0.015575 / 0.007607 (0.007968) | 0.794404 / 0.226044 (0.568360) | 8.233651 / 2.268929 (5.964722) | 3.777482 / 55.444624 (-51.667142) | 3.064924 / 6.876477 (-3.811552) | 3.117803 / 2.142072 (0.975731) | 1.479559 / 4.805227 (-3.325668) | 0.254070 / 6.500664 (-6.246594) | 0.086806 / 0.075469 (0.011337) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.735515 / 1.841788 (-0.106273) | 18.934157 / 8.074308 (10.859848) | 22.645248 / 10.191392 (12.453856) | 0.227073 / 0.680424 (-0.453351) | 0.030650 / 0.534201 (-0.503551) | 0.594619 / 0.579283 (0.015336) | 0.653304 / 0.434364 (0.218940) | 0.707484 / 0.540337 (0.167147) | 0.823327 / 1.386936 (-0.563610) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#273392966e434286f4f5ba2ad596730bff11056d \"CML watermark\")\n"
] | 2023-04-11T08:33:15 | 2023-04-11T08:57:45 | 2023-04-11T08:47:55 | MEMBER | null | Fix #5730. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5731/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5731",
"html_url": "https://github.com/huggingface/datasets/pull/5731",
"diff_url": "https://github.com/huggingface/datasets/pull/5731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5731.patch",
"merged_at": "2023-04-11T08:47:55"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5730/comments | https://api.github.com/repos/huggingface/datasets/issues/5730/events | https://github.com/huggingface/datasets/issues/5730 | 1,662,007,926 | I_kwDODunzps5jEDp2 | 5,730 | CI is broken: ValueError: Name (mock) already in the registry and clobber is False | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-11T08:29:46 | 2023-04-11T08:47:56 | 2023-04-11T08:47:56 | MEMBER | null | CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/4665326892/jobs/8258580948
```
=========================== short test summary info ============================
ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_dataset_dict.py::test_dummy_datasetdict_serialize_fs - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_file_utils.py::test_get_from_cache_fsspec - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_filesystem.py::test_is_remote_filesystem - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xexists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[tmp_path-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xlistdir[mock://top_level/second_level/date=2019-10-01-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[tmp_path/file.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://top_level-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisdir[mock://dir_that_doesnt_exist-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xisfile[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[tmp_path/file.txt-100] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://-0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xgetsize[mock://top_level/second_level/date=2019-10-01/a.parquet-100] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[tmp_path/*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xglob[mock://top_level/second_level/date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xwalk[tmp_path-expected_outputs0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::test_xwalk[mock://top_level/second_level-expected_outputs1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file.txt-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[tmp_path/file_that_doesnt_exist.txt-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/a.parquet-True] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_exists[mock://top_level/second_level/date=2019-10-01/file_that_doesnt_exist.parquet-False] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-*-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://-top_*-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_glob[mock://top_level/second_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[tmp_path-*.txt-expected_paths0] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]-expected_paths1] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]-expected_paths2] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://-date=2019-10-0[1-4]/*-expected_paths3] - ValueError: Name (mock) already in the registry and clobber is False
ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - ValueError: Name (mock) already in the registry and clobber is False
===== 2105 passed, 18 skipped, 38 warnings, 46 errors in 236.22s (0:03:56) =====
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5730/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5729/comments | https://api.github.com/repos/huggingface/datasets/issues/5729/events | https://github.com/huggingface/datasets/pull/5729 | 1,661,929,923 | PR_kwDODunzps5N_pvI | 5,729 | Fix nondeterministic sharded data split order | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006954 / 0.011353 (-0.004399) | 0.004947 / 0.011008 (-0.006061) | 0.086564 / 0.038508 (0.048056) | 0.031167 / 0.023109 (0.008058) | 0.262285 / 0.275898 (-0.013613) | 0.295753 / 0.323480 (-0.027727) | 0.005389 / 0.007986 (-0.002596) | 0.004130 / 0.004328 (-0.000198) | 0.065127 / 0.004250 (0.060877) | 0.042511 / 0.037052 (0.005458) | 0.263497 / 0.258489 (0.005008) | 0.307456 / 0.293841 (0.013615) | 0.031338 / 0.128546 (-0.097209) | 0.011023 / 0.075646 (-0.064623) | 0.295625 / 0.419271 (-0.123647) | 0.045813 / 0.043533 (0.002280) | 0.259369 / 0.255139 (0.004230) | 0.279325 / 0.283200 (-0.003875) | 0.099748 / 0.141683 (-0.041934) | 1.252572 / 1.452155 (-0.199583) | 1.347069 / 1.492716 (-0.145647) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249726 / 0.018006 (0.231720) | 0.556882 / 0.000490 (0.556392) | 0.008237 / 0.000200 (0.008037) | 0.000294 / 0.000054 (0.000239) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026879 / 0.037411 (-0.010533) | 0.105141 / 0.014526 (0.090615) | 0.115473 / 0.176557 (-0.061084) | 0.172989 / 0.737135 (-0.564147) | 0.120433 / 0.296338 (-0.175906) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400022 / 0.215209 (0.184812) | 3.965402 / 2.077655 (1.887747) | 1.805257 / 1.504120 (0.301138) | 1.610136 / 1.541195 (0.068941) | 1.661162 / 
1.468490 (0.192672) | 0.695311 / 4.584777 (-3.889466) | 3.753757 / 3.745712 (0.008045) | 2.060609 / 5.269862 (-3.209253) | 1.333251 / 4.565676 (-3.232426) | 0.085790 / 0.424275 (-0.338485) | 0.012256 / 0.007607 (0.004649) | 0.502133 / 0.226044 (0.276088) | 5.040979 / 2.268929 (2.772051) | 2.310919 / 55.444624 (-53.133705) | 2.010534 / 6.876477 (-4.865943) | 2.132961 / 2.142072 (-0.009111) | 0.837636 / 4.805227 (-3.967592) | 0.169838 / 6.500664 (-6.330826) | 0.065003 / 0.075469 (-0.010466) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218674 / 1.841788 (-0.623114) | 14.696076 / 8.074308 (6.621768) | 14.559492 / 10.191392 (4.368100) | 0.167761 / 0.680424 (-0.512663) | 0.017747 / 0.534201 (-0.516454) | 0.421624 / 0.579283 (-0.157659) | 0.414086 / 0.434364 (-0.020278) | 0.501398 / 0.540337 (-0.038940) | 0.596099 / 1.386936 (-0.790837) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007230 / 0.011353 (-0.004123) | 0.005345 / 0.011008 (-0.005664) | 0.073739 / 0.038508 (0.035231) | 0.033440 / 0.023109 (0.010330) | 0.339790 / 0.275898 (0.063892) | 0.367857 / 0.323480 (0.044377) | 0.005927 / 0.007986 (-0.002058) | 0.004279 / 0.004328 (-0.000049) | 0.074247 / 0.004250 (0.069996) | 0.048971 / 0.037052 (0.011918) | 0.340235 / 0.258489 (0.081746) | 0.380521 / 0.293841 (0.086680) | 0.035322 / 0.128546 (-0.093225) | 0.012416 / 0.075646 (-0.063230) | 0.086060 / 0.419271 (-0.333212) | 0.049331 / 0.043533 (0.005799) | 0.342871 / 0.255139 (0.087732) | 0.355673 / 0.283200 (0.072473) | 0.111976 / 0.141683 (-0.029707) | 1.462530 / 1.452155 (0.010375) | 1.550336 / 1.492716 (0.057620) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266560 / 0.018006 (0.248554) | 0.550886 / 0.000490 (0.550396) | 0.001069 / 0.000200 (0.000869) | 0.000085 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028701 / 0.037411 (-0.008711) | 0.110535 / 0.014526 (0.096010) | 0.122846 / 0.176557 (-0.053711) | 0.176395 / 0.737135 (-0.560740) | 0.128653 / 0.296338 (-0.167685) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431693 / 0.215209 (0.216484) | 4.283691 / 2.077655 (2.206036) | 2.013967 / 1.504120 (0.509847) | 1.823914 / 1.541195 (0.282719) | 1.872055 / 1.468490 (0.403565) | 0.703318 / 4.584777 (-3.881459) | 3.783412 / 3.745712 (0.037699) | 2.950147 / 5.269862 (-2.319715) | 1.826159 / 4.565676 (-2.739518) | 0.086897 / 0.424275 (-0.337379) | 0.012512 / 0.007607 (0.004905) | 0.526730 / 0.226044 (0.300685) | 5.263871 / 2.268929 (2.994943) | 2.552163 / 55.444624 (-52.892462) | 2.276216 / 6.876477 (-4.600261) | 2.419934 / 2.142072 (0.277862) | 0.848235 / 4.805227 (-3.956993) | 0.170405 / 6.500664 (-6.330259) | 0.064979 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276780 / 1.841788 (-0.565008) | 15.100829 / 8.074308 (7.026521) | 15.117531 / 10.191392 (4.926139) | 0.147129 / 0.680424 (-0.533295) | 0.017806 / 0.534201 (-0.516395) | 0.422975 / 0.579283 (-0.156308) | 0.430286 / 0.434364 (-0.004078) | 0.501405 / 0.540337 (-0.038932) | 0.596810 / 1.386936 (-0.790126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f6ee2e6603fe81638256d37a6aa7ad0400e31a83 \"CML watermark\")\n"
] | 2023-04-11T07:34:20 | 2023-04-26T15:12:25 | 2023-04-26T15:05:12 | MEMBER | null | This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements.
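In broad strokes (illustrative only, with made-up variable names; the actual patch lives in the data-files resolution code), the change amounts to iterating the collected split names in a fixed order instead of iterating the `set` directly:

```python
# Split names collected from sharded file names end up in a set,
# whose iteration order is not guaranteed to be the same from run to run.
splits_from_shards = {"train", "random"}

# Any fixed ordering makes the resulting split list deterministic, e.g. sorting:
split_names = sorted(splits_from_shards)

# or deduplicating while preserving first-seen order:
ordered_split_names = list(dict.fromkeys(["train", "random", "train"]))
```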
Fix #5728. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5729/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5729",
"html_url": "https://github.com/huggingface/datasets/pull/5729",
"diff_url": "https://github.com/huggingface/datasets/pull/5729.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5729.patch",
"merged_at": "2023-04-26T15:05:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5728/comments | https://api.github.com/repos/huggingface/datasets/issues/5728/events | https://github.com/huggingface/datasets/issues/5728 | 1,661,925,932 | I_kwDODunzps5jDvos | 5,728 | The order of data split names is nondeterministic | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-11T07:31:25 | 2023-04-26T15:05:13 | 2023-04-26T15:05:13 | MEMBER | null | After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718
```
FAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']
At index 0 diff: 'random' != 'train'
Full diff:
- ['train', 'random']
+ ['random', 'train']
```
I have checked locally and found out that the data split order is nondeterministic.
This is caused by the use of `set` for sharded splits. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5728/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5727/comments | https://api.github.com/repos/huggingface/datasets/issues/5727/events | https://github.com/huggingface/datasets/issues/5727 | 1,661,536,363 | I_kwDODunzps5jCQhr | 5,727 | load_dataset fails with FileNotFound error on Windows | {
"login": "joelkowalewski",
"id": 122648572,
"node_id": "U_kgDOB093_A",
"avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joelkowalewski",
"html_url": "https://github.com/joelkowalewski",
"followers_url": "https://api.github.com/users/joelkowalewski/followers",
"following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}",
"gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions",
"organizations_url": "https://api.github.com/users/joelkowalewski/orgs",
"repos_url": "https://api.github.com/users/joelkowalewski/repos",
"events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}",
"received_events_url": "https://api.github.com/users/joelkowalewski/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! Can you please paste the entire error stack trace, not only the last few lines?",
"`----> 1 dataset = datasets.load_dataset(\"glue\", \"ax\")\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1767, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1762 verification_mode = VerificationMode(\r\n 1763 (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS\r\n 1764 )\r\n 1766 # Create a dataset builder\r\n-> 1767 builder_instance = load_dataset_builder(\r\n 1768 path=path,\r\n 1769 name=name,\r\n 1770 data_dir=data_dir,\r\n 1771 data_files=data_files,\r\n 1772 cache_dir=cache_dir,\r\n 1773 features=features,\r\n 1774 download_config=download_config,\r\n 1775 download_mode=download_mode,\r\n 1776 revision=revision,\r\n 1777 use_auth_token=use_auth_token,\r\n 1778 storage_options=storage_options,\r\n 1779 **config_kwargs,\r\n 1780 )\r\n 1782 # Return iterable dataset in case of streaming\r\n 1783 if streaming:\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1498, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, storage_options, **config_kwargs)\r\n 1496 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1497 download_config.use_auth_token = use_auth_token\r\n-> 1498 dataset_module = dataset_module_factory(\r\n 1499 path,\r\n 1500 revision=revision,\r\n 1501 download_config=download_config,\r\n 1502 download_mode=download_mode,\r\n 1503 data_dir=data_dir,\r\n 1504 data_files=data_files,\r\n 1505 )\r\n 1507 # Get dataset builder class from the processing script\r\n 1508 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~\\anaconda3\\envs\\huggingface\\Lib\\site-packages\\datasets\\load.py:1211, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1209 raise e1 from None\r\n 1210 if isinstance(e1, FileNotFoundError):\r\n-> 1211 raise FileNotFoundError(\r\n 1212 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. \"\r\n 1213 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1214 ) from None\r\n 1215 raise e1 from None\r\n 1216 else:`",
"Okay, this is the issue:\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: \r\n'C:\\\\Users\\\\...\\\\.cache\\\\huggingface'\r\n``` \r\n\r\nI don't remember seeing this error before.\r\n\r\nI guess it could happen in a multi-process environment if one of the processes deletes the `datasets` cache as the other one is loading a dataset (with `load_dataset`), so make sure that's not the case. Also, you can disable the Windows max path length limit (if enabled), but this is most likely not the problem.",
"Closing due to inactivity."
] | 2023-04-10T23:21:12 | 2023-07-21T14:08:20 | 2023-07-21T14:08:19 | NONE | null | ### Describe the bug
Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps:
(1) create conda environment
(2) activate environment
(3) install with: `conda install -c huggingface -c conda-forge datasets`
Then
```
from datasets import load_dataset
# this or any other example from the website fails with the FileNotFoundError
glue = load_dataset("glue", "ax")
```
**Below I have pasted the error omitting the full path**:
```
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified:
'C:\\Users\\...\\.cache\\huggingface'
```
### Steps to reproduce the bug
On Windows 10
(1) create a minimal conda environment (with just Python)
(2) activate environment
(3) install datasets with: `conda install -c huggingface -c conda-forge datasets`
(4) import load_dataset and follow example usage from any dataset card.
### Expected behavior
The expected behavior is to load the file into the Python session running on my machine without error.
### Environment info
```
# Name Version Build Channel
aiohttp 3.8.4 py311ha68e1ae_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge
async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge
attrs 22.2.0 pyh71513ae_0 conda-forge
aws-c-auth 0.6.26 h1262f0c_1 conda-forge
aws-c-cal 0.5.21 h7cda486_2 conda-forge
aws-c-common 0.8.14 hcfcfb64_0 conda-forge
aws-c-compression 0.2.16 h8a79959_5 conda-forge
aws-c-event-stream 0.2.20 h5f78564_4 conda-forge
aws-c-http 0.7.6 h2545be9_0 conda-forge
aws-c-io 0.13.19 h0d2781e_3 conda-forge
aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge
aws-c-s3 0.2.7 h8113e7b_1 conda-forge
aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge
aws-checksums 0.1.14 h8a79959_5 conda-forge
aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge
aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge
brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge
bzip2 1.0.8 h8ffe710_4 conda-forge
c-ares 1.19.0 h2bbff1b_0
ca-certificates 2023.01.10 haa95532_0
certifi 2022.12.7 pyhd8ed1ab_0 conda-forge
cffi 1.15.1 py311h7d9ee11_3 conda-forge
charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge
colorama 0.4.6 pyhd8ed1ab_0 conda-forge
cryptography 40.0.1 py311h28e9c30_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
datasets 2.11.0 py_0 huggingface
dill 0.3.6 pyhd8ed1ab_1 conda-forge
filelock 3.11.0 pyhd8ed1ab_0 conda-forge
frozenlist 1.3.3 py311ha68e1ae_0 conda-forge
fsspec 2023.4.0 pyh1a96a4e_0 conda-forge
gflags 2.2.2 ha925a31_1004 conda-forge
glog 0.6.0 h4797de2_0 conda-forge
huggingface_hub 0.13.4 py_0 huggingface
idna 3.4 pyhd8ed1ab_0 conda-forge
importlib-metadata 6.3.0 pyha770c72_0 conda-forge
importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge
intel-openmp 2023.0.0 h57928b3_25922 conda-forge
krb5 1.20.1 heb0366b_0 conda-forge
libabseil 20230125.0 cxx17_h63175ca_1 conda-forge
libarrow 11.0.0 h04c43f8_13_cpu conda-forge
libblas 3.9.0 16_win64_mkl conda-forge
libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge
libbrotlidec 1.0.9 hcfcfb64_8 conda-forge
libbrotlienc 1.0.9 hcfcfb64_8 conda-forge
libcblas 3.9.0 16_win64_mkl conda-forge
libcrc32c 1.1.2 h0e60522_0 conda-forge
libcurl 7.88.1 h68f0423_1 conda-forge
libexpat 2.5.0 h63175ca_1 conda-forge
libffi 3.4.2 h8ffe710_5 conda-forge
libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge
libgrpc 1.52.1 h32da247_1 conda-forge
libhwloc 2.9.0 h51c2c0f_0 conda-forge
libiconv 1.17 h8ffe710_0 conda-forge
liblapack 3.9.0 16_win64_mkl conda-forge
libprotobuf 3.21.12 h12be248_0 conda-forge
libsqlite 3.40.0 hcfcfb64_0 conda-forge
libssh2 1.10.0 h9a1e1f7_3 conda-forge
libthrift 0.18.1 h9ce19ad_0 conda-forge
libutf8proc 2.8.0 h82a8f57_0 conda-forge
libxml2 2.10.3 hc3477c8_6 conda-forge
libzlib 1.2.13 hcfcfb64_4 conda-forge
lz4-c 1.9.4 hcfcfb64_0 conda-forge
mkl 2022.1.0 h6a75c08_874 conda-forge
multidict 6.0.4 py311ha68e1ae_0 conda-forge
multiprocess 0.70.14 py311ha68e1ae_3 conda-forge
numpy 1.24.2 py311h0b4df5a_0 conda-forge
openssl 3.1.0 hcfcfb64_0 conda-forge
orc 1.8.3 hada7b9e_0 conda-forge
packaging 23.0 pyhd8ed1ab_0 conda-forge
pandas 2.0.0 py311hf63dbb6_0 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pip 23.0.1 pyhd8ed1ab_0 conda-forge
pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge
pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge
pycparser 2.21 pyhd8ed1ab_0 conda-forge
pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge
pysocks 1.7.1 pyh0701188_6 conda-forge
python 3.11.3 h2628c8c_0_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge
python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge
python_abi 3.11 3_cp311 conda-forge
pytz 2023.3 pyhd8ed1ab_0 conda-forge
pyyaml 6.0 py311ha68e1ae_5 conda-forge
re2 2023.02.02 h63175ca_0 conda-forge
requests 2.28.2 pyhd8ed1ab_1 conda-forge
setuptools 67.6.1 pyhd8ed1ab_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
snappy 1.1.10 hfb803bf_0 conda-forge
tbb 2021.8.0 h91493d7_0 conda-forge
tk 8.6.12 h8ffe710_0 conda-forge
tqdm 4.65.0 pyhd8ed1ab_1 conda-forge
typing-extensions 4.5.0 hd8ed1ab_0 conda-forge
typing_extensions 4.5.0 pyha770c72_0 conda-forge
tzdata 2023c h71feb2d_0 conda-forge
ucrt 10.0.22621.0 h57928b3_0 conda-forge
urllib3 1.26.15 pyhd8ed1ab_0 conda-forge
vc 14.3 hb6edc58_10 conda-forge
vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge
wheel 0.40.0 pyhd8ed1ab_0 conda-forge
win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge
xxhash 0.8.1 hcfcfb64_0 conda-forge
xz 5.2.10 h8cc25b3_1
yaml 0.2.5 h8ffe710_2 conda-forge
yarl 1.8.2 py311ha68e1ae_0 conda-forge
zipp 3.15.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.13 hcfcfb64_4 conda-forge
zstd 1.5.4 hd43e919_0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5727/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5726/comments | https://api.github.com/repos/huggingface/datasets/issues/5726/events | https://github.com/huggingface/datasets/issues/5726 | 1,660,944,807 | I_kwDODunzps5jAAGn | 5,726 | Fallback JSON Dataset loading does not load all values when features specified manually | {
"login": "myluki2000",
"id": 3610788,
"node_id": "MDQ6VXNlcjM2MTA3ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/myluki2000",
"html_url": "https://github.com/myluki2000",
"followers_url": "https://api.github.com/users/myluki2000/followers",
"following_url": "https://api.github.com/users/myluki2000/following{/other_user}",
"gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions",
"organizations_url": "https://api.github.com/users/myluki2000/orgs",
"repos_url": "https://api.github.com/users/myluki2000/repos",
"events_url": "https://api.github.com/users/myluki2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/myluki2000/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @myluki2000.\r\n\r\nI am working on a fix."
] | 2023-04-10T15:22:14 | 2023-04-21T06:35:28 | 2023-04-21T06:35:28 | NONE | null | ### Describe the bug
The fallback JSON dataset loader located here:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153
does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not the expected behavior.
To fix this you'd have to change this line:
https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140
to pass a schema to pyarrow that has the same structure as the `features` argument passed to the `load_dataset()` method.
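For illustration, here is a minimal sketch of the idea (a hypothetical illustration, not the actual patch): with an explicit pyarrow schema derived from `features`, each row keeps its own value and only truly missing keys become null:
```python
import pyarrow as pa
from datasets import Features, Value

features = Features({"instruction": Value("string"), "input": Value("string"), "output": Value("string")})
records = [
    {"instruction": "Do stuff", "output": "Answer stuff"},
    {"instruction": "Do stuff2", "input": "Additional Input2", "output": "Answer stuff2"},
]
# With an explicit schema, absent keys become nulls per row instead of nulling the whole column
table = pa.Table.from_pylist(records, schema=features.arrow_schema)
print(table.to_pydict())
```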
### Steps to reproduce the bug
Consider a dataset JSON like this:
```
[
{
"instruction": "Do stuff",
"output": "Answer stuff"
},
{
"instruction": "Do stuff2",
"input": "Additional Input2",
"output": "Answer stuff2"
}
]
```
Using this code to load the dataset:
```
from datasets import load_dataset, Features, Value
features = {
"instruction": Value("string"),
"input": Value("string"),
"output": Value("string")
}
features = Features(features)
ds = load_dataset("json", data_files="./ds.json", features=features)
for row in ds["train"]:
print(row)
```
we get a dataset that looks like this:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | None | "Answer Stuff2" |
### Expected behavior
The input column should contain values other than None for dataset entries that have the "input" attribute set:
| **Instruction** | **Input** | **Output** |
|-----------------|--------------------|-----------------|
| "Do stuff" | None | "Answer Stuff" |
| "Do stuff2" | "Additional Input2" | "Answer Stuff2" |
### Environment info
Python 3.10.10
Datasets 2.11.0
Windows 10 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5726/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5725/comments | https://api.github.com/repos/huggingface/datasets/issues/5725/events | https://github.com/huggingface/datasets/issues/5725 | 1,660,455,202 | I_kwDODunzps5i-Iki | 5,725 | How to limit the number of examples in dataset, for testing? | {
"login": "ndvbd",
"id": 845175,
"node_id": "MDQ6VXNlcjg0NTE3NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ndvbd",
"html_url": "https://github.com/ndvbd",
"followers_url": "https://api.github.com/users/ndvbd/followers",
"following_url": "https://api.github.com/users/ndvbd/following{/other_user}",
"gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions",
"organizations_url": "https://api.github.com/users/ndvbd/orgs",
"repos_url": "https://api.github.com/users/ndvbd/repos",
"events_url": "https://api.github.com/users/ndvbd/events{/privacy}",
"received_events_url": "https://api.github.com/users/ndvbd/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```",
"@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`",
"I misread the format in which the dataset is stored - the `nrows` parameter works for CSV, but not JSON.\r\n\r\nThis means the only option is first to create a DataFrame and then convert it to a Dataset object:\r\n```python\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndf = pd.read_json(data_path, lines=True, nrows=10)\r\nds = Dataset.from_pandas(df)\r\n```"
] | 2023-04-10T08:41:43 | 2023-04-21T06:16:24 | 2023-04-21T06:16:24 | NONE | null | ### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter to limit the number of loaded examples to 10 for development purposes, but I can't find such a simple parameter.
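What I'm using as a workaround in the meantime (a sketch — split slicing, `.select()`, and streaming `.take()` are existing `datasets` features; `ds.json` is just a placeholder file name):
```python
from datasets import load_dataset

# Option 1: slice the split when loading
small = load_dataset("json", data_files="ds.json", split="train[:10]")

# Option 2: load everything, then keep the first 10 rows
ds = load_dataset("json", data_files="ds.json")
small = ds["train"].select(range(10))

# Option 3: stream and take 10 examples lazily
streamed = load_dataset("json", data_files="ds.json", split="train", streaming=True)
first_10 = list(streamed.take(10))
```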
### Steps to reproduce the bug
In the description.
### Expected behavior
To be able to limit the number of examples
### Environment info
Nothing special | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5725/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5724/comments | https://api.github.com/repos/huggingface/datasets/issues/5724/events | https://github.com/huggingface/datasets/issues/5724 | 1,659,938,135 | I_kwDODunzps5i8KVX | 5,724 | Error after shuffling streaming IterableDatasets with downloaded dataset | {
"login": "szxiangjn",
"id": 41177966,
"node_id": "MDQ6VXNlcjQxMTc3OTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/szxiangjn",
"html_url": "https://github.com/szxiangjn",
"followers_url": "https://api.github.com/users/szxiangjn/followers",
"following_url": "https://api.github.com/users/szxiangjn/following{/other_user}",
"gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions",
"organizations_url": "https://api.github.com/users/szxiangjn/orgs",
"repos_url": "https://api.github.com/users/szxiangjn/repos",
"events_url": "https://api.github.com/users/szxiangjn/events{/privacy}",
"received_events_url": "https://api.github.com/users/szxiangjn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\r\n\r\nPS: https://github.com/huggingface/datasets/pull/5331, once merged, will allow us to define C4's configs in its README, making downloading it much more user-friendly."
] | 2023-04-09T16:58:44 | 2023-04-20T20:37:30 | 2023-04-20T20:37:30 | NONE | null | ### Describe the bug
I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything went normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when it is used with `next(iter(dataset))`:
```
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables
batch = f.read(self.config.chunksize)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries
out = read(*args, **kwargs)
File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read
return self._buffer.read(size)
File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read
if not self._read_gzip_header():
File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b've')
```
I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading from local files, causes no problems even after shuffling.
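I have not verified that this fully sidesteps the bug, but as a workaround I'm trying to load the local shards through the packaged `json` builder instead of the `c4` script (a sketch; the glob assumes the standard `c4-train.*.json.gz` shard naming and needs to be adapted to the actual paths):
```python
import datasets

data_files = {"train": "/path/to/your/data/dir/en/c4-train.*.json.gz"}
dataset = datasets.load_dataset("json", data_files=data_files, split="train", streaming=True)
dataset = dataset.shuffle(buffer_size=10_000, seed=42)
next(iter(dataset))
```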
### Steps to reproduce the bug
1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4
2.
```
import datasets
dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train')
dataset = dataset.shuffle(buffer_size=10_000, seed=42)
next(iter(dataset))
```
### Expected behavior
`next(iter(dataset))` should give me a sample from the dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5724/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5722/comments | https://api.github.com/repos/huggingface/datasets/issues/5722/events | https://github.com/huggingface/datasets/issues/5722 | 1,659,837,510 | I_kwDODunzps5i7xxG | 5,722 | Distributed Training Error on Customized Dataset | {
"login": "wlhgtc",
"id": 16603773,
"node_id": "MDQ6VXNlcjE2NjAzNzcz",
"avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wlhgtc",
"html_url": "https://github.com/wlhgtc",
"followers_url": "https://api.github.com/users/wlhgtc/followers",
"following_url": "https://api.github.com/users/wlhgtc/following{/other_user}",
"gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions",
"organizations_url": "https://api.github.com/users/wlhgtc/orgs",
"repos_url": "https://api.github.com/users/wlhgtc/repos",
"events_url": "https://api.github.com/users/wlhgtc/events{/privacy}",
"received_events_url": "https://api.github.com/users/wlhgtc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node."
] | 2023-04-09T11:04:59 | 2023-07-24T14:50:46 | 2023-07-24T14:50:46 | NONE | null | Hi guys, recently I tried to use `datasets` to train a dual encoder.
I finished my own dataset script according to the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script).
Here is my code:
```python
class RetrivalDataset(datasets.GeneratorBasedBuilder):
"""CrossEncoder dataset."""
BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")]
# DEFAULT_CONFIG_NAME = "DuReader"
def _info(self):
return datasets.DatasetInfo(
features=datasets.Features(
{
"id": datasets.Value("string"),
"question": datasets.Value("string"),
"documents": Sequence(datasets.Value("string")),
}
),
supervised_keys=None,
)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
train_file = self.config.data_dir + self.config.train_file
valid_file = self.config.data_dir + self.config.valid_file
logger.info(f"Training on {self.config.train_file}")
logger.info(f"Evaluating on {self.config.valid_file}")
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file}
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file}
),
]
def _generate_examples(self, file_path):
with jsonlines.open(file_path, "r") as f:
for record in f:
label = record["label"]
question = record["question"]
# dual encoder
all_documents = record["all_documents"]
positive_paragraph = all_documents.pop(label)
all_documents = [positive_paragraph] + all_documents
u_id = "{}_#_{}".format(
md5_hash(question + "".join(all_documents)),
"".join(random.sample(string.ascii_letters + string.digits, 7)),
)
item = {
"question": question,
"documents": all_documents,
"id": u_id,
}
yield u_id, item
```
It works well on a single GPU, but I got the following errors when using DDP:
```python
Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED)
```
Here is my training script, run on a machine with two A100s:
```bash
export TORCH_DISTRIBUTED_DEBUG=DETAIL
export TORCH_SHOW_CPP_STACKTRACES=1
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV
nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1&
```
I am not sure whether this error is related to my dataset code when using DDP. I also noticed the PR (#5369), but I don't know when and where I should use the function (`split_dataset_by_node`).
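My current guess, based on the PR description, is that it only applies when the dataset is loaded as a streaming/iterable dataset — a sketch (an unverified assumption on my side; the env variables are the ones `torchrun` sets):
```python
import os
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Only meaningful for streaming / IterableDataset; for map-style datasets the
# per-rank split is normally handled by the DataLoader's DistributedSampler.
ds = load_dataset("json", data_files="train.jsonl", split="train", streaming=True)
ds = split_dataset_by_node(
    ds,
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
)
```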
@lhoestq hope you could help me?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5722/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5721/comments | https://api.github.com/repos/huggingface/datasets/issues/5721/events | https://github.com/huggingface/datasets/issues/5721 | 1,659,680,682 | I_kwDODunzps5i7Leq | 5,721 | Calling datasets.load_dataset("text" ...) results in a wrong split. | {
"login": "cyrilzakka",
"id": 1841186,
"node_id": "MDQ6VXNlcjE4NDExODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyrilzakka",
"html_url": "https://github.com/cyrilzakka",
"followers_url": "https://api.github.com/users/cyrilzakka/followers",
"following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}",
"gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions",
"organizations_url": "https://api.github.com/users/cyrilzakka/orgs",
"repos_url": "https://api.github.com/users/cyrilzakka/repos",
"events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyrilzakka/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2023-04-08T23:55:12 | 2023-04-08T23:55:12 | null | NONE | null | ### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, the test split does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code:
```
folder_path = "/home/cyril/Downloads/llama_dataset"
data = datasets.load_dataset("text", data_dir=folder_path)
data.save_to_disk("/home/cyril/Downloads/data.hf")
data = datasets.load_from_disk("/home/cyril/Downloads/data.hf")
print(data)
```
Results in the following split:
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 2114
})
test: Dataset({
features: ['text'],
num_rows: 200882
})
validation: Dataset({
features: ['text'],
num_rows: 152
})
})
```
It seems to me that the train/test/validation splits are assigned incorrectly, since the test split is far larger than the train split.
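A workaround that gives me a single train split (a sketch — it bypasses whatever name-based split inference is happening; the glob is an assumption about my folder layout):
```python
import os
import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"
# Force every file into the train split instead of relying on file-name-based split inference
data = datasets.load_dataset("text", data_files={"train": os.path.join(folder_path, "*.txt")})
print(data)
```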
### Expected behavior
Train split should have the bulk of the training examples.
### Environment info
datasets 2.11.0, python 3.10.6 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5721/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5720/comments | https://api.github.com/repos/huggingface/datasets/issues/5720/events | https://github.com/huggingface/datasets/issues/5720 | 1,659,610,705 | I_kwDODunzps5i66ZR | 5,720 | Streaming IterableDatasets do not work with torch DataLoaders | {
"login": "jlehrer1",
"id": 29244648,
"node_id": "MDQ6VXNlcjI5MjQ0NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlehrer1",
"html_url": "https://github.com/jlehrer1",
"followers_url": "https://api.github.com/users/jlehrer1/followers",
"following_url": "https://api.github.com/users/jlehrer1/following{/other_user}",
"gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions",
"organizations_url": "https://api.github.com/users/jlehrer1/orgs",
"repos_url": "https://api.github.com/users/jlehrer1/repos",
"events_url": "https://api.github.com/users/jlehrer1/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlehrer1/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Edit: This behavior is true even without `.take/.set`",
"I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n# Saving the dataset as a parquet file\r\ndataset = Dataset.from_generator(my_gen)\r\ntrain_path = \"/tmp/test.parquet\"\r\ndataset.to_parquet(train_path)\r\n\r\n# Creating a local dataset from the parquet file\r\ndata_files = {\"train\": [str(train_path)]}\r\nbuilder = load_dataset_builder(\"parquet\", data_files=data_files)\r\nbuilder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n# Loading the dataset from the local directory as streaming\r\ndataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\ndataset.with_format(\"torch\")\r\n\r\ndl = DataLoader(dataset, batch_size=2, num_workers=1)\r\nfor row in dl:\r\n print(row)\r\n```\r\n\r\nMy env info:\r\n```\r\ndatasets 2.11.0\r\ntorch 2.0.0\r\ntorchvision 0.15.1\r\nPython 3.9.16\r\n```\r\n\r\nNote that the example above doesn't fail if the number of workers used is `0`",
"I cannot reproduce this error, not even with your MRE @ivanprado (your env appears to be the same as Colab's, and your code runs there without issues). ",
"@mariosasko you are right, it works on Colab. I digged deeper and found that the problem arises when the multiprocessing method is set to be `spawn`. This code reproduces the problem in Colab:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\nimport multiprocessing as mp\r\n\r\nmp.set_start_method('spawn')\r\n\r\ndef my_gen():\r\n for i in range(1, 4):\r\n yield {\"a\": i}\r\n\r\n\r\ndef main():\r\n # Saving the dataset as a parquet file\r\n dataset = Dataset.from_generator(my_gen)\r\n train_path = \"/tmp/test.parquet\"\r\n dataset.to_parquet(train_path)\r\n\r\n # Creating a local dataset from the parquet file\r\n data_files = {\"train\": [str(train_path)]}\r\n builder = load_dataset_builder(\"parquet\", data_files=data_files)\r\n builder.download_and_prepare(\"/tmp/test_ds\", file_format=\"parquet\")\r\n\r\n # Loading the dataset from the local directory as streaming\r\n dataset = load_dataset(\"parquet\", data_dir=\"/tmp/test_ds\", split=\"train\", streaming=True)\r\n dataset.with_format(\"torch\")\r\n\r\n dl = DataLoader(dataset, batch_size=2, num_workers=1)\r\n for row in dl:\r\n print(row)\r\n\r\nmain()\r\n```",
"So is there a way to fix this by changing the `mp` method? This is blocking any usage of the `datasets` library for me",
"@jlehrer1 can you try adding `mp.set_start_method('fork')` at the beginning of your code? Maybe this helps you. Keep us posted. ",
"I have a similar issue: \r\n> mp.set_start_method('fork')\r\n\r\n\r\nDidnt work"
] | 2023-04-08T18:45:48 | 2023-05-27T12:57:08 | null | NONE | null | ### Describe the bug
When using streaming datasets with a train/val split set up via `.skip()` and `.take()`, the following error occurs when iterating over a torch DataLoader:
```
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__
self._iterator = self._get_iterator()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__
w.start()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper'
```
To reproduce, run the code
```
from datasets import load_dataset
data = load_dataset(args.dataset_name, split="train", streaming=True)
train_len = 5000
val_len = 100
train, val = data.take(train_len), data.skip(train_len).take(val_len)
traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text")
traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True)
```
Where the class `IterableClipDataset` is a simple wrapper that casts the dataset to a torch `IterableDataset`, defined via:
```
import torch
from torch.utils.data import Dataset, IterableDataset
from torchvision.transforms import Compose, Resize, ToTensor
from transformers import AutoTokenizer
import requests
from PIL import Image
class IterableClipDataset(IterableDataset):
def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"):
self.dataset = dataset
self.context_length = context_length
self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform
self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer
self.image_key = image_key
self.text_key = text_key
def read_image(self, url: str):
try: # Try to read the image
image = Image.open(requests.get(url, stream=True).raw)
except:
image = Image.new("RGB", (224, 224), (0, 0, 0))
return image
def process_sample(self, image, text):
if isinstance(image, str):
image = self.read_image(image)
if self.image_transform is not None:
image = self.image_transform(image)
text = self.tokenizer.encode(
text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length"
)
text = torch.tensor(text, dtype=torch.long)
return image, text
def __iter__(self):
for sample in self.dataset:
image, text = sample[self.image_key], sample[self.text_key]
yield self.process_sample(image, text)
```
### Steps to reproduce the bug
Steps to reproduce
1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly)
2. Run the code above
### Expected behavior
Batched data is produced from the dataloader
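In the meantime, the only configuration that actually yields batches for me is single-process loading (a sketch, assuming `traindata` is the `IterableClipDataset` instance before it is wrapped; it sidesteps the pickling error at the cost of not using worker processes):
```python
from torch.utils.data import DataLoader

# Workaround sketch: num_workers=0 avoids pickling the streaming dataset for spawned workers
loader = DataLoader(traindata, batch_size=args.batch_size, num_workers=0)
images, texts = next(iter(loader))
```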
### Environment info
```
datasets == 2.9.0
python == 3.9.12
torch == 1.11.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5720/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5719/comments | https://api.github.com/repos/huggingface/datasets/issues/5719/events | https://github.com/huggingface/datasets/issues/5719 | 1,659,203,222 | I_kwDODunzps5i5W6W | 5,719 | Array2D feature creates a list of list instead of a numpy array | {
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou need to set the format to `np` before indexing the dataset to get NumPy arrays:\r\n```python\r\nfeatures = Features(dict(seq=Array2D((2,2), 'float32'))) \r\nds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)\r\nds.set_format(\"np\")\r\na = ds[0]['seq']\r\n```\r\n\r\n> I think it should not be the expected behavior especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list?\r\n\r\nThe same dataset can have examples in different types (Numpy arrays, Torch tensors, Pandas series, etc.), so recovering them all would be slow and impractical. Instead, the design of our formatting API is similar to Arrow's (the lib we use internally to store data on disk/ in RAM), which allows converting a batch of data to Python/Numpy/Pandas in a single call (and uses C++ to do so to make it faster).\r\n\r\n> Also if I change the first dimension of the Array2D shape to None, it's returning array correctly.\r\n\r\nSetting the first dimension to `None` makes it variable-length (allows passing arrays with the first dimensions of differing lengths).\r\n",
"Current behavior when indexing the dataset:\r\n- Using `Array((2,2))` returns a list of lists.\r\n- Using `Array((None,2))` returns a numpy array.\r\n\r\nDon't you think this is kind of unexpected behavior from end-user perspective? \r\nAs a user, I expect that when I use `Array2D`, the behavior needs to be consistent even if I specify None or not. It should either return a list or an array. It needs to choose one. Let's say if it always return a list, then I will call `ds.set_format('np')` no problem.\r\n\r\nThe consistency can be in any of these aspects:\r\n1. preserves the type of the input data (in this case, a numpy array)\r\n2. ensure the output type is always the same (it can be either list or array, but it needs to be one of them)\r\n\r\nRight now the API doesn't conform to any of these aspects. But I think it needs to conform to one.",
"I thought we made this consistent by returning lists in both scenarios...",
"Fixed in #5751 "
] | 2023-04-07T21:04:08 | 2023-04-20T15:34:41 | 2023-04-20T15:34:41 | NONE | null | ### Describe the bug
I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of a numpy array. I don't think this should be the expected behavior, especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list?
Also, if I change the first dimension of the `Array2D` shape to None, it returns an array correctly.
### Steps to reproduce the bug
Run this code:
```py
from datasets import Dataset, Features, Array2D
import numpy as np
# you have to change the first dimension of the shape to None to make it return an array
features = Features(dict(seq=Array2D((2,2), 'float32')))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)
a = ds[0]['seq']
print(a)
print(type(a))
```
The following will be printed in stdout:
```
[[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]]
<class 'list'>
```
### Expected behavior
Each indexed item should be a list or numpy array. Currently, `Array((2,2))` yields a list but `Array((None,2))` yields an array.
### Environment info
- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5719/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5718/comments | https://api.github.com/repos/huggingface/datasets/issues/5718/events | https://github.com/huggingface/datasets/pull/5718 | 1,658,958,406 | PR_kwDODunzps5N2IZC | 5,718 | Reorder default data splits to have validation before test | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']\r\n At index 0 diff: 'random' != 'train'\r\n Full diff:\r\n - ['train', 'random']\r\n + ['random', 'train']\r\n```\r\nI have checked locally and found out that the data split order is nondeterministic. I am addressing this in a separate issue.\r\n\r\nWe should first address:\r\n- #5728 \r\n- #5729",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007728 / 0.011353 (-0.003624) | 0.005275 / 0.011008 (-0.005734) | 0.097708 / 0.038508 (0.059199) | 0.039851 / 0.023109 (0.016741) | 0.333360 / 0.275898 (0.057462) | 0.376135 / 0.323480 (0.052655) | 0.006355 / 0.007986 (-0.001630) | 0.004193 / 0.004328 (-0.000135) | 0.072882 / 0.004250 (0.068631) | 0.052668 / 0.037052 (0.015615) | 0.347359 / 0.258489 (0.088870) | 0.382280 / 0.293841 (0.088440) | 0.035996 / 0.128546 (-0.092550) | 0.012517 / 0.075646 (-0.063129) | 0.334520 / 0.419271 (-0.084751) | 0.051969 / 0.043533 (0.008436) | 0.335735 / 0.255139 (0.080596) | 0.359921 / 0.283200 (0.076722) | 0.113971 / 0.141683 (-0.027712) | 1.465636 / 1.452155 (0.013481) | 1.559824 / 1.492716 (0.067108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223997 / 0.018006 (0.205991) | 0.499041 / 0.000490 (0.498551) | 0.009697 / 0.000200 (0.009497) | 0.000245 / 0.000054 (0.000190) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027031 / 0.037411 (-0.010381) | 0.110271 / 0.014526 (0.095745) | 0.115848 / 0.176557 (-0.060709) | 0.174253 / 0.737135 (-0.562883) | 0.122616 / 0.296338 (-0.173723) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417275 / 0.215209 (0.202066) | 4.158678 / 2.077655 (2.081023) | 1.917585 / 1.504120 (0.413465) | 1.722219 / 1.541195 (0.181025) | 1.813284 / 1.468490 
(0.344793) | 0.707193 / 4.584777 (-3.877584) | 3.853545 / 3.745712 (0.107833) | 3.369240 / 5.269862 (-1.900621) | 1.820264 / 4.565676 (-2.745412) | 0.087340 / 0.424275 (-0.336936) | 0.012305 / 0.007607 (0.004698) | 0.520326 / 0.226044 (0.294281) | 5.107383 / 2.268929 (2.838455) | 2.413977 / 55.444624 (-53.030647) | 2.074356 / 6.876477 (-4.802121) | 2.255959 / 2.142072 (0.113887) | 0.849850 / 4.805227 (-3.955377) | 0.170116 / 6.500664 (-6.330548) | 0.067203 / 0.075469 (-0.008267) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.168158 / 1.841788 (-0.673629) | 15.046312 / 8.074308 (6.972004) | 15.113924 / 10.191392 (4.922532) | 0.145288 / 0.680424 (-0.535136) | 0.017959 / 0.534201 (-0.516242) | 0.424666 / 0.579283 (-0.154617) | 0.422560 / 0.434364 (-0.011804) | 0.526386 / 0.540337 (-0.013952) | 0.623755 / 1.386936 (-0.763181) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007676 / 0.011353 (-0.003677) | 0.005240 / 0.011008 (-0.005769) | 0.074668 / 0.038508 (0.036160) | 0.035570 / 0.023109 (0.012461) | 0.348524 / 0.275898 (0.072626) | 0.378157 / 0.323480 (0.054677) | 0.006112 / 0.007986 (-0.001873) | 0.005641 / 0.004328 (0.001312) | 0.073536 / 0.004250 (0.069286) | 0.048651 / 0.037052 (0.011599) | 0.359282 / 0.258489 (0.100793) | 0.385961 / 0.293841 (0.092120) | 0.035417 / 0.128546 (-0.093129) | 0.012227 / 0.075646 (-0.063419) | 0.085725 / 0.419271 (-0.333546) | 0.049651 / 0.043533 (0.006118) | 0.344122 / 0.255139 (0.088983) | 0.364795 / 0.283200 (0.081595) | 0.112711 / 0.141683 (-0.028972) | 1.426823 / 1.452155 (-0.025332) | 1.534745 / 1.492716 (0.042029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183721) | 0.448533 / 0.000490 (0.448043) | 0.003554 / 0.000200 (0.003354) | 0.000092 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030917 / 0.037411 (-0.006494) | 0.117966 / 0.014526 (0.103440) | 0.125954 / 0.176557 (-0.050602) | 0.176382 / 0.737135 (-0.560753) | 0.130757 / 0.296338 (-0.165582) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422167 / 0.215209 (0.206958) | 4.213948 / 2.077655 (2.136294) | 2.040049 / 1.504120 (0.535929) | 1.858317 / 1.541195 (0.317122) | 1.937108 / 1.468490 (0.468618) | 0.707797 / 4.584777 (-3.876979) | 3.831061 / 3.745712 (0.085349) | 3.373711 / 5.269862 (-1.896151) | 1.590343 / 4.565676 (-2.975333) | 0.086672 / 0.424275 (-0.337603) | 0.012429 / 0.007607 (0.004821) | 0.520269 / 0.226044 (0.294225) | 5.207285 / 2.268929 (2.938357) | 2.518107 / 55.444624 (-52.926517) | 2.230696 / 6.876477 (-4.645781) | 2.363164 / 2.142072 (0.221091) | 0.836749 / 4.805227 (-3.968479) | 0.169676 / 6.500664 (-6.330988) | 0.065766 / 0.075469 (-0.009703) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251195 / 1.841788 (-0.590592) | 15.196091 / 8.074308 (7.121782) | 14.991600 / 10.191392 (4.800208) | 0.165335 / 0.680424 (-0.515089) | 0.017789 / 0.534201 (-0.516412) | 0.433863 / 0.579283 (-0.145420) | 0.428660 / 0.434364 (-0.005704) | 0.527385 / 0.540337 (-0.012952) | 0.628067 / 1.386936 (-0.758869) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d06b8c21ba98ae85971a2b1d135ac2ef035b59c9 \"CML watermark\")\n"
] | 2023-04-07T16:01:26 | 2023-04-27T14:43:13 | 2023-04-27T14:35:52 | MEMBER | null | This PR reorders data splits, so that by default validation appears before test.
The default order becomes: [train, validation, test] instead of [train, test, validation]. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5718/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5718",
"html_url": "https://github.com/huggingface/datasets/pull/5718",
"diff_url": "https://github.com/huggingface/datasets/pull/5718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5718.patch",
"merged_at": "2023-04-27T14:35:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5717/comments | https://api.github.com/repos/huggingface/datasets/issues/5717/events | https://github.com/huggingface/datasets/issues/5717 | 1,658,729,866 | I_kwDODunzps5i3jWK | 5,717 | Error when saving to disk a dataset of images | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately if it can help to better debug.",
"Hi! I didn't manage to reproduce this behavior, so sharing the dataset with us would help a lot. \r\n\r\n> My dataset is around 50K images, is this error might be due to a bad image?\r\n\r\nThis shouldn't be the case as we save raw data to disk without decoding it.",
"OK, thanks! The dataset is currently hosted on a gcs bucket. How would you like to proceed for sharing the link? ",
"You could follow [this](https://cloud.google.com/storage/docs/collaboration#browser) procedure or upload the dataset to Google Drive (50K images is not that much unless high-res) and send me an email with the link.",
"Thanks @mariosasko. I just sent you the GDrive link.",
"Thanks @jplu! I managed to reproduce the `TypeError` - it stems from [this](https://github.com/huggingface/datasets/blob/e3f4f124a1b118a5bfff5bae76b25a68aedbebbc/src/datasets/features/image.py#L258-L264) line, which can return a `ChunkedArray` (its mask is also chunked then, which Arrow does not allow) when the embedded data is too big to fit in a standard `Array`.\r\n\r\nI'm working on a fix."
] | 2023-04-07T11:59:17 | 2023-05-09T17:14:50 | null | CONTRIBUTOR | null | ### Describe the bug
Hello!
I have an issue when I try to save my dataset of images to disk. The error I get is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk
for job_id, done, content in Dataset._save_to_disk_single(**kwargs):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single
writer.write_table(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table
pa_table = embed_table_storage(pa_table)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage
arrays = [
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp>
embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name]
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage
return feature.embed_storage(array)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage
storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null())
File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays
File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj
TypeError: Mask must be a pyarrow.Array of type boolean
```
My dataset is around 50K images; might this error be due to a bad image?
Thanks for the help.
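As a stopgap, writing more and smaller shards avoids the error for me (a sketch; `max_shard_size` is an alternative to `num_shards`, and the exact values are specific to my data):
```python
# Smaller shards keep each written batch of embedded images below the failing size
dataset["train"].save_to_disk("./myds", num_shards=50)
# or cap the shard size directly
dataset["train"].save_to_disk("./myds", max_shard_size="500MB")
```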
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset["train"].save_to_disk("./myds", num_shards=40)
```
### Expected behavior
Having my dataset properly saved to disk.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5717/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5716/comments | https://api.github.com/repos/huggingface/datasets/issues/5716/events | https://github.com/huggingface/datasets/issues/5716 | 1,658,613,092 | I_kwDODunzps5i3G1k | 5,716 | Handle empty audio | {
"login": "v-yunbin",
"id": 38179632,
"node_id": "MDQ6VXNlcjM4MTc5NjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/v-yunbin",
"html_url": "https://github.com/v-yunbin",
"followers_url": "https://api.github.com/users/v-yunbin/followers",
"following_url": "https://api.github.com/users/v-yunbin/following{/other_user}",
"gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions",
"organizations_url": "https://api.github.com/users/v-yunbin/orgs",
"repos_url": "https://api.github.com/users/v-yunbin/repos",
"events_url": "https://api.github.com/users/v-yunbin/events{/privacy}",
"received_events_url": "https://api.github.com/users/v-yunbin/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example({\"path\": \"empty.wav\", \"bytes\": None})\r\n```\r\nBut without success.\r\n\r\nAlso, what version of `librosa` is installed in your env? (You can get this info with `python -c \"import librosa; print(librosa.__version__)`)\r\n\r\n"
] | 2023-04-07T09:51:40 | 2023-04-13T17:33:36 | null | NONE | null | Some audio paths exist, but the files are empty, and an error is reported when they are read. How can I use the filter function to skip the empty audio files?
When an audio file is empty, the resampling step breaks:
`array, sampling_rate = sf.read(f); array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
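A sketch of what I'm currently trying (assuming the column is named `audio` and stores file paths; `decode=False` keeps `filter` from triggering the decode/resample step that crashes):
```python
import soundfile as sf
from datasets import Audio

# Turn off decoding so filtering only sees {"path": ..., "bytes": ...}
ds = ds.cast_column("audio", Audio(decode=False))

def has_audio(example):
    try:
        return sf.info(example["audio"]["path"]).frames > 0
    except RuntimeError:
        return False

ds = ds.filter(has_audio)
# Re-enable decoding (and resampling) for training
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```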
"url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5716/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5715/comments | https://api.github.com/repos/huggingface/datasets/issues/5715/events | https://github.com/huggingface/datasets/issues/5715 | 1,657,479,788 | I_kwDODunzps5iyyJs | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | {
"login": "jungbaepark",
"id": 34066771,
"node_id": "MDQ6VXNlcjM0MDY2Nzcx",
"avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jungbaepark",
"html_url": "https://github.com/jungbaepark",
"followers_url": "https://api.github.com/users/jungbaepark/followers",
"following_url": "https://api.github.com/users/jungbaepark/following{/other_user}",
"gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions",
"organizations_url": "https://api.github.com/users/jungbaepark/orgs",
"repos_url": "https://api.github.com/users/jungbaepark/repos",
"events_url": "https://api.github.com/users/jungbaepark/events{/privacy}",
"received_events_url": "https://api.github.com/users/jungbaepark/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] | 2023-04-06T13:57:48 | 2023-04-20T17:16:26 | 2023-04-20T17:16:26 | NONE | null | ### Feature request
There is an old, well-known but easily forgotten problem in multiprocessing with the pytorch DataLoader:
excessive RAM or shared-memory usage in pytorch when num_workers > 1 and the return type of the dataset or dataloader is a "List" or "Dict".
https://github.com/pytorch/pytorch/issues/13246
With huggingface datasets, unfortunately, the default return type is a list, so this problem is triggered very often if we do not configure anything to avoid it.
However, the issue can be relieved when the returned output has a fixed length.
Therefore, I request a mode that returns fixed-length outputs (e.g. numpy arrays) rather than lists.
The design could look like this when we load datasets:
```python
load_dataset(..., with_return_as_fixed_tensor=True)
```
### Motivation
The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662
NumPy and Pandas do not seem to have this problem, even though both support string types.
(I'm not sure whether the Sequence feature of huggingface datasets also avoids this problem.)
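For reference, here is a small sketch of the existing formatting API mentioned in the maintainer reply, which already makes `__getitem__` return NumPy arrays (the toy column name is an assumption):
```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})
ds.set_format("numpy")            # __getitem__ now yields np.ndarray instead of Python lists
print(type(ds[0]["input_ids"]))   # <class 'numpy.ndarray'>
```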
### Your contribution
I'll read it! Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5715/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5714/comments | https://api.github.com/repos/huggingface/datasets/issues/5714/events | https://github.com/huggingface/datasets/pull/5714 | 1,657,388,033 | PR_kwDODunzps5NxIOc | 5,714 | Fix xnumpy_load for .npz files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004406 / 0.011008 (-0.006602) | 0.097136 / 0.038508 (0.058628) | 0.027711 / 0.023109 (0.004601) | 0.303092 / 0.275898 (0.027194) | 0.336804 / 0.323480 (0.013324) | 0.004838 / 0.007986 (-0.003148) | 0.004533 / 0.004328 (0.000204) | 0.075062 / 0.004250 (0.070812) | 0.035105 / 0.037052 (-0.001947) | 0.310245 / 0.258489 (0.051756) | 0.347086 / 0.293841 (0.053245) | 0.030867 / 0.128546 (-0.097679) | 0.011436 / 0.075646 (-0.064211) | 0.320728 / 0.419271 (-0.098544) | 0.042303 / 0.043533 (-0.001230) | 0.308177 / 0.255139 (0.053038) | 0.333673 / 0.283200 (0.050473) | 0.084736 / 0.141683 (-0.056947) | 1.477391 / 1.452155 (0.025237) | 1.530399 / 1.492716 (0.037682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212698 / 0.018006 (0.194692) | 0.409098 / 0.000490 (0.408608) | 0.004202 / 0.000200 (0.004002) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022725 / 0.037411 (-0.014686) | 0.095866 / 0.014526 (0.081340) | 0.104153 / 0.176557 (-0.072404) | 0.162964 / 0.737135 (-0.574171) | 0.106505 / 0.296338 (-0.189834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431336 / 0.215209 (0.216127) | 4.283290 / 2.077655 (2.205635) | 1.982418 / 1.504120 (0.478298) | 1.762104 / 1.541195 (0.220909) | 1.807528 / 1.468490 
(0.339038) | 0.695507 / 4.584777 (-3.889270) | 3.376299 / 3.745712 (-0.369413) | 1.856642 / 5.269862 (-3.413219) | 1.154258 / 4.565676 (-3.411419) | 0.082749 / 0.424275 (-0.341526) | 0.012289 / 0.007607 (0.004682) | 0.525842 / 0.226044 (0.299798) | 5.285764 / 2.268929 (3.016835) | 2.389926 / 55.444624 (-53.054698) | 2.021830 / 6.876477 (-4.854646) | 2.107460 / 2.142072 (-0.034612) | 0.808118 / 4.805227 (-3.997109) | 0.150791 / 6.500664 (-6.349873) | 0.065825 / 0.075469 (-0.009644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206939 / 1.841788 (-0.634849) | 13.795902 / 8.074308 (5.721594) | 14.107950 / 10.191392 (3.916558) | 0.144300 / 0.680424 (-0.536124) | 0.016478 / 0.534201 (-0.517723) | 0.379395 / 0.579283 (-0.199888) | 0.388437 / 0.434364 (-0.045927) | 0.451443 / 0.540337 (-0.088894) | 0.523142 / 1.386936 (-0.863794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006503 / 0.011353 (-0.004850) | 0.004578 / 0.011008 (-0.006430) | 0.076278 / 0.038508 (0.037770) | 0.028052 / 0.023109 (0.004943) | 0.337873 / 0.275898 (0.061975) | 0.371368 / 0.323480 (0.047888) | 0.005086 / 0.007986 (-0.002899) | 0.003354 / 0.004328 (-0.000975) | 0.076876 / 0.004250 (0.072625) | 0.039146 / 0.037052 (0.002093) | 0.340299 / 0.258489 (0.081810) | 0.381209 / 0.293841 (0.087368) | 0.031771 / 0.128546 (-0.096775) | 0.011670 / 0.075646 (-0.063976) | 0.085156 / 0.419271 (-0.334116) | 0.041990 / 0.043533 (-0.001543) | 0.338644 / 0.255139 (0.083505) | 0.362461 / 0.283200 (0.079262) | 0.089772 / 0.141683 (-0.051911) | 1.480341 / 1.452155 (0.028187) | 1.562815 / 1.492716 (0.070099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205700 / 0.018006 (0.187694) | 0.402206 / 0.000490 (0.401716) | 0.001212 / 0.000200 (0.001012) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025172 / 0.037411 (-0.012240) | 0.100959 / 0.014526 (0.086433) | 0.108464 / 0.176557 (-0.068093) | 0.161321 / 0.737135 (-0.575814) | 0.114245 / 0.296338 (-0.182093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437425 / 0.215209 (0.222216) | 4.362212 / 2.077655 (2.284557) | 2.068815 / 1.504120 (0.564695) | 1.864089 / 1.541195 (0.322894) | 1.909038 / 1.468490 (0.440548) | 0.696097 / 4.584777 (-3.888680) | 3.358628 / 3.745712 (-0.387084) | 2.999085 / 5.269862 (-2.270777) | 1.533917 / 4.565676 (-3.031760) | 0.083010 / 0.424275 (-0.341266) | 0.012372 / 0.007607 (0.004765) | 0.539926 / 0.226044 (0.313882) | 5.438326 / 2.268929 (3.169397) | 2.498581 / 55.444624 (-52.946043) | 2.153359 / 6.876477 (-4.723117) | 2.177891 / 2.142072 (0.035819) | 0.803169 / 4.805227 (-4.002059) | 0.151079 / 6.500664 (-6.349585) | 0.065981 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336682 / 1.841788 (-0.505106) | 14.133055 / 8.074308 (6.058747) | 14.033972 / 10.191392 (3.842580) | 0.152109 / 0.680424 (-0.528315) | 0.016475 / 0.534201 (-0.517726) | 0.387808 / 0.579283 (-0.191475) | 0.378347 / 0.434364 (-0.056017) | 0.484732 / 0.540337 (-0.055606) | 0.569907 / 1.386936 (-0.817029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1c4ec00511868bd881e84a6f7e0333648d833b8e \"CML watermark\")\n"
] | 2023-04-06T13:01:45 | 2023-04-07T09:23:54 | 2023-04-07T09:16:57 | MEMBER | null | PR:
- #5626
implemented support for streaming `.npy` files by using `numpy.load`.
However, it introduced a bug when used with `.npz` files, within a context manager:
```
ValueError: seek of closed file
```
or in streaming mode:
```
ValueError: I/O operation on closed file.
```
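A minimal illustration (not taken from the PR) of why `.npz` files are sensitive to the underlying file being closed: `np.load` returns a lazy `NpzFile`, so its members must be read while the file handle is still open.
```python
import numpy as np

np.savez("data.npz", x=np.arange(3))
with np.load("data.npz") as data:
    x = data["x"]  # fine: the underlying zip file is still open
# reading data["x"] after the file has been closed raises "seek of closed file"
```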
This PR fixes the bug and tests for both `.npy` and `.npz` files.
Fix #5711. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5714/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5714",
"html_url": "https://github.com/huggingface/datasets/pull/5714",
"diff_url": "https://github.com/huggingface/datasets/pull/5714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5714.patch",
"merged_at": "2023-04-07T09:16:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5713/comments | https://api.github.com/repos/huggingface/datasets/issues/5713/events | https://github.com/huggingface/datasets/issues/5713 | 1,657,141,251 | I_kwDODunzps5ixfgD | 5,713 | ArrowNotImplementedError when loading dataset from the hub | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. The estimation is currently done using the first samples of the dataset (which can surely be improved). We should probably open an issue to fix this once and for all.\r\n\r\nAnyway for your specific dataset I'd suggest you to pass `num_shards` instead of `max_shard_size` for now, and make sure to have enough shards to end up with shards smaller than 2GB",
"Hi Quentin! Thanks a lot! Using `num_shards` instead of `max_shard_size` works as expected.\r\n\r\nIndeed the way you describe how the size is computed cannot really work with the dataset I'm building as all the image doesn't have the same resolution and then size. Opening an issue on this might be a good idea."
] | 2023-04-06T10:27:22 | 2023-04-06T13:06:22 | 2023-04-06T13:06:21 | CONTRIBUTOR | null | ### Describe the bug
Hello,
I have created a dataset using the image loader. Once the dataset is created, I try to download it and I get this error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
Create the dataset and push it to the hub:
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB")
```
Then use it:
```python
from datasets import load_dataset
dataset = load_dataset("org/dataset-name")
```
### Expected behavior
To properly download and use the pushed dataset.
Something else to note: I specified a maximum shard size of 1GB, but in the end the train split was pushed as a single file of almost 7GB.
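For reference, the workaround suggested in the reply above can be sketched as follows (the shard counts are assumptions; they only need to keep each shard well under 2GB):
```python
dataset.push_to_hub("org/dataset-name", private=True, num_shards={"train": 8, "validation": 1})
```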
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5713/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5712/comments | https://api.github.com/repos/huggingface/datasets/issues/5712/events | https://github.com/huggingface/datasets/issues/5712 | 1,655,972,106 | I_kwDODunzps5itCEK | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | {
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/followers",
"following_url": "https://api.github.com/users/rcasero/following{/other_user}",
"gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcasero/subscriptions",
"organizations_url": "https://api.github.com/users/rcasero/orgs",
"repos_url": "https://api.github.com/users/rcasero/repos",
"events_url": "https://api.github.com/users/rcasero/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcasero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] | 2023-04-05T16:47:10 | 2023-04-06T08:32:37 | 2023-04-05T17:17:44 | NONE | null | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
                           name=configuration,
                           data_dir=dataset_dir,
                           cache_dir=cache_dir,
                           aux_dir=aux_dir,
                           # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
                           num_proc=18)
```
When upgrading datasets to 2.11.0, it fails with error
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(filename) as fp:
    x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5712/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5711/comments | https://api.github.com/repos/huggingface/datasets/issues/5711/events | https://github.com/huggingface/datasets/issues/5711 | 1,655,971,647 | I_kwDODunzps5itB8_ | 5,711 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | {
"login": "rcasero",
"id": 1219084,
"node_id": "MDQ6VXNlcjEyMTkwODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcasero",
"html_url": "https://github.com/rcasero",
"followers_url": "https://api.github.com/users/rcasero/followers",
"following_url": "https://api.github.com/users/rcasero/following{/other_user}",
"gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcasero/subscriptions",
"organizations_url": "https://api.github.com/users/rcasero/orgs",
"repos_url": "https://api.github.com/users/rcasero/repos",
"events_url": "https://api.github.com/users/rcasero/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcasero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```python\r\nreturn np.load(xopen(filepath_or_buffer, \"rb\", use_auth_token=use_auth_token), *args, **kwargs)\r\n```\r\nshould fix the issue.\r\n\r\n(Maybe this is also worth doing a patch release afterward)",
"Thanks for reporting, @rcasero.\r\n\r\nI can have a look..."
] | 2023-04-05T16:46:49 | 2023-04-07T09:16:59 | 2023-04-07T09:16:59 | NONE | null | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
                           name=configuration,
                           data_dir=dataset_dir,
                           cache_dir=cache_dir,
                           aux_dir=aux_dir,
                           # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
                           num_proc=18)
```
When upgrading datasets to 2.11.0, it fails with error
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(embedding_filename) as fp:
    x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
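A hypothetical minimal form of the failing pattern (reconstructed from the traceback above, not the actual code) would be:
```python
import numpy as np

np.savez("embeddings.npz", x=np.random.rand(4, 8))  # stand-in for the real file
with np.load("embeddings.npz") as fp:  # inside a dataset script, np.load is patched by `datasets` 2.11.0
    x = fp["x"]  # -> ValueError: seek of closed file
```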
### Expected behavior
I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5711/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5710/comments | https://api.github.com/repos/huggingface/datasets/issues/5710/events | https://github.com/huggingface/datasets/issues/5710 | 1,655,703,534 | I_kwDODunzps5isAfu | 5,710 | OSError: Memory mapping file failed: Cannot allocate memory | {
"login": "Saibo-creator",
"id": 53392976,
"node_id": "MDQ6VXNlcjUzMzkyOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Saibo-creator",
"html_url": "https://github.com/Saibo-creator",
"followers_url": "https://api.github.com/users/Saibo-creator/followers",
"following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}",
"gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions",
"organizations_url": "https://api.github.com/users/Saibo-creator/orgs",
"repos_url": "https://api.github.com/users/Saibo-creator/repos",
"events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}",
"received_events_url": "https://api.github.com/users/Saibo-creator/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem."
] | 2023-04-05T14:11:26 | 2023-04-20T17:16:40 | 2023-04-20T17:16:40 | NONE | null | ### Describe the bug
Hello, I have a series of 600 datasets, each about 5 GB, so roughly 3 TB in total.
When I try to load all 600 datasets into memory, I get the error message shown below.
Is this normal because I'm hitting the max size of memory mapping of the OS?
Thank you
```terminal
0_21/cache-e9c42499f65b1881.arrow
load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s]
Traceback (most recent call last):
File "example_load_genkalm_dataset.py", line 35, in <module>
multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay)
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process
genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length,
File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset
hf_ds = load_from_disk(path_or_name)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk
return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk
arrow_table = concat_tables(
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables
tables = list(tables)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr>
table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix())
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file
memory_mapped_stream = pa.memory_map(filename)
File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
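One hypothesis worth checking (an assumption, not a confirmed diagnosis) is the per-process limit on the number of memory mappings, which `pyarrow.memory_map` can exhaust when hundreds of large Arrow files are mapped:
```python
# Linux-only sketch: inspect the kernel limit on memory mappings per process
with open("/proc/sys/vm/max_map_count") as f:
    print("vm.max_map_count =", f.read().strip())  # often 65530 by default
```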
### Steps to reproduce the bug
Sorry, I cannot provide reproducible code, as the data is stored on my server and is too large to share.
### Expected behavior
I expect the 3 TB of data to be fully memory-mapped.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyArrow version: 11.0.0
- Pandas version: 1.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5710/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5709/comments | https://api.github.com/repos/huggingface/datasets/issues/5709/events | https://github.com/huggingface/datasets/issues/5709 | 1,655,423,503 | I_kwDODunzps5iq8IP | 5,709 | Manually dataset info made not taken into account | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually weird that when you push your dataset to the Hub, a `dataset_infos.json` file is created, because this file is deprecated and it should create `README.md` with the `dataset_info` field instead. Some keys are also deprecated, like \"supervised_keys\" and \"task_templates\".\r\n\r\nCan you please provide a toy reproducible example of how you create and push the dataset? And also why do you want to change this file, especially the number of bytes and examples?",
"Hi @polinaeterna Yes I have created the dataset with `Dataset.from_dict` applied some updates afterward and when I pushed to the hub I had a `dataset_infos.json` file and there was a `README.md` file as well.\r\n\r\nI didn't know that the JSON file was deprecated. So I have built my dataset with `ImageBuilder` instead and now it works like a charm without having to touch anything.\r\n\r\nI haven't succeed to reproduce the creation of the JSON file with a toy example, hence, I certainly did some mistakes when I have manipulated my dataset manually at first. My bad."
] | 2023-04-05T11:15:17 | 2023-04-06T08:52:20 | 2023-04-06T08:52:19 | CONTRIBUTOR | null | ### Describe the bug
Hello,
I'm manually building an image dataset with the `from_dict` approach, and I also build the features with the `cast_features` method. Once the dataset is created I push it to the Hub, and a default `dataset_infos.json` file seems to be added to the repo automatically at the same time. I then update it manually with all the missing info, but when I download the dataset, the info is never updated.
Former `dataset_infos.json` file:
```
{"default": {
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"_type": "Image"
},
"labels": {
"names": [
"Fake",
"Real"
],
"_type": "ClassLabel"
}
},
"splits": {
"validation": {
"name": "validation",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
},
"train": {
"name": "train",
"num_bytes": 901010094.0,
"num_examples": 3200,
"dataset_name": null
}
},
"download_size": 1802008414,
"dataset_size": 1802020188.0,
"size_in_bytes": 3604028602.0
}}
```
After I update it manually it looks like:
```
{
"bstrai--deepfake-detection":{
"description":"",
"citation":"",
"homepage":"",
"license":"",
"features":{
"image":{
"decode":true,
"id":null,
"_type":"Image"
},
"labels":{
"num_classes":2,
"names":[
"Fake",
"Real"
],
"id":null,
"_type":"ClassLabel"
}
},
"supervised_keys":{
"input":"image",
"output":"labels"
},
"task_templates":[
{
"task":"image-classification",
"image_column":"image",
"label_column":"labels"
}
],
"config_name":null,
"splits":{
"validation":{
"name":"validation",
"num_bytes":36627822,
"num_examples":123,
"dataset_name":"deepfake-detection"
},
"train":{
"name":"train",
"num_bytes":901023694,
"num_examples":3200,
"dataset_name":"deepfake-detection"
}
},
"download_checksums":null,
"download_size":937562209,
"dataset_size":937651516,
"size_in_bytes":1875213725
}
}
```
Is there anything I should do to have the new info in `dataset_infos.json` taken into account? Or is that not possible yet?
Thanks!
### Steps to reproduce the bug
-
### Expected behavior
-
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5709/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5708/comments | https://api.github.com/repos/huggingface/datasets/issues/5708/events | https://github.com/huggingface/datasets/issues/5708 | 1,655,023,642 | I_kwDODunzps5ipaga | 5,708 | Dataset sizes are in MiB instead of MB in dataset cards | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5",
"looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`",
"I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.",
"yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n",
"I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files",
"Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example",
"First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.",
"The bulk edit parsed 751 canonical datasets and updated 166.",
"Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n",
"I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [ ] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [ ] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [x] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6"
] | 2023-04-05T06:36:03 | 2023-04-24T19:23:40 | null | MEMBER | null | As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929):
Now we show the dataset size:
- from the dataset card (in the side column)
- from the datasets-server (in the viewer)
But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932)
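For concreteness, the mismatch is just the MiB/MB conversion factor (illustrative numbers, not taken from an actual card):
```python
# 1 MiB = 2**20 bytes, while 1 MB = 10**6 bytes
print(28.14 * 2**20 / 1e6)  # a value of "28.14 MB" that was actually computed in MiB is ~29.51 MB
```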
<img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png">
TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:`
- [x] Bulk edit on the Hub to fix this in all canonical datasets
- [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5708/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5706/comments | https://api.github.com/repos/huggingface/datasets/issues/5706/events | https://github.com/huggingface/datasets/issues/5706 | 1,653,545,835 | I_kwDODunzps5ijxtr | 5,706 | Support categorical data types for Parquet | {
"login": "kklemon",
"id": 1430243,
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kklemon",
"html_url": "https://github.com/kklemon",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"repos_url": "https://api.github.com/users/kklemon/repos",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mhattingpete",
"id": 22622299,
"node_id": "MDQ6VXNlcjIyNjIyMjk5",
"avatar_url": "https://avatars.githubusercontent.com/u/22622299?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mhattingpete",
"html_url": "https://github.com/mhattingpete",
"followers_url": "https://api.github.com/users/mhattingpete/followers",
"following_url": "https://api.github.com/users/mhattingpete/following{/other_user}",
"gists_url": "https://api.github.com/users/mhattingpete/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mhattingpete/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mhattingpete/subscriptions",
"organizations_url": "https://api.github.com/users/mhattingpete/orgs",
"repos_url": "https://api.github.com/users/mhattingpete/repos",
"events_url": "https://api.github.com/users/mhattingpete/events{/privacy}",
"received_events_url": "https://api.github.com/users/mhattingpete/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do",
"@kklemon did you implement this? Otherwise I would like to give it a try",
"@mhattingpete no, I hadn't time for this so far. Feel free to work on this.",
"#self-assign",
"This would be super useful, so +1. \r\n\r\nAlso, these prior issues/PRs seem relevant: \r\nhttps://github.com/huggingface/datasets/issues/1906\r\nhttps://github.com/huggingface/datasets/pull/1936",
"Hi, this is a really useful feature, has this been implemented yet? "
] | 2023-04-04T09:45:35 | 2023-08-11T13:57:39 | null | NONE | null | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow`, can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature.
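Until native support lands, one possible workaround (a sketch, under the assumption that integer codes plus `ClassLabel` are an acceptable substitute for a true dictionary type) is:
```python
import pandas as pd
from datasets import ClassLabel, Dataset

df = pd.DataFrame({"type": ["foo", "bar"]}).astype("category")
ds = Dataset.from_pandas(df.assign(type=df["type"].cat.codes))
ds = ds.cast_column("type", ClassLabel(names=list(df["type"].cat.categories)))
print(ds.features)  # the categories survive as ClassLabel names
```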
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5706/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5705/comments | https://api.github.com/repos/huggingface/datasets/issues/5705/events | https://github.com/huggingface/datasets/issues/5705 | 1,653,500,383 | I_kwDODunzps5ijmnf | 5,705 | Getting next item from IterableDataset took forever. | {
"login": "HongtaoYang",
"id": 16588434,
"node_id": "MDQ6VXNlcjE2NTg4NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HongtaoYang",
"html_url": "https://github.com/HongtaoYang",
"followers_url": "https://api.github.com/users/HongtaoYang/followers",
"following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}",
"gists_url": "https://api.github.com/users/HongtaoYang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HongtaoYang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HongtaoYang/subscriptions",
"organizations_url": "https://api.github.com/users/HongtaoYang/orgs",
"repos_url": "https://api.github.com/users/HongtaoYang/repos",
"events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}",
"received_events_url": "https://api.github.com/users/HongtaoYang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...",
"Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beheaviour."
] | 2023-04-04T09:16:17 | 2023-04-05T23:35:41 | 2023-04-05T23:35:41 | NONE | null | ### Describe the bug
I have a large dataset, about 500GB. The format of the dataset is parquet.
I then load the dataset and try to get the first item
```python
def get_one_item():
dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)
dataset = dataset.filter(lambda example: example['text'].startswith('Ar'))
print(next(iter(dataset)))
```
However, this function never finishes. I waited ~10 mins; the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it would take to return one item. I'll be patient and wait for as long as it needs.
I suspect the filter operation is the reason why it took so long. Can I get some possible reasons behind this?
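To confirm that, a quick check (an illustrative sketch, reusing the same placeholder path as above) is to time the first item with and without the filter:
```python
import time
from datasets import load_dataset

dataset = load_dataset("path/to/datafiles", split="train", streaming=True)

start = time.time()
next(iter(dataset))  # no filter: returns as soon as the first rows are read
print("without filter:", time.time() - start)

start = time.time()
filtered = dataset.filter(lambda example: example["text"].startswith("Ar"))
next(iter(filtered))  # scans and decodes rows until the predicate first matches
print("with filter:", time.time() - start)
```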
### Steps to reproduce the bug
Unfortunately without my data files, there is no way to reproduce this bug.
### Expected behavior
With `IterableDataset`, I expect the first item to be returned instantly.
### Environment info
- datasets version: 2.11.0
- python: 3.7.12 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5705/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5704/comments | https://api.github.com/repos/huggingface/datasets/issues/5704/events | https://github.com/huggingface/datasets/pull/5704 | 1,653,471,356 | PR_kwDODunzps5NkEvJ | 5,704 | 5537 speedup load | {
"login": "semajyllek",
"id": 35013374,
"node_id": "MDQ6VXNlcjM1MDEzMzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/semajyllek",
"html_url": "https://github.com/semajyllek",
"followers_url": "https://api.github.com/users/semajyllek/followers",
"following_url": "https://api.github.com/users/semajyllek/following{/other_user}",
"gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions",
"organizations_url": "https://api.github.com/users/semajyllek/orgs",
"repos_url": "https://api.github.com/users/semajyllek/repos",
"events_url": "https://api.github.com/users/semajyllek/events{/privacy}",
"received_events_url": "https://api.github.com/users/semajyllek/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Awesome ! cc @mariosasko :)",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5704). All of your documentation changes will be reflected on that endpoint.",
"Hi, thanks for working on this!\r\n\r\nYour solution only works if the `root` is `\"\"`, e.g., this would yield an incorrect result:\r\n```python\r\ndset = load_dataset(\"user/hf-dataset-repo\", data_dir=\"path/to/data_dir\")\r\n```\r\n\r\nAlso, the `HfFileSystem` implementation in `datasets` will be replaced with the more powerful [one](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py) from `huggingface_hub` soon (I plan to open a PR that makes `find` much faster in the coming days). \r\n\r\nSo I don't think we want to merge this PR in the current state, but thanks again for the effort.\r\n\r\n (I'll comment on the original issue to propose a cleaner solution)",
"Ooof. Sorry, I should have checked that more thoroughly then! I would say we could just add that check and only use my approach if the root is \"\", which would still be faster in many cases, but it sounds like you have a better solution on the way. Thanks for the feedback Mario."
] | 2023-04-04T08:58:14 | 2023-04-07T16:10:55 | null | NONE | null | I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average.
That's not much when this step usually takes only 2-3 seconds for most datasets, but in this particular case, `bigcode/the-stack-dedup`, the loading time to get the config (not to download the entire 6 TB dataset, of course) went from ~170 secs to ~20 secs.
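For reference, a rough way to reproduce this kind of measurement (a sketch that only times the config resolution, without downloading any data):
```python
import time
from datasets import load_dataset_builder

start = time.time()
builder = load_dataset_builder("bigcode/the-stack-dedup")
print(f"resolving the config took {time.time() - start:.1f}s")
```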
What makes this work is this code in `_glob`:
```
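# Reuse the directory listing already cached on the filesystem object instead of
# issuing a fresh `find` call against the Hub for every glob pattern.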
if self.dir_cache is not None:
allpaths = self.dir_cache
else:
allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
```
I also had to import `glob.has_magic()` for `_glob()` (confusing, I know).
I hope there is no issue with copying most of the code from `fsspec.spec.glob`, as it is under a BSD 3-Clause License,
and I left a comment about this in the docstring of `_glob()`, which we may want to delete.
As mentioned, I evaluated the speedup across a random selection of about 1000 datasets (not all 27k+), and verified that `old_config.eq(new_method_config)` holds against the built-in method, but deleted this test and related code changes in the subsequent commit. It's in the commit history if anyone wants to see it. (Note this does not include the outlier of `bigcode/the-stack-dedup`.)
| | old_time | new_time | diff | pct_diff |
| -- | -- | -- | -- | -- |
| mean | 3.340 | 2.642 | 0.698 | 18.404 |
| min | 2.024 | 1.976 | -0.840 | -37.634 |
| max | 66.582 | 41.517 | 30.927 | 85.538 | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5704/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5704",
"html_url": "https://github.com/huggingface/datasets/pull/5704",
"diff_url": "https://github.com/huggingface/datasets/pull/5704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5704.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5703/comments | https://api.github.com/repos/huggingface/datasets/issues/5703/events | https://github.com/huggingface/datasets/pull/5703 | 1,653,158,955 | PR_kwDODunzps5NjCCV | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | {
"login": "hvaara",
"id": 1535968,
"node_id": "MDQ6VXNlcjE1MzU5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hvaara",
"html_url": "https://github.com/hvaara",
"followers_url": "https://api.github.com/users/hvaara/followers",
"following_url": "https://api.github.com/users/hvaara/following{/other_user}",
"gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hvaara/subscriptions",
"organizations_url": "https://api.github.com/users/hvaara/orgs",
"repos_url": "https://api.github.com/users/hvaara/repos",
"events_url": "https://api.github.com/users/hvaara/events{/privacy}",
"received_events_url": "https://api.github.com/users/hvaara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).",
"That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impact. I saw the benchmarks that was produced in other PRs, and wanted to get a better understanding of it. I created this PR to see if it got automatically added here.\r\n\r\nIs there a way I can generate those benchmarks myself?",
"You can find some speed comparisons between dill and pickle on SO if you google \"dill vs pickle speed\".\r\n\r\nAnd for the benchmarks, you can generate them locally with DVC running this code from the repo root: https://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/.github/workflows/benchmarks.yaml#L23-L47.",
"Thanks for the help @mariosasko!"
] | 2023-04-04T04:37:49 | 2023-04-20T03:17:37 | 2023-04-20T03:17:32 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5703/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5703",
"html_url": "https://github.com/huggingface/datasets/pull/5703",
"diff_url": "https://github.com/huggingface/datasets/pull/5703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5703.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5702/comments | https://api.github.com/repos/huggingface/datasets/issues/5702/events | https://github.com/huggingface/datasets/issues/5702 | 1,653,104,720 | I_kwDODunzps5iiGBQ | 5,702 | Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None? | {
"login": "gitforziio",
"id": 10508116,
"node_id": "MDQ6VXNlcjEwNTA4MTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gitforziio",
"html_url": "https://github.com/gitforziio",
"followers_url": "https://api.github.com/users/gitforziio/followers",
"following_url": "https://api.github.com/users/gitforziio/following{/other_user}",
"gists_url": "https://api.github.com/users/gitforziio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gitforziio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gitforziio/subscriptions",
"organizations_url": "https://api.github.com/users/gitforziio/orgs",
"repos_url": "https://api.github.com/users/gitforziio/repos",
"events_url": "https://api.github.com/users/gitforziio/events{/privacy}",
"received_events_url": "https://api.github.com/users/gitforziio/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"string\"),\r\n \"text_with_idxes\": {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": Value(\"int64\")\r\n }\r\n})\r\n```\r\n\r\nbut you'd have to reformat your data fiels or define a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) to apply the appropriate parsing.\r\n\r\nAlternatively we could explore supporting the Arrow [Union](https://arrow.apache.org/docs/python/generated/pyarrow.UnionType.html) type which could solve this issue, but I don't know if it's well supported in python and with the rest of the ecosystem like Parquet",
"@lhoestq Thank you! I further wonder if it's possible to use list subscripts as keys of a feature? Like\r\n```python\r\nfeatures = Features({\r\n 0: Value(\"string\"),\r\n 1: {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": [Value(\"int64\")]\r\n },\r\n 2: Value(\"string\"),\r\n # ...\r\n})\r\n```",
"Column names need to be strings, so you could use \"1\", \"2\", etc. or give appropriate column names",
"@lhoestq Got it. Thank you!"
] | 2023-04-04T03:20:43 | 2023-04-05T14:15:18 | 2023-04-05T14:15:17 | NONE | null | ### Feature request
Hello! Apologies if my question sounds naive:
I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None?
Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below:
```json
[
  [{"text":"老妇人","idxes":[0,1,2]}, null, {"text":"跪","idxes":[3]}, null, null, null, null, {"text":"在那坑里","idxes":[4,5,6,7]}, null, null, null, null, null, null, null, null, null, null],
  [{"text":"那些水","idxes":[13,14,15]}, null, {"text":"舀","idxes":[11]}, null, null, null, null, null, {"text":"在那坑里","idxes":[4,5,6,7]}, null, {"text":"出","idxes":[12]}, null, null, null, null, null, null, null],
  [{"text":"水","idxes":[38]}, null, {"text":"舀","idxes":[40]},
   "假", // note this is just a standalone string
   null, null, null, {"text":"坑里","idxes":[35,36]}, null, null, null, null, null, null, null, null, null, null]
]
```
### Motivation
I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features).
```json
{"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]}
```
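One possible workaround (just a sketch; the field names `text` / `idxes` / `raw` are only illustrative) would be to reshape each of the 18 slots into a struct whose members are all nullable, since an Arrow column cannot mix dicts, strings and nulls:
```python
from datasets import Features, Sequence, Value

# Sketch: every slot becomes a struct with nullable members.
slot = {
    "text": Value("string"),            # null when the slot is empty
    "idxes": Sequence(Value("int64")),  # empty when the slot held a bare string
    "raw": Value("string"),             # holds standalone strings such as "假"
}
features = Features({
    "qid": Value("string"),
    "context": Value("string"),
    "outputs": [[slot]],  # list of lists of 18 slots
})
```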
### Your contribution
I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5702/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5701/comments | https://api.github.com/repos/huggingface/datasets/issues/5701/events | https://github.com/huggingface/datasets/pull/5701 | 1,652,931,399 | PR_kwDODunzps5NiSCy | 5,701 | Add Dataset.from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mariosasko Would you or another HF datasets maintainer be able to review this, please?",
"Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `file_format=\"parquet\"` ?\r\n\r\nParquet is often used when people want to stream the data to train models - which is suitable for big datasets. On the other hand Arrow is generally used for local memory mapping with random access.\r\n\r\n> Please note there was a previous PR adding this functionality\r\n\r\nAm I right to say that it uses the spark workers to prepare the Arrow files ? If so this should make the data preparation fast and won't fill up the executor's memory as in the previously proposed PR",
"Thanks for taking a look! Unlike the previous PR's approach, this implementation takes advantage of Spark mapping to distribute file writing over multiple tasks. (Also it doesn't load the entire dataset into memory :) )\r\n\r\nSupporting Parquet here sgtm; I'll modify the PR.\r\n\r\nI also updated the PR description with a common Spark-HF use case that we want to improve.",
"Hey @albertvillanova @lhoestq , would one of you be able to re-review please? Thank you!",
"@lhoestq this is ready for another pass! Thanks so much 🙏 ",
"Friendly ping @lhoestq , also cc @polinaeterna who may be able to help take a look?",
"Merging `main` into this branch should fix the CI",
"Just rebased @lhoestq ",
"Thanks @lhoestq ! Is there a way for me to trigger the github workflow myself to triage the test failure? I'm not able to repro the test failures locally.",
"There were two test issues in the workflow that I wasn't able to reproduce locally:\r\n\r\n- Python 3.7: createDataFrame fails due to a pickling error. I modified the tests to instead write and read from json files\r\n- Python 3.10: A worker crashes for unknown reasons. I modified the spark setup to explicitly specify local mode in case it was trying to do something else; let's see if that fixes the issue",
"Also one more question @lhoestq when is the next datasets release? We're hoping this can make it in",
"I just re-ran the CI.\r\nI think we can do a release right after this PR is merged ;)",
"Thanks all! @lhoestq could we re-run CI again please? I think we have to disable this feature on python 3.7 due to the pickling error. The other failure was due to https://issues.apache.org/jira/browse/SPARK-30952 so I rewrote the df processing",
"Thanks @lhoestq , this is ready for another CI run. I pinned the pyspark version to see if that fixes the pickling issue",
"The remaining CI issues have been addressed! They were\r\n\r\n- dill=0.3.1.1 is incompatible with cloudpickle, used by Spark. The min-dependency tests use this dill version, and those were failing. I added a skip-test annotation to skip Spark tests when using this dill version. This shouldn't be a production issue since if users are using that version of dill, they won't really be able to do anything with Spark anyway.\r\n- One of the Spark APIs used in this feature (mapInArrow) is incompatible with Windows. I filed a Spark ticket for the team to investigate. For the tests, I added another annotation to skip Spark tests on Windows. In the next PR (adding streaming mode), we should be able to support Windows since that won't use mapInArrow.\r\n\r\nI ran the CI on my forked branch: https://github.com/maddiedawson/datasets/pull/2 Everything passes except one instance of tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore; it looks like a flake.\r\n\r\n@lhoestq granted that the CI passes here, is this ok to merge and release? We'd like to put out a blog post tomorrow to broadcast this to Spark users!",
"Thanks @lhoestq ! Could you help take a look at the error please? Seems unrelated...\r\n\r\nFAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_multiprocessing_on_disk - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\\\Users\\\\RUNNER~1\\\\AppData\\\\Local\\\\Temp\\\\tmptfnrdj4x\\\\cache-5c5687cf5629c97a_00000_of_00002.arrow'\r\n===== 1 failed, 2152 passed, 23 skipped, 20 warnings in 461.68s (0:07:41) =====",
"The blog is live btw! https://www.databricks.com/blog/contributing-spark-loader-for-hugging-face-datasets Hopefully there can be a release today?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012686 / 0.011353 (0.001333) | 0.006051 / 0.011008 (-0.004957) | 0.123057 / 0.038508 (0.084549) | 0.033238 / 0.023109 (0.010128) | 0.388207 / 0.275898 (0.112309) | 0.393972 / 0.323480 (0.070492) | 0.006645 / 0.007986 (-0.001340) | 0.006715 / 0.004328 (0.002386) | 0.098348 / 0.004250 (0.094097) | 0.041410 / 0.037052 (0.004358) | 0.380123 / 0.258489 (0.121634) | 0.427982 / 0.293841 (0.134141) | 0.052194 / 0.128546 (-0.076352) | 0.018775 / 0.075646 (-0.056871) | 0.399063 / 0.419271 (-0.020209) | 0.061019 / 0.043533 (0.017487) | 0.370943 / 0.255139 (0.115804) | 0.398326 / 0.283200 (0.115127) | 0.136893 / 0.141683 (-0.004790) | 1.777431 / 1.452155 (0.325276) | 1.844354 / 1.492716 (0.351638) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267296 / 0.018006 (0.249289) | 0.565133 / 0.000490 (0.564643) | 0.005811 / 0.000200 (0.005611) | 0.000122 / 0.000054 (0.000068) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027009 / 0.037411 (-0.010402) | 0.125907 / 0.014526 (0.111381) | 0.122111 / 0.176557 (-0.054445) | 0.189023 / 0.737135 (-0.548112) | 0.140510 / 0.296338 (-0.155829) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.589269 / 0.215209 (0.374060) | 6.038038 / 2.077655 (3.960384) | 2.394681 / 1.504120 (0.890561) | 2.099268 / 1.541195 (0.558073) | 2.105146 / 1.468490 
(0.636656) | 1.216304 / 4.584777 (-3.368473) | 5.823110 / 3.745712 (2.077397) | 4.999323 / 5.269862 (-0.270539) | 2.781554 / 4.565676 (-1.784122) | 0.148370 / 0.424275 (-0.275905) | 0.015163 / 0.007607 (0.007556) | 0.775153 / 0.226044 (0.549109) | 7.425314 / 2.268929 (5.156385) | 3.320254 / 55.444624 (-52.124370) | 2.718595 / 6.876477 (-4.157881) | 2.696215 / 2.142072 (0.554142) | 1.452249 / 4.805227 (-3.352978) | 0.281355 / 6.500664 (-6.219309) | 0.088146 / 0.075469 (0.012677) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.495718 / 1.841788 (-0.346070) | 17.498714 / 8.074308 (9.424405) | 20.109705 / 10.191392 (9.918313) | 0.233053 / 0.680424 (-0.447371) | 0.028336 / 0.534201 (-0.505865) | 0.538146 / 0.579283 (-0.041137) | 0.642106 / 0.434364 (0.207742) | 0.597214 / 0.540337 (0.056876) | 0.732219 / 1.386936 (-0.654717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008153 / 0.011353 (-0.003200) | 0.005605 / 0.011008 (-0.005403) | 0.096159 / 0.038508 (0.057651) | 0.034102 / 0.023109 (0.010992) | 0.428091 / 0.275898 (0.152193) | 0.476535 / 0.323480 (0.153056) | 0.006278 / 0.007986 (-0.001708) | 0.006752 / 0.004328 (0.002424) | 0.100553 / 0.004250 (0.096302) | 0.045546 / 0.037052 (0.008494) | 0.463236 / 0.258489 (0.204747) | 0.502512 / 0.293841 (0.208671) | 0.051014 / 0.128546 (-0.077533) | 0.018499 / 0.075646 (-0.057148) | 0.127587 / 0.419271 (-0.291685) | 0.059254 / 0.043533 (0.015722) | 0.432248 / 0.255139 (0.177109) | 0.462002 / 0.283200 (0.178802) | 0.124918 / 0.141683 (-0.016765) | 1.689740 / 1.452155 (0.237585) | 1.871546 / 1.492716 (0.378830) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274844 / 0.018006 (0.256838) | 0.570522 / 0.000490 (0.570032) | 0.004008 / 0.000200 (0.003808) | 0.000146 / 0.000054 (0.000091) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025323 / 0.037411 (-0.012088) | 0.116323 / 0.014526 (0.101797) | 0.129434 / 0.176557 (-0.047122) | 0.187069 / 0.737135 (-0.550067) | 0.134459 / 0.296338 (-0.161880) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.633551 / 0.215209 (0.418341) | 6.290078 / 2.077655 (4.212423) | 2.692071 / 1.504120 (1.187951) | 2.354344 / 1.541195 (0.813149) | 2.409260 / 1.468490 (0.940770) | 1.270515 / 4.584777 (-3.314261) | 5.552982 / 3.745712 (1.807270) | 3.041417 / 5.269862 (-2.228444) | 1.920634 / 4.565676 (-2.645043) | 0.142500 / 0.424275 (-0.281775) | 0.014378 / 0.007607 (0.006770) | 0.786444 / 0.226044 (0.560399) | 7.711558 / 2.268929 (5.442630) | 3.439688 / 55.444624 (-52.004936) | 2.742314 / 6.876477 (-4.134163) | 2.800531 / 2.142072 (0.658458) | 1.405843 / 4.805227 (-3.399385) | 0.245322 / 6.500664 (-6.255342) | 0.076662 / 0.075469 (0.001193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.592961 / 1.841788 (-0.248827) | 18.165647 / 8.074308 (10.091339) | 20.011433 / 10.191392 (9.820041) | 0.240558 / 0.680424 (-0.439866) | 0.026045 / 0.534201 (-0.508156) | 0.529610 / 0.579283 (-0.049674) | 0.652494 / 0.434364 (0.218130) | 0.612284 / 0.540337 (0.071947) | 0.733180 / 1.386936 (-0.653756) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ea251c726c73bd076a1bef7e39e2ac4e97c8d166 \"CML watermark\")\n",
"python 3.9.2\r\nGot an error _pickle.PicklingError use Dataset.from_spark.\r\n\r\nDid the dataset import load data from spark dataframe using multi-node Spark cluster\r\ndf = spark.read.parquet(args.input_data).repartition(50)\r\nds = Dataset.from_spark(df, keep_in_memory=True,\r\n cache_dir=\"/pnc-data/data/nuplan/t5_spark/cache_data\")\r\nds.save_to_disk(args.output_data)\r\n\r\nError : \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n",
"Hi @yanzia12138 ! Could you open a new issue please and share the full stack trace ? This will help to know what happened exactly"
] | 2023-04-03T23:51:29 | 2023-06-16T16:39:32 | 2023-04-26T15:43:39 | CONTRIBUTOR | null | Adds static method Dataset.from_spark to create datasets from Spark DataFrames.
This approach relieves users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset.
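A minimal usage sketch of the proposed API (the dataframe below is just a toy example):
```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.master("local[*]").getOrCreate()
df = spark.createDataFrame([("hello",), ("world",)], schema=["text"])

# The Arrow conversion is distributed over Spark tasks instead of collecting
# the whole dataframe on the driver.
ds = Dataset.from_spark(df)
print(ds)
```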
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions",
"total_count": 6,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 4,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5701/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5701",
"html_url": "https://github.com/huggingface/datasets/pull/5701",
"diff_url": "https://github.com/huggingface/datasets/pull/5701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5701.patch",
"merged_at": "2023-04-26T15:43:39"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5700/comments | https://api.github.com/repos/huggingface/datasets/issues/5700/events | https://github.com/huggingface/datasets/pull/5700 | 1,652,527,530 | PR_kwDODunzps5Ng6g_ | 5,700 | fix: fix wrong modification of the 'cache_file_name' -related paramet… | {
"login": "FrancoisNoyez",
"id": 47528215,
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisNoyez",
"html_url": "https://github.com/FrancoisNoyez",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`",
"@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.",
"Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that",
"Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.",
"We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately",
"> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?"
] | 2023-04-03T18:05:26 | 2023-04-06T17:17:27 | null | NONE | null | …ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5700/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5700",
"html_url": "https://github.com/huggingface/datasets/pull/5700",
"diff_url": "https://github.com/huggingface/datasets/pull/5700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5700.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5699/comments | https://api.github.com/repos/huggingface/datasets/issues/5699/events | https://github.com/huggingface/datasets/issues/5699 | 1,652,437,419 | I_kwDODunzps5ifjGr | 5,699 | Issue when wanting to split in memory a cached dataset | {
"login": "FrancoisNoyez",
"id": 47528215,
"node_id": "MDQ6VXNlcjQ3NTI4MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancoisNoyez",
"html_url": "https://github.com/FrancoisNoyez",
"followers_url": "https://api.github.com/users/FrancoisNoyez/followers",
"following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions",
"organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs",
"repos_url": "https://api.github.com/users/FrancoisNoyez/repos",
"events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)"
] | 2023-04-03T17:00:07 | 2023-04-04T16:52:42 | null | NONE | null | ### Describe the bug
**In the 'train_test_split' method of the Dataset class** (defined in datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line.
Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both."**
Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result.
Aside from this being inconvenient, **the code which leads up to that situation seems simply wrong** to me: the input variables should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case.
To fix this, I suggest using other variables / other variable names to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code.
Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making the value of 'indices_cache_file_name' irrelevant? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, the value of indices_cache_file_name does not matter, since the data will be written to a stream buffer anyway.
Hence I suggest removing the raising of this exception in those circumstances; notably, removing it from the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods.
### Steps to reproduce the bug
```python
import datasets
def generate_examples():
for i in range(10):
yield {"id": i}
dataset_ = datasets.Dataset.from_generator(
generate_examples,
keep_in_memory=False,
)
dataset_.train_test_split(
test_size=3,
shuffle=False,
keep_in_memory=True,
train_indices_cache_file_name=None,
test_indices_cache_file_name=None,
)
```
### Expected behavior
The result of the above code should be a DatasetDict instance.
Instead, we get the following exception stack:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset_.train_test_split(
2 test_size=3,
3 shuffle=False,
4 keep_in_memory=True,
5 train_indices_cache_file_name=None,
6 test_indices_cache_file_name=None,
7 )
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint)
4425 test_indices = permutation[:n_test]
4426 train_indices = permutation[n_test : (n_test + n_train)]
-> 4428 train_split = self.select(
4429 indices=train_indices,
4430 keep_in_memory=keep_in_memory,
4431 indices_cache_file_name=train_indices_cache_file_name,
4432 writer_batch_size=writer_batch_size,
4433 new_fingerprint=train_new_fingerprint,
4434 )
4435 test_split = self.select(
4436 indices=test_indices,
4437 keep_in_memory=keep_in_memory,
(...)
4440 new_fingerprint=test_new_fingerprint,
4441 )
4443 return DatasetDict({"train": train_split, "test": test_split})
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs)
521 self_format = {
522 "type": self._format_type,
523 "format_kwargs": self._format_kwargs,
524 "columns": self._format_columns,
525 "output_all_columns": self._output_all_columns,
526 }
527 # apply actual function
--> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
530 # re-apply format to the output
File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
507 validate_fingerprint(kwargs[fingerprint_name])
509 # Call actual function
--> 511 out = func(dataset, *args, **kwargs)
513 # Update fingerprint of in-place transforms + update in-place history of transforms
515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails
File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
3645 """Create a new dataset with rows selected following the list/array of indices.
3646
3647 Args:
(...)
3676 ```
3677 """
3678 if keep_in_memory and indices_cache_file_name is not None:
-> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.")
3681 if len(self.list_indexes()) > 0:
3682 raise DatasetTransformationNotAllowedError(
3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it."
3684 )
ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both.
```
### Environment info
- `datasets` version: 2.11.1.dev0
- Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5
- Python version: 3.8.12
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0
***
***
EDIT:
Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5699/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5698/comments | https://api.github.com/repos/huggingface/datasets/issues/5698/events | https://github.com/huggingface/datasets/issues/5698 | 1,652,183,611 | I_kwDODunzps5ielI7 | 5,698 | Add Qdrant as another search index | {
"login": "kacperlukawski",
"id": 2649301,
"node_id": "MDQ6VXNlcjI2NDkzMDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kacperlukawski",
"html_url": "https://github.com/kacperlukawski",
"followers_url": "https://api.github.com/users/kacperlukawski/followers",
"following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}",
"gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions",
"organizations_url": "https://api.github.com/users/kacperlukawski/orgs",
"repos_url": "https://api.github.com/users/kacperlukawski/repos",
"events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}",
"received_events_url": "https://api.github.com/users/kacperlukawski/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@mariosasko I'd appreciate your feedback on this. "
] | 2023-04-03T14:25:19 | 2023-04-11T10:28:40 | null | CONTRIBUTOR | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another available search index, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database, such as Qdrant, is a different tool based on similarity (like FAISS), but it is not limited to a single machine. That makes a vector database well-suited for bigger datasets and for collaboration when several people want to access a particular dataset.
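For comparison, this is roughly how the existing FAISS integration is used; a Qdrant-backed index could expose a similar interface (the `add_qdrant_index` name and signature below are purely hypothetical):
```python
import numpy as np
from datasets import load_dataset

# Hypothetical dataset with an "embeddings" column of float vectors
ds = load_dataset("user/my-embeddings-dataset", split="train")

# Existing API: build a local FAISS index over the vector column
ds.add_faiss_index(column="embeddings")
query = np.random.rand(768).astype("float32")  # the dimension is just an example
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)

# Hypothetical Qdrant equivalent (not implemented yet):
# ds.add_qdrant_index(column="embeddings", url="http://localhost:6333")
```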
### Your contribution
I can provide a PR implementing that functionality on my own. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5698/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5697/comments | https://api.github.com/repos/huggingface/datasets/issues/5697/events | https://github.com/huggingface/datasets/pull/5697 | 1,651,812,614 | PR_kwDODunzps5NefxZ | 5,697 | Raise an error on missing distributed seed | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009644 / 0.011353 (-0.001709) | 0.006407 / 0.011008 (-0.004601) | 0.148353 / 0.038508 (0.109845) | 0.037537 / 0.023109 (0.014428) | 0.379697 / 0.275898 (0.103799) | 0.466260 / 0.323480 (0.142780) | 0.007884 / 0.007986 (-0.000102) | 0.005140 / 0.004328 (0.000812) | 0.111078 / 0.004250 (0.106827) | 0.049429 / 0.037052 (0.012377) | 0.364766 / 0.258489 (0.106277) | 0.453809 / 0.293841 (0.159968) | 0.051918 / 0.128546 (-0.076628) | 0.020081 / 0.075646 (-0.055566) | 0.616041 / 0.419271 (0.196770) | 0.059834 / 0.043533 (0.016301) | 0.373104 / 0.255139 (0.117965) | 0.419304 / 0.283200 (0.136104) | 0.113526 / 0.141683 (-0.028156) | 1.827160 / 1.452155 (0.375006) | 1.912092 / 1.492716 (0.419376) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269584 / 0.018006 (0.251578) | 0.554100 / 0.000490 (0.553610) | 0.006618 / 0.000200 (0.006418) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025280 / 0.037411 (-0.012131) | 0.123116 / 0.014526 (0.108591) | 0.127674 / 0.176557 (-0.048883) | 0.189106 / 0.737135 (-0.548030) | 0.142072 / 0.296338 (-0.154267) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.602201 / 0.215209 (0.386992) | 5.959610 / 2.077655 (3.881956) | 2.404856 / 1.504120 (0.900736) | 2.175017 / 1.541195 (0.633823) | 2.154360 / 1.468490 
(0.685870) | 1.265339 / 4.584777 (-3.319438) | 5.598429 / 3.745712 (1.852716) | 5.130249 / 5.269862 (-0.139612) | 2.764922 / 4.565676 (-1.800754) | 0.143232 / 0.424275 (-0.281043) | 0.014721 / 0.007607 (0.007114) | 0.764734 / 0.226044 (0.538689) | 7.518810 / 2.268929 (5.249882) | 3.344734 / 55.444624 (-52.099890) | 2.601158 / 6.876477 (-4.275319) | 2.726018 / 2.142072 (0.583945) | 1.397918 / 4.805227 (-3.407309) | 0.253277 / 6.500664 (-6.247387) | 0.077772 / 0.075469 (0.002303) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.499535 / 1.841788 (-0.342253) | 17.782490 / 8.074308 (9.708182) | 21.953064 / 10.191392 (11.761672) | 0.248753 / 0.680424 (-0.431671) | 0.029194 / 0.534201 (-0.505007) | 0.529700 / 0.579283 (-0.049583) | 0.618412 / 0.434364 (0.184048) | 0.605062 / 0.540337 (0.064725) | 0.725661 / 1.386936 (-0.661275) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009489 / 0.011353 (-0.001864) | 0.006423 / 0.011008 (-0.004585) | 0.096789 / 0.038508 (0.058281) | 0.034639 / 0.023109 (0.011530) | 0.403875 / 0.275898 (0.127977) | 0.439368 / 0.323480 (0.115888) | 0.006354 / 0.007986 (-0.001631) | 0.006794 / 0.004328 (0.002466) | 0.095537 / 0.004250 (0.091287) | 0.047749 / 0.037052 (0.010697) | 0.424157 / 0.258489 (0.165668) | 0.487825 / 0.293841 (0.193984) | 0.054675 / 0.128546 (-0.073872) | 0.021349 / 0.075646 (-0.054297) | 0.108917 / 0.419271 (-0.310354) | 0.075891 / 0.043533 (0.032358) | 0.412889 / 0.255139 (0.157750) | 0.464512 / 0.283200 (0.181312) | 0.118832 / 0.141683 (-0.022850) | 1.721215 / 1.452155 (0.269060) | 1.857195 / 1.492716 (0.364478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248308 / 0.018006 (0.230302) | 0.559496 / 0.000490 (0.559006) | 0.007136 / 0.000200 (0.006936) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031772 / 0.037411 (-0.005639) | 0.123565 / 0.014526 (0.109039) | 0.132660 / 0.176557 (-0.043896) | 0.201428 / 0.737135 (-0.535707) | 0.135238 / 0.296338 (-0.161101) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.646978 / 0.215209 (0.431769) | 6.183477 / 2.077655 (4.105822) | 2.782117 / 1.504120 (1.277997) | 2.294093 / 1.541195 (0.752898) | 2.346932 / 1.468490 (0.878442) | 1.239085 / 4.584777 (-3.345692) | 5.696364 / 3.745712 (1.950652) | 4.980102 / 5.269862 (-0.289759) | 2.278116 / 4.565676 (-2.287560) | 0.157339 / 0.424275 (-0.266936) | 0.014936 / 0.007607 (0.007329) | 0.778001 / 0.226044 (0.551957) | 7.708066 / 2.268929 (5.439138) | 3.412235 / 55.444624 (-52.032389) | 2.670670 / 6.876477 (-4.205806) | 2.731802 / 2.142072 (0.589730) | 1.446516 / 4.805227 (-3.358712) | 0.263689 / 6.500664 (-6.236975) | 0.086359 / 0.075469 (0.010890) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.573169 / 1.841788 (-0.268619) | 17.690842 / 8.074308 (9.616534) | 20.343336 / 10.191392 (10.151944) | 0.231028 / 0.680424 (-0.449396) | 0.025954 / 0.534201 (-0.508247) | 0.570554 / 0.579283 (-0.008729) | 0.610453 / 0.434364 (0.176089) | 0.675830 / 0.540337 (0.135493) | 0.790650 / 1.386936 (-0.596286) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d094ed07823bfb3271f3a9006daa1f92a64967a5 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007553 / 0.011353 (-0.003800) | 0.005426 / 0.011008 (-0.005582) | 0.096550 / 0.038508 (0.058042) | 0.034393 / 0.023109 (0.011284) | 0.322297 / 0.275898 (0.046399) | 0.340943 / 0.323480 (0.017463) | 0.006350 / 0.007986 (-0.001635) | 0.005700 / 0.004328 (0.001372) | 0.074929 / 0.004250 (0.070678) | 0.054819 / 0.037052 (0.017767) | 0.320151 / 0.258489 (0.061662) | 0.346957 / 0.293841 (0.053116) | 0.036659 / 0.128546 (-0.091887) | 0.012443 / 0.075646 (-0.063204) | 0.332232 / 0.419271 (-0.087040) | 0.051467 / 0.043533 (0.007934) | 0.310952 / 0.255139 (0.055813) | 0.325617 / 0.283200 (0.042417) | 0.104908 / 0.141683 (-0.036775) | 1.446752 / 1.452155 (-0.005403) | 1.558773 / 1.492716 (0.066056) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300639 / 0.018006 (0.282633) | 0.499901 / 0.000490 (0.499411) | 0.007340 / 0.000200 (0.007140) | 0.000255 / 0.000054 (0.000201) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027206 / 0.037411 (-0.010206) | 0.105603 / 0.014526 (0.091077) | 0.118669 / 0.176557 (-0.057887) | 0.174050 / 0.737135 (-0.563086) | 0.125099 / 0.296338 (-0.171239) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404285 / 0.215209 (0.189076) | 4.034587 / 2.077655 (1.956933) | 1.812639 / 1.504120 (0.308519) | 1.625745 / 1.541195 (0.084551) | 1.735523 / 1.468490 
(0.267033) | 0.709699 / 4.584777 (-3.875078) | 3.802196 / 3.745712 (0.056484) | 3.656984 / 5.269862 (-1.612877) | 1.968470 / 4.565676 (-2.597206) | 0.086612 / 0.424275 (-0.337663) | 0.012368 / 0.007607 (0.004761) | 0.502622 / 0.226044 (0.276577) | 5.017876 / 2.268929 (2.748948) | 2.279794 / 55.444624 (-53.164831) | 1.956938 / 6.876477 (-4.919538) | 2.150430 / 2.142072 (0.008357) | 0.847691 / 4.805227 (-3.957536) | 0.170157 / 6.500664 (-6.330507) | 0.064141 / 0.075469 (-0.011328) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172246 / 1.841788 (-0.669542) | 15.229444 / 8.074308 (7.155136) | 14.715913 / 10.191392 (4.524521) | 0.192501 / 0.680424 (-0.487923) | 0.017972 / 0.534201 (-0.516229) | 0.423834 / 0.579283 (-0.155449) | 0.423019 / 0.434364 (-0.011345) | 0.493298 / 0.540337 (-0.047039) | 0.589833 / 1.386936 (-0.797103) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007773 / 0.011353 (-0.003580) | 0.005449 / 0.011008 (-0.005560) | 0.075180 / 0.038508 (0.036672) | 0.035221 / 0.023109 (0.012111) | 0.338169 / 0.275898 (0.062271) | 0.374002 / 0.323480 (0.050522) | 0.006391 / 0.007986 (-0.001595) | 0.004406 / 0.004328 (0.000078) | 0.074925 / 0.004250 (0.070675) | 0.056527 / 0.037052 (0.019475) | 0.338071 / 0.258489 (0.079582) | 0.391882 / 0.293841 (0.098041) | 0.037241 / 0.128546 (-0.091305) | 0.012546 / 0.075646 (-0.063100) | 0.087331 / 0.419271 (-0.331940) | 0.049851 / 0.043533 (0.006318) | 0.335264 / 0.255139 (0.080125) | 0.354813 / 0.283200 (0.071614) | 0.110614 / 0.141683 (-0.031069) | 1.432782 / 1.452155 (-0.019372) | 1.548800 / 1.492716 (0.056083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307892 / 0.018006 (0.289886) | 0.518809 / 0.000490 (0.518319) | 0.004058 / 0.000200 (0.003858) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029155 / 0.037411 (-0.008256) | 0.111706 / 0.014526 (0.097180) | 0.122964 / 0.176557 (-0.053592) | 0.170939 / 0.737135 (-0.566196) | 0.128538 / 0.296338 (-0.167801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426529 / 0.215209 (0.211320) | 4.254218 / 2.077655 (2.176563) | 2.011455 / 1.504120 (0.507335) | 1.817397 / 1.541195 (0.276202) | 1.952915 / 1.468490 (0.484425) | 0.705052 / 4.584777 (-3.879725) | 3.844458 / 3.745712 (0.098746) | 3.592754 / 5.269862 (-1.677107) | 1.573567 / 4.565676 (-2.992109) | 0.086834 / 0.424275 (-0.337441) | 0.012389 / 0.007607 (0.004782) | 0.541695 / 0.226044 (0.315650) | 5.224492 / 2.268929 (2.955564) | 2.473648 / 55.444624 (-52.970976) | 2.167458 / 6.876477 (-4.709019) | 2.253319 / 2.142072 (0.111246) | 0.836322 / 4.805227 (-3.968905) | 0.168680 / 6.500664 (-6.331984) | 0.065699 / 0.075469 (-0.009770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281886 / 1.841788 (-0.559902) | 15.451741 / 8.074308 (7.377433) | 14.906870 / 10.191392 (4.715478) | 0.168554 / 0.680424 (-0.511870) | 0.017365 / 0.534201 (-0.516836) | 0.434183 / 0.579283 (-0.145100) | 0.421891 / 0.434364 (-0.012473) | 0.538993 / 0.540337 (-0.001344) | 0.636212 / 1.386936 (-0.750724) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1f428b8172319a6bfe95d7a4356b1d14a8d386d8 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007362 / 0.011353 (-0.003991) | 0.004992 / 0.011008 (-0.006016) | 0.098730 / 0.038508 (0.060222) | 0.033673 / 0.023109 (0.010563) | 0.296334 / 0.275898 (0.020436) | 0.328208 / 0.323480 (0.004728) | 0.005658 / 0.007986 (-0.002327) | 0.004130 / 0.004328 (-0.000199) | 0.074596 / 0.004250 (0.070346) | 0.048230 / 0.037052 (0.011178) | 0.295631 / 0.258489 (0.037142) | 0.347176 / 0.293841 (0.053335) | 0.036359 / 0.128546 (-0.092187) | 0.011889 / 0.075646 (-0.063758) | 0.332889 / 0.419271 (-0.086382) | 0.049708 / 0.043533 (0.006175) | 0.291207 / 0.255139 (0.036068) | 0.311066 / 0.283200 (0.027867) | 0.098418 / 0.141683 (-0.043265) | 1.415450 / 1.452155 (-0.036705) | 1.526928 / 1.492716 (0.034212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212636 / 0.018006 (0.194630) | 0.432337 / 0.000490 (0.431847) | 0.006839 / 0.000200 (0.006639) | 0.000205 / 0.000054 (0.000150) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026045 / 0.037411 (-0.011366) | 0.107427 / 0.014526 (0.092901) | 0.114634 / 0.176557 (-0.061922) | 0.169943 / 0.737135 (-0.567192) | 0.123290 / 0.296338 (-0.173048) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409432 / 0.215209 (0.194223) | 4.097910 / 2.077655 (2.020255) | 1.857177 / 1.504120 (0.353057) | 1.672355 / 1.541195 (0.131160) | 1.740130 / 1.468490 
(0.271640) | 0.706520 / 4.584777 (-3.878257) | 3.773606 / 3.745712 (0.027893) | 2.101635 / 5.269862 (-3.168226) | 1.326295 / 4.565676 (-3.239382) | 0.085672 / 0.424275 (-0.338604) | 0.012142 / 0.007607 (0.004534) | 0.501168 / 0.226044 (0.275123) | 5.049784 / 2.268929 (2.780855) | 2.322477 / 55.444624 (-53.122148) | 1.990105 / 6.876477 (-4.886372) | 2.115003 / 2.142072 (-0.027070) | 0.837518 / 4.805227 (-3.967709) | 0.168457 / 6.500664 (-6.332207) | 0.064622 / 0.075469 (-0.010847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188152 / 1.841788 (-0.653635) | 14.991585 / 8.074308 (6.917276) | 14.635187 / 10.191392 (4.443795) | 0.183708 / 0.680424 (-0.496716) | 0.017452 / 0.534201 (-0.516749) | 0.418963 / 0.579283 (-0.160320) | 0.428893 / 0.434364 (-0.005471) | 0.502108 / 0.540337 (-0.038229) | 0.596345 / 1.386936 (-0.790591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007404 / 0.011353 (-0.003949) | 0.005148 / 0.011008 (-0.005860) | 0.074785 / 0.038508 (0.036277) | 0.033815 / 0.023109 (0.010706) | 0.332752 / 0.275898 (0.056854) | 0.368018 / 0.323480 (0.044538) | 0.005642 / 0.007986 (-0.002344) | 0.004041 / 0.004328 (-0.000287) | 0.073455 / 0.004250 (0.069205) | 0.047380 / 0.037052 (0.010328) | 0.337017 / 0.258489 (0.078528) | 0.384185 / 0.293841 (0.090344) | 0.036592 / 0.128546 (-0.091954) | 0.012109 / 0.075646 (-0.063537) | 0.086862 / 0.419271 (-0.332410) | 0.049030 / 0.043533 (0.005497) | 0.336542 / 0.255139 (0.081403) | 0.350295 / 0.283200 (0.067096) | 0.100998 / 0.141683 (-0.040685) | 1.469749 / 1.452155 (0.017594) | 1.588355 / 1.492716 (0.095639) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227552 / 0.018006 (0.209546) | 0.438087 / 0.000490 (0.437598) | 0.000394 / 0.000200 (0.000194) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030575 / 0.037411 (-0.006836) | 0.111914 / 0.014526 (0.097388) | 0.124583 / 0.176557 (-0.051973) | 0.175471 / 0.737135 (-0.561665) | 0.129535 / 0.296338 (-0.166803) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425625 / 0.215209 (0.210416) | 4.228328 / 2.077655 (2.150673) | 2.021087 / 1.504120 (0.516967) | 1.832550 / 1.541195 (0.291355) | 1.925572 / 1.468490 (0.457082) | 0.690772 / 4.584777 (-3.894005) | 3.724900 / 3.745712 (-0.020813) | 2.080286 / 5.269862 (-3.189576) | 1.316854 / 4.565676 (-3.248822) | 0.085123 / 0.424275 (-0.339152) | 0.012078 / 0.007607 (0.004471) | 0.525802 / 0.226044 (0.299758) | 5.242598 / 2.268929 (2.973670) | 2.491596 / 55.444624 (-52.953028) | 2.125156 / 6.876477 (-4.751320) | 2.185922 / 2.142072 (0.043850) | 0.823116 / 4.805227 (-3.982111) | 0.165188 / 6.500664 (-6.335476) | 0.063970 / 0.075469 (-0.011499) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256948 / 1.841788 (-0.584840) | 14.981990 / 8.074308 (6.907682) | 14.565266 / 10.191392 (4.373874) | 0.175064 / 0.680424 (-0.505360) | 0.017628 / 0.534201 (-0.516573) | 0.429979 / 0.579283 (-0.149304) | 0.422509 / 0.434364 (-0.011855) | 0.546262 / 0.540337 (0.005924) | 0.647103 / 1.386936 (-0.739833) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0803a006db1c395ac715662cc6079651f77c11ea \"CML watermark\")\n"
] | 2023-04-03T10:44:58 | 2023-04-04T15:05:24 | 2023-04-04T14:58:16 | MEMBER | null | close https://github.com/huggingface/datasets/issues/5696 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5697/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5697",
"html_url": "https://github.com/huggingface/datasets/pull/5697",
"diff_url": "https://github.com/huggingface/datasets/pull/5697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5697.patch",
"merged_at": "2023-04-04T14:58:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5696/comments | https://api.github.com/repos/huggingface/datasets/issues/5696/events | https://github.com/huggingface/datasets/issues/5696 | 1,651,707,008 | I_kwDODunzps5icwyA | 5,696 | Shuffle a sharded iterable dataset without seed can lead to duplicate data | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-04-03T09:40:03 | 2023-04-04T14:58:18 | 2023-04-04T14:58:18 | MEMBER | null | As reported in https://github.com/huggingface/datasets/issues/5360
If `seed=None` in `.shuffle()`, shuffled datasets don't use the same shuffling seed across nodes.
Because of that, the list of shards is not shuffled the same way across nodes, and therefore some shards may be assigned to multiple nodes instead of exactly one.
This can happen only when you have a number of shards that is a factor of the number of nodes.
The current workaround is to always set a `seed` in `.shuffle()` (a minimal sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5696/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5696/timeline | null | completed | null | null | false |
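The workaround named at the end of issue 5696 above can be illustrated with a short sketch. This is a minimal, hypothetical example: the repo id, seed, `rank`, and `world_size` are placeholders and are not taken from the issue; it only shows where the explicit `seed` goes when streaming a sharded dataset across several nodes.

```python
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Stream a sharded dataset; "user/my-sharded-dataset" is a hypothetical placeholder.
ds = load_dataset("user/my-sharded-dataset", split="train", streaming=True)

# Passing an explicit seed makes every node shuffle the shard list identically,
# so each shard ends up assigned to exactly one node.
ds = ds.shuffle(seed=42, buffer_size=1000)

# rank/world_size would normally come from the training launcher (e.g. torchrun).
ds = split_dataset_by_node(ds, rank=0, world_size=8)

for example in ds:
    pass  # training loop goes here
```

Without `seed=42` (i.e. with `seed=None`), each node may draw a different shard order, which is exactly the duplication described in the issue.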
https://api.github.com/repos/huggingface/datasets/issues/5695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5695/comments | https://api.github.com/repos/huggingface/datasets/issues/5695/events | https://github.com/huggingface/datasets/issues/5695 | 1,650,974,156 | I_kwDODunzps5iZ93M | 5,695 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError | {
"login": "amariucaitheodor",
"id": 32778667,
"node_id": "MDQ6VXNlcjMyNzc4NjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/32778667?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amariucaitheodor",
"html_url": "https://github.com/amariucaitheodor",
"followers_url": "https://api.github.com/users/amariucaitheodor/followers",
"following_url": "https://api.github.com/users/amariucaitheodor/following{/other_user}",
"gists_url": "https://api.github.com/users/amariucaitheodor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amariucaitheodor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amariucaitheodor/subscriptions",
"organizations_url": "https://api.github.com/users/amariucaitheodor/orgs",
"repos_url": "https://api.github.com/users/amariucaitheodor/repos",
"events_url": "https://api.github.com/users/amariucaitheodor/events{/privacy}",
"received_events_url": "https://api.github.com/users/amariucaitheodor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! It looks like an issue with PyArrow: https://issues.apache.org/jira/browse/ARROW-5030\r\n\r\nIt appears it can happen when you have parquet files with row groups larger than 2GB.\r\nI can see that your parquet files are around 10GB. It is usually advised to keep a value around the default value 500MB to avoid these issues.\r\n\r\nNote that currently the row group size is simply defined by the number of rows `datasets.config.DEFAULT_MAX_BATCH_SIZE`, so reducing this value could let you have parquet files bigger than 2GB and with row groups lower than 2GB.\r\n\r\nWould it be possible for you to re-upload the dataset with the default shard size 500MB ?",
"Hey, thanks for the reply! I've since switched to working with the locally-saved dataset (which works).\r\nMaybe it makes sense to show a warning for uploads with large shard sizes? Since the functionality completely breaks (due to the PyArrow bug).",
"Just tried uploading the same dataset with 500MB shards, I get an errors 4 hours in:\r\n\r\n```\r\nPushing dataset shards to the dataset hub: 25%|██▍ | 358/1453 [4:40:31<14:18:00, 47.01s/it]\r\nTraceback (most recent call last):\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 344, in _inner_upload_lfs_object\r\n return _upload_lfs_object(operation=operation, lfs_batch_action=batch_action, token=token)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 391, in _upload_lfs_object\r\n lfs_upload(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 254, in lfs_upload\r\n _upload_multi_part(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/lfs.py\", line 374, in _upload_multi_part\r\n hf_raise_for_status(part_upload_res)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 301, in hf_raise_for_status\r\n raise HfHubHTTPError(str(e), response=response) from e\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py\", line 46, in __init__\r\n server_data = response.json()\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/requests/models.py\", line 899, in json\r\n return complexjson.loads(\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"process_wit.py\", line 146, in <module>\r\n dataset.push_to_hub(FINAL_PATH, max_shard_size=\"500MB\", private=False)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/dataset_dict.py\", line 1534, in push_to_hub\r\n repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 4804, in _push_parquet_shards_to_hub\r\n _retry(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 281, in _retry\r\n return func(*func_args, **func_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2593, in upload_file\r\n commit_info = self.create_commit(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/hf_api.py\", line 2411, in create_commit\r\n upload_lfs_files(\r\n File 
\"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py\", line 120, in _inner_fn\r\n return fn(*args, **kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 351, in upload_lfs_files\r\n thread_map(\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 69, in thread_map\r\n return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/contrib/concurrent.py\", line 51, in _executor_map\r\n return list(tqdm_class(ex.map(fn, *iterables, chunksize=chunksize), **kwargs))\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/tqdm/std.py\", line 1178, in __iter__\r\n for obj in iterable:\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 619, in result_iterator\r\n yield fs.pop().result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 444, in result\r\n return self.__get_result()\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/_base.py\", line 389, in __get_result\r\n raise self._exception\r\n File \"/cluster/work/cotterell/tamariucai/miniconda3/envs/torch-multimodal/lib/python3.8/concurrent/futures/thread.py\", line 57, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/cluster/home/tamariucai/.local/lib/python3.8/site-packages/huggingface_hub/_commit_api.py\", line 346, in _inner_upload_lfs_object\r\n raise RuntimeError(f\"Error while uploading '{operation.path_in_repo}' to the Hub.\") from exc\r\nRuntimeError: Error while uploading 'data/train-00358-of-01453-22a5cc8b3eb12be3.parquet' to the Hub.\r\n```\r\nLocal saves do work, however.",
"Hmmm that was probably an intermitent bug, you can resume the upload by re-running push_to_hub",
"Leaving this other error here for the record, which occurs when I load the +700GB dataset from the hub with shard sizes of 500MB:\r\n\r\n```\r\n Traceback (most recent call last): \r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py\", line 1860, in _prepare_split_single\r\n for _, table in generator:\r\n File \"/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py\", line 69, in _generate_tables\r\n for batch_idx, record_batch in enumerate(\r\n File \"pyarrow/_parquet.pyx\", line 1323, in iter_batches\r\n File \"pyarrow/error.pxi\", line 115, in pyarrow.lib.check_status\r\nOSError: Corrupt snappy compressed data.\r\n```\r\nI will probably switch back to the local big dataset or shrink it."
] | 2023-04-02T14:42:44 | 2023-04-11T09:17:54 | 2023-04-10T08:04:04 | NONE | null | ### Describe the bug
Calling `datasets.load_dataset` to load the (publicly available) dataset `theodor1289/wit` fails with `pyarrow.lib.ArrowNotImplementedError`.
### Steps to reproduce the bug
Steps to reproduce this behavior:
1. `!pip install datasets`
2. `!huggingface-cli login`
3. This step will throw the error (it might take a while as the dataset has ~170GB):
```python
from datasets import load_dataset
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
```
Stack trace:
```
(torch-multimodal) bash-4.2$ python test.py
Downloading and preparing dataset None/None to /cluster/work/cotterell/tamariucai/HuggingfaceDatasets/theodor1289___parquet/theodor1289--wit-7a3e984414a86a0f/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 491.68it/s]
Extracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 16.93it/s]
Traceback (most recent call last):
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/cluster/work/cotterell/tamariucai/multimodal-mirror/examples/test.py", line 2, in <module>
dataset = load_dataset("theodor1289/wit", "train", use_auth_token=True)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/cluster/home/tamariucai/.local/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
The dataset is loaded into the variable `dataset`.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.10.4
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5695/timeline | null | completed | null | null | false |
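The mitigation discussed in the comments of issue 5695 above (re-uploading the dataset with smaller parquet shards, optionally with a smaller writer batch size so row groups stay small) could look roughly like the sketch below. The local path, target repo id, and the batch-size value are assumptions for illustration, not values taken from the issue.

```python
import datasets
from datasets import load_from_disk

# "/path/to/local/wit" and "user/wit-reupload" are hypothetical placeholders.
ds = load_from_disk("/path/to/local/wit")  # the locally saved copy that loads fine

# Optionally lower the writer batch size (number of rows written per batch /
# row group), as suggested in the issue comments; 500 here is an assumption.
datasets.config.DEFAULT_MAX_BATCH_SIZE = 500

# Smaller shards keep each uploaded parquet file, and its row groups, small,
# which is the workaround suggested for the ArrowNotImplementedError on reload.
ds.push_to_hub("user/wit-reupload", max_shard_size="500MB")
```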
https://api.github.com/repos/huggingface/datasets/issues/5694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5694/comments | https://api.github.com/repos/huggingface/datasets/issues/5694/events | https://github.com/huggingface/datasets/issues/5694 | 1,650,467,793 | I_kwDODunzps5iYCPR | 5,694 | Dataset configuration | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"Originally we also though about adding it to the YAML part of the README.md:\r\n\r\n```yaml\r\nbuilder_config:\r\n data_dir: data\r\n data_files:\r\n - split: train\r\n pattern: \"train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*\"\r\n```\r\n\r\nHaving it in the README.md could make it easier to modify it in the UI on HF, and for validation on commit",
"From internal discussions we agreed to go with the YAML approach, since it's the one that seems more appropriate to be modified by a human on the Hub or locally (while JSON e.g. for models are usually created programmatically).",
"Current format:\r\n```yaml\r\nbuilder_config:\r\n data_files:\r\n - split: train\r\n pattern: data/train-*\r\n```"
] | 2023-04-01T13:08:05 | 2023-04-04T14:54:37 | null | MEMBER | null | Following discussions from https://github.com/huggingface/datasets/pull/5331
We could have something like `config.json` to define the configuration of a dataset.
```json
{
"data_dir": "data"
"data_files": {
"train": "train-[0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"
}
}
```
We could also support a list of configs, each with a 'config_name' field.
The alternative was to use YAML in the README.md.
I think it could also support a `dataset_type` field to specify which dataset builder class to use, and the other parameters would be the builder's parameters. Some parameters, like `data_files` and `data_dir`, exist for all builders, but some are builder-specific, like `sep` for CSV.
This format would be used in `push_to_hub` to be able to push multiple configs.
cc @huggingface/datasets
EDIT: actually we're going for the YAML approach in README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5694/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5694/timeline | null | null | null | null | false |
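A rough sketch of the multi-config workflow that the configuration proposal in issue 5694 above is meant to enable: pushing two named configs of the same dataset and loading one back by name. The `config_name` argument to `push_to_hub` is part of the proposal rather than a confirmed API of the `datasets` version discussed here, and the repo id is a hypothetical placeholder.

```python
from datasets import Dataset, load_dataset

en = Dataset.from_dict({"text": ["hello"]})
fr = Dataset.from_dict({"text": ["bonjour"]})

# Each push would record its own entry (data_files pattern, etc.) in the
# dataset's configuration, e.g. the YAML `builder_config` shown above.
# `config_name` is assumed here as the interface the proposal targets.
en.push_to_hub("user/my-dataset", config_name="en")
fr.push_to_hub("user/my-dataset", config_name="fr")

# Loading a specific config back by name.
ds = load_dataset("user/my-dataset", "fr", split="train")
```

The point of the proposal is that this per-config metadata lives in one place (JSON or YAML) rather than being re-specified at every `load_dataset` call.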
https://api.github.com/repos/huggingface/datasets/issues/5693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5693/comments | https://api.github.com/repos/huggingface/datasets/issues/5693/events | https://github.com/huggingface/datasets/pull/5693 | 1,649,934,749 | PR_kwDODunzps5NYdPS | 5,693 | [docs] Split pattern search order | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007841 / 0.011353 (-0.003512) | 0.005640 / 0.011008 (-0.005368) | 0.096465 / 0.038508 (0.057957) | 0.036476 / 0.023109 (0.013367) | 0.306431 / 0.275898 (0.030533) | 0.339545 / 0.323480 (0.016065) | 0.006064 / 0.007986 (-0.001922) | 0.004404 / 0.004328 (0.000076) | 0.073130 / 0.004250 (0.068879) | 0.052765 / 0.037052 (0.015713) | 0.309895 / 0.258489 (0.051406) | 0.354037 / 0.293841 (0.060196) | 0.037127 / 0.128546 (-0.091420) | 0.012387 / 0.075646 (-0.063260) | 0.333503 / 0.419271 (-0.085769) | 0.059799 / 0.043533 (0.016266) | 0.305496 / 0.255139 (0.050358) | 0.324122 / 0.283200 (0.040922) | 0.107007 / 0.141683 (-0.034676) | 1.416743 / 1.452155 (-0.035411) | 1.520772 / 1.492716 (0.028055) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261233 / 0.018006 (0.243227) | 0.573806 / 0.000490 (0.573316) | 0.000390 / 0.000200 (0.000190) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027672 / 0.037411 (-0.009740) | 0.112803 / 0.014526 (0.098278) | 0.121085 / 0.176557 (-0.055471) | 0.176056 / 0.737135 (-0.561080) | 0.127171 / 0.296338 (-0.169167) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414756 / 0.215209 (0.199547) | 4.148743 / 2.077655 (2.071088) | 1.883940 / 1.504120 (0.379820) | 1.698771 / 1.541195 (0.157576) | 1.811926 / 1.468490 
(0.343436) | 0.708293 / 4.584777 (-3.876484) | 3.780456 / 3.745712 (0.034744) | 2.098556 / 5.269862 (-3.171306) | 1.323512 / 4.565676 (-3.242164) | 0.086253 / 0.424275 (-0.338022) | 0.012587 / 0.007607 (0.004980) | 0.514824 / 0.226044 (0.288779) | 5.157415 / 2.268929 (2.888487) | 2.382519 / 55.444624 (-53.062105) | 2.014539 / 6.876477 (-4.861938) | 2.215239 / 2.142072 (0.073166) | 0.847178 / 4.805227 (-3.958049) | 0.170053 / 6.500664 (-6.330611) | 0.066461 / 0.075469 (-0.009008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.199056 / 1.841788 (-0.642732) | 15.244999 / 8.074308 (7.170691) | 14.661593 / 10.191392 (4.470201) | 0.168855 / 0.680424 (-0.511569) | 0.017889 / 0.534201 (-0.516312) | 0.424961 / 0.579283 (-0.154322) | 0.428632 / 0.434364 (-0.005732) | 0.502680 / 0.540337 (-0.037658) | 0.597827 / 1.386936 (-0.789109) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007749 / 0.011353 (-0.003604) | 0.005527 / 0.011008 (-0.005482) | 0.074774 / 0.038508 (0.036266) | 0.035367 / 0.023109 (0.012258) | 0.340594 / 0.275898 (0.064696) | 0.373970 / 0.323480 (0.050490) | 0.006094 / 0.007986 (-0.001892) | 0.004428 / 0.004328 (0.000100) | 0.074120 / 0.004250 (0.069869) | 0.054852 / 0.037052 (0.017800) | 0.357173 / 0.258489 (0.098684) | 0.388877 / 0.293841 (0.095036) | 0.037002 / 0.128546 (-0.091545) | 0.012337 / 0.075646 (-0.063309) | 0.086962 / 0.419271 (-0.332310) | 0.050370 / 0.043533 (0.006837) | 0.342989 / 0.255139 (0.087850) | 0.358065 / 0.283200 (0.074865) | 0.111063 / 0.141683 (-0.030620) | 1.516704 / 1.452155 (0.064549) | 1.634359 / 1.492716 (0.141643) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.261493 / 0.018006 (0.243487) | 0.566288 / 0.000490 (0.565799) | 0.000439 / 0.000200 (0.000239) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030426 / 0.037411 (-0.006985) | 0.114606 / 0.014526 (0.100080) | 0.126134 / 0.176557 (-0.050423) | 0.175324 / 0.737135 (-0.561812) | 0.132766 / 0.296338 (-0.163573) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426785 / 0.215209 (0.211576) | 4.243555 / 2.077655 (2.165900) | 2.089631 / 1.504120 (0.585511) | 1.994562 / 1.541195 (0.453367) | 2.140284 / 1.468490 (0.671794) | 0.698645 / 4.584777 (-3.886132) | 3.807471 / 3.745712 (0.061759) | 3.275343 / 5.269862 (-1.994519) | 1.796756 / 4.565676 (-2.768921) | 0.085986 / 0.424275 (-0.338289) | 0.012213 / 0.007607 (0.004606) | 0.536815 / 0.226044 (0.310771) | 5.344611 / 2.268929 (3.075683) | 2.498578 / 55.444624 (-52.946047) | 2.153260 / 6.876477 (-4.723217) | 2.251310 / 2.142072 (0.109237) | 0.839104 / 4.805227 (-3.966123) | 0.169639 / 6.500664 (-6.331025) | 0.065880 / 0.075469 (-0.009589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268610 / 1.841788 (-0.573178) | 15.624915 / 8.074308 (7.550606) | 15.163684 / 10.191392 (4.972292) | 0.172992 / 0.680424 (-0.507432) | 0.018154 / 0.534201 (-0.516047) | 0.440485 / 0.579283 (-0.138798) | 0.431949 / 0.434364 (-0.002415) | 0.547935 / 0.540337 (0.007597) | 0.662442 / 1.386936 (-0.724494) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#5c8a6ba43c4aaa0ca0665d8dadd87ef33e28e8e4 \"CML watermark\")\n"
] | 2023-03-31T19:51:38 | 2023-04-03T18:43:30 | 2023-04-03T18:29:58 | MEMBER | null | This PR addresses #5681 about the order of split patterns 🤗 Datasets searches for when generating dataset splits. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5693/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5693",
"html_url": "https://github.com/huggingface/datasets/pull/5693",
"diff_url": "https://github.com/huggingface/datasets/pull/5693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5693.patch",
"merged_at": "2023-04-03T18:29:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5692/comments | https://api.github.com/repos/huggingface/datasets/issues/5692/events | https://github.com/huggingface/datasets/issues/5692 | 1,649,818,644 | I_kwDODunzps5iVjwU | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | {
"login": "cyanic-selkie",
"id": 32219669,
"node_id": "MDQ6VXNlcjMyMjE5NjY5",
"avatar_url": "https://avatars.githubusercontent.com/u/32219669?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cyanic-selkie",
"html_url": "https://github.com/cyanic-selkie",
"followers_url": "https://api.github.com/users/cyanic-selkie/followers",
"following_url": "https://api.github.com/users/cyanic-selkie/following{/other_user}",
"gists_url": "https://api.github.com/users/cyanic-selkie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cyanic-selkie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cyanic-selkie/subscriptions",
"organizations_url": "https://api.github.com/users/cyanic-selkie/orgs",
"repos_url": "https://api.github.com/users/cyanic-selkie/repos",
"events_url": "https://api.github.com/users/cyanic-selkie/events{/privacy}",
"received_events_url": "https://api.github.com/users/cyanic-selkie/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n"
] | 2023-03-31T18:19:40 | 2023-04-04T14:38:30 | null | NONE | null | ### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en), which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module>
(dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding)
File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en")
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset
datasets = map_nested(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
mapped = [
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
return function(data_struct)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset
ds = self._as_dataset(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files
pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables
return cls.from_blocks(blocks)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks
table = cls._concat_blocks(blocks, axis=0)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks
return pa.concat_tables(pa_tables, promote=True)
File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>>
```
This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor.
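For reference, here is a minimal pyarrow sketch (with a made-up column name and values, not the actual dataset) of how two shards whose nested list types disagree can produce this kind of merge error, and one way to align them manually by casting:
```python
import pyarrow as pa

# Two tiny tables whose "anchors" column differs only in the nested value field
# (field name and nullability), loosely mirroring the mismatch in the traceback above.
t1 = pa.table({"anchors": pa.array([[1, 2]], type=pa.list_(pa.field("element", pa.uint32(), nullable=False)))})
t2 = pa.table({"anchors": pa.array([[3]], type=pa.list_(pa.uint32()))})

try:
    # Schema unification is what the promoting concatenation relies on; nested type
    # differences can surface as "Unable to merge: Field ... has incompatible types".
    pa.unify_schemas([t1.schema, t2.schema])
except pa.ArrowInvalid as err:
    print(err)

# Casting one table to the other's schema aligns the nested types so the merge succeeds.
merged = pa.concat_tables([t1.cast(t2.schema), t2])
print(merged.schema)
```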
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en", split="train")
```
### Expected behavior
The dataset should load normally without any errors.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5692/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5691/comments | https://api.github.com/repos/huggingface/datasets/issues/5691/events | https://github.com/huggingface/datasets/pull/5691 | 1,649,737,526 | PR_kwDODunzps5NX08d | 5,691 | [docs] Compress data files | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004935 / 0.011008 (-0.006073) | 0.096796 / 0.038508 (0.058288) | 0.032485 / 0.023109 (0.009376) | 0.335342 / 0.275898 (0.059444) | 0.354999 / 0.323480 (0.031519) | 0.005467 / 0.007986 (-0.002519) | 0.005267 / 0.004328 (0.000939) | 0.073988 / 0.004250 (0.069737) | 0.044402 / 0.037052 (0.007350) | 0.331156 / 0.258489 (0.072666) | 0.363595 / 0.293841 (0.069754) | 0.035301 / 0.128546 (-0.093245) | 0.012141 / 0.075646 (-0.063505) | 0.333164 / 0.419271 (-0.086107) | 0.048818 / 0.043533 (0.005286) | 0.331458 / 0.255139 (0.076319) | 0.343567 / 0.283200 (0.060367) | 0.094963 / 0.141683 (-0.046720) | 1.444383 / 1.452155 (-0.007772) | 1.520093 / 1.492716 (0.027377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212311 / 0.018006 (0.194305) | 0.436413 / 0.000490 (0.435923) | 0.000333 / 0.000200 (0.000133) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026670 / 0.037411 (-0.010742) | 0.105774 / 0.014526 (0.091248) | 0.115796 / 0.176557 (-0.060760) | 0.176504 / 0.737135 (-0.560631) | 0.121883 / 0.296338 (-0.174456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400783 / 0.215209 (0.185574) | 4.006608 / 2.077655 (1.928953) | 1.817659 / 1.504120 (0.313539) | 1.619777 / 1.541195 (0.078582) | 1.684247 / 1.468490 
(0.215757) | 0.701116 / 4.584777 (-3.883661) | 3.684056 / 3.745712 (-0.061656) | 2.065258 / 5.269862 (-3.204603) | 1.425460 / 4.565676 (-3.140217) | 0.084519 / 0.424275 (-0.339757) | 0.011949 / 0.007607 (0.004342) | 0.496793 / 0.226044 (0.270749) | 4.978864 / 2.268929 (2.709935) | 2.303388 / 55.444624 (-53.141237) | 1.978341 / 6.876477 (-4.898135) | 2.055744 / 2.142072 (-0.086329) | 0.832022 / 4.805227 (-3.973206) | 0.164715 / 6.500664 (-6.335949) | 0.062701 / 0.075469 (-0.012768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178723 / 1.841788 (-0.663065) | 14.583986 / 8.074308 (6.509678) | 14.189402 / 10.191392 (3.998010) | 0.183867 / 0.680424 (-0.496557) | 0.017565 / 0.534201 (-0.516636) | 0.421345 / 0.579283 (-0.157938) | 0.420235 / 0.434364 (-0.014129) | 0.496758 / 0.540337 (-0.043580) | 0.591558 / 1.386936 (-0.795378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.004996 / 0.011008 (-0.006012) | 0.073345 / 0.038508 (0.034836) | 0.033077 / 0.023109 (0.009968) | 0.335954 / 0.275898 (0.060056) | 0.372616 / 0.323480 (0.049136) | 0.005678 / 0.007986 (-0.002308) | 0.003906 / 0.004328 (-0.000423) | 0.072841 / 0.004250 (0.068591) | 0.046829 / 0.037052 (0.009777) | 0.335177 / 0.258489 (0.076688) | 0.382862 / 0.293841 (0.089021) | 0.038406 / 0.128546 (-0.090141) | 0.012110 / 0.075646 (-0.063536) | 0.085796 / 0.419271 (-0.333476) | 0.049896 / 0.043533 (0.006363) | 0.338232 / 0.255139 (0.083093) | 0.361054 / 0.283200 (0.077855) | 0.103171 / 0.141683 (-0.038512) | 1.556692 / 1.452155 (0.104538) | 1.540023 / 1.492716 (0.047306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223705 / 0.018006 (0.205699) | 0.438771 / 0.000490 (0.438282) | 0.002838 / 0.000200 (0.002639) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028423 / 0.037411 (-0.008988) | 0.110560 / 0.014526 (0.096035) | 0.121629 / 0.176557 (-0.054928) | 0.173638 / 0.737135 (-0.563498) | 0.127062 / 0.296338 (-0.169277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425806 / 0.215209 (0.210597) | 4.251051 / 2.077655 (2.173397) | 2.059735 / 1.504120 (0.555615) | 1.864886 / 1.541195 (0.323692) | 1.941553 / 1.468490 (0.473063) | 0.700084 / 4.584777 (-3.884693) | 3.753150 / 3.745712 (0.007438) | 3.218606 / 5.269862 (-2.051256) | 1.439648 / 4.565676 (-3.126028) | 0.085239 / 0.424275 (-0.339037) | 0.012026 / 0.007607 (0.004419) | 0.521564 / 0.226044 (0.295520) | 5.217902 / 2.268929 (2.948973) | 2.557831 / 55.444624 (-52.886793) | 2.240223 / 6.876477 (-4.636254) | 2.364664 / 2.142072 (0.222591) | 0.825884 / 4.805227 (-3.979343) | 0.167800 / 6.500664 (-6.332864) | 0.063552 / 0.075469 (-0.011917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255532 / 1.841788 (-0.586256) | 14.747783 / 8.074308 (6.673475) | 14.352263 / 10.191392 (4.160871) | 0.143659 / 0.680424 (-0.536765) | 0.017517 / 0.534201 (-0.516684) | 0.419863 / 0.579283 (-0.159421) | 0.416674 / 0.434364 (-0.017690) | 0.485694 / 0.540337 (-0.054643) | 0.584810 / 1.386936 (-0.802126) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#61db0e9c936bc67c18b37b0960e2f0bb1f8ffdcd \"CML watermark\")\n"
] | 2023-03-31T17:17:26 | 2023-04-19T13:37:32 | 2023-04-19T07:25:58 | MEMBER | null | This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5691/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5691",
"html_url": "https://github.com/huggingface/datasets/pull/5691",
"diff_url": "https://github.com/huggingface/datasets/pull/5691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5691.patch",
"merged_at": "2023-04-19T07:25:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5689/comments | https://api.github.com/repos/huggingface/datasets/issues/5689/events | https://github.com/huggingface/datasets/pull/5689 | 1,648,956,349 | PR_kwDODunzps5NVMuI | 5,689 | Support streaming Beam datasets from HF GCS preprocessed data | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"wikipedia\", \"20220301.en\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\nOut[2]: \r\n{'id': '12',\r\n 'url': 'https://en.wikipedia.org/wiki/Anarchism',\r\n 'title': 'Anarchism',\r\n 'text': 'Anarchism is a political philosophy and movement that is sceptical of authority and rejects all involuntary, coercive forms of hierarchy. Anarchism calls for the abolition of the state, which it holds to be unnecessary, undesirable, and harmful. As a historically left-wing movement, placed on the farthest left of the political spectrum, it is usually described alongside communalism and libertarian Marxism as the libertarian wing (libertarian socialism) of the socialist movement,...}\r\n```",
"I love your example 🏴🅰️",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007859 / 0.011353 (-0.003493) | 0.005129 / 0.011008 (-0.005879) | 0.098070 / 0.038508 (0.059562) | 0.036500 / 0.023109 (0.013391) | 0.311575 / 0.275898 (0.035677) | 0.338351 / 0.323480 (0.014872) | 0.005962 / 0.007986 (-0.002024) | 0.004060 / 0.004328 (-0.000268) | 0.072970 / 0.004250 (0.068719) | 0.049289 / 0.037052 (0.012237) | 0.310303 / 0.258489 (0.051814) | 0.347449 / 0.293841 (0.053608) | 0.046912 / 0.128546 (-0.081634) | 0.011952 / 0.075646 (-0.063694) | 0.333600 / 0.419271 (-0.085671) | 0.052700 / 0.043533 (0.009167) | 0.325486 / 0.255139 (0.070347) | 0.326920 / 0.283200 (0.043720) | 0.107683 / 0.141683 (-0.034000) | 1.416679 / 1.452155 (-0.035476) | 1.502418 / 1.492716 (0.009702) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216520 / 0.018006 (0.198514) | 0.448450 / 0.000490 (0.447960) | 0.004213 / 0.000200 (0.004013) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027081 / 0.037411 (-0.010331) | 0.110989 / 0.014526 (0.096463) | 0.116087 / 0.176557 (-0.060470) | 0.173771 / 0.737135 (-0.563364) | 0.121240 / 0.296338 (-0.175099) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399938 / 0.215209 (0.184729) | 4.017665 / 2.077655 (1.940010) | 1.782327 / 1.504120 (0.278207) | 1.612955 / 1.541195 (0.071761) | 1.698839 / 1.468490 
(0.230349) | 0.706702 / 4.584777 (-3.878075) | 4.533425 / 3.745712 (0.787713) | 2.102611 / 5.269862 (-3.167250) | 1.461429 / 4.565676 (-3.104248) | 0.085719 / 0.424275 (-0.338556) | 0.012104 / 0.007607 (0.004497) | 0.507397 / 0.226044 (0.281352) | 5.061572 / 2.268929 (2.792643) | 2.272106 / 55.444624 (-53.172518) | 1.935575 / 6.876477 (-4.940901) | 2.102541 / 2.142072 (-0.039532) | 0.838395 / 4.805227 (-3.966832) | 0.168573 / 6.500664 (-6.332091) | 0.064234 / 0.075469 (-0.011235) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.190077 / 1.841788 (-0.651710) | 15.765587 / 8.074308 (7.691279) | 14.694626 / 10.191392 (4.503234) | 0.142912 / 0.680424 (-0.537512) | 0.017669 / 0.534201 (-0.516532) | 0.421502 / 0.579283 (-0.157781) | 0.452732 / 0.434364 (0.018368) | 0.497480 / 0.540337 (-0.042857) | 0.586310 / 1.386936 (-0.800626) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007629 / 0.011353 (-0.003724) | 0.005330 / 0.011008 (-0.005679) | 0.076366 / 0.038508 (0.037858) | 0.034703 / 0.023109 (0.011593) | 0.356300 / 0.275898 (0.080402) | 0.392909 / 0.323480 (0.069429) | 0.005959 / 0.007986 (-0.002026) | 0.004140 / 0.004328 (-0.000188) | 0.075289 / 0.004250 (0.071039) | 0.047880 / 0.037052 (0.010828) | 0.357289 / 0.258489 (0.098800) | 0.404554 / 0.293841 (0.110714) | 0.037182 / 0.128546 (-0.091365) | 0.012266 / 0.075646 (-0.063380) | 0.088554 / 0.419271 (-0.330718) | 0.049698 / 0.043533 (0.006165) | 0.353453 / 0.255139 (0.098314) | 0.373252 / 0.283200 (0.090052) | 0.101892 / 0.141683 (-0.039791) | 1.481534 / 1.452155 (0.029380) | 1.553818 / 1.492716 (0.061102) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229891 / 0.018006 (0.211884) | 0.452444 / 0.000490 (0.451954) | 0.000434 / 0.000200 (0.000234) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030170 / 0.037411 (-0.007241) | 0.115097 / 0.014526 (0.100571) | 0.122094 / 0.176557 (-0.054463) | 0.171352 / 0.737135 (-0.565784) | 0.128441 / 0.296338 (-0.167898) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428347 / 0.215209 (0.213138) | 4.266243 / 2.077655 (2.188588) | 2.148327 / 1.504120 (0.644207) | 1.874141 / 1.541195 (0.332946) | 1.968737 / 1.468490 (0.500246) | 0.715320 / 4.584777 (-3.869457) | 4.166097 / 3.745712 (0.420384) | 2.169550 / 5.269862 (-3.100312) | 1.377441 / 4.565676 (-3.188236) | 0.086376 / 0.424275 (-0.337899) | 0.012018 / 0.007607 (0.004411) | 0.517433 / 0.226044 (0.291388) | 5.167327 / 2.268929 (2.898398) | 2.545822 / 55.444624 (-52.898803) | 2.241726 / 6.876477 (-4.634751) | 2.327220 / 2.142072 (0.185147) | 0.841618 / 4.805227 (-3.963609) | 0.169473 / 6.500664 (-6.331191) | 0.065505 / 0.075469 (-0.009964) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270476 / 1.841788 (-0.571312) | 17.049885 / 8.074308 (8.975577) | 14.847615 / 10.191392 (4.656223) | 0.168671 / 0.680424 (-0.511753) | 0.017564 / 0.534201 (-0.516637) | 0.424780 / 0.579283 (-0.154503) | 0.517392 / 0.434364 (0.083028) | 0.561197 / 0.540337 (0.020859) | 0.697792 / 1.386936 (-0.689144) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ce06edf0afb70027ffbd3c2ddec5d28037e9bd31 \"CML watermark\")\n"
] | 2023-03-31T08:44:24 | 2023-04-12T05:57:55 | 2023-04-12T05:50:31 | MEMBER | null | This PR implements streaming Apache Beam datasets that are already preprocessed by us and stored in HF Google Cloud Storage:
- natural_questions
- wiki40b
- wikipedia
This is done by streaming from the prepared Arrow files in HF Google Cloud Storage.
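For illustration, a minimal usage sketch of what this enables (assuming the already preprocessed `20220301.en` config, as in the comment above):
```python
from datasets import load_dataset

# Stream the preprocessed English Wikipedia config straight from the prepared Arrow files,
# without downloading and preparing the full dataset locally.
ds = load_dataset("wikipedia", "20220301.en", split="train", streaming=True)
print(next(iter(ds))["title"])
```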
This will fix their corresponding dataset viewers. Related to:
- https://github.com/huggingface/datasets-server/pull/988#discussion_r1150767138
Related to:
- https://huggingface.co/datasets/natural_questions/discussions/4
- https://huggingface.co/datasets/wiki40b/discussions/2
- https://huggingface.co/datasets/wikipedia/discussions/9
CC: @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5689/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5689/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5689",
"html_url": "https://github.com/huggingface/datasets/pull/5689",
"diff_url": "https://github.com/huggingface/datasets/pull/5689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5689.patch",
"merged_at": "2023-04-12T05:50:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5690/comments | https://api.github.com/repos/huggingface/datasets/issues/5690/events | https://github.com/huggingface/datasets/issues/5690 | 1,649,289,883 | I_kwDODunzps5iTiqb | 5,690 | raise AttributeError(f"No {package_name} attribute {name}") AttributeError: No huggingface_hub attribute hf_api | {
"login": "wccccp",
"id": 55964850,
"node_id": "MDQ6VXNlcjU1OTY0ODUw",
"avatar_url": "https://avatars.githubusercontent.com/u/55964850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wccccp",
"html_url": "https://github.com/wccccp",
"followers_url": "https://api.github.com/users/wccccp/followers",
"following_url": "https://api.github.com/users/wccccp/following{/other_user}",
"gists_url": "https://api.github.com/users/wccccp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wccccp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wccccp/subscriptions",
"organizations_url": "https://api.github.com/users/wccccp/orgs",
"repos_url": "https://api.github.com/users/wccccp/repos",
"events_url": "https://api.github.com/users/wccccp/events{/privacy}",
"received_events_url": "https://api.github.com/users/wccccp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @wccccp, thanks for reporting. \r\nThat's weird since `huggingface_hub` _has_ a module called `hf_api` and you are using a recent version of it. \r\n\r\nWhich version of `datasets` are you using? And is it a bug that you experienced only recently? (cc @lhoestq can it be somehow related to the recent release of `datasets`?)\r\n\r\n~@wccccp what I can suggest you is to uninstall and reinstall completely huggingface_hub and datasets? My first guess is that there is a discrepancy somewhere in your setup 😕~",
"@wccccp Actually I have also been able to reproduce the error so it's not an issue with your setup.\r\n\r\n@huggingface/datasets I found this issue quite weird. Is this a module that is not used very often?\r\nThe problematic line is [this one](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L476) where `huggingface_hub.hf_api.DatasetInfo` is used. `huggingface_hub` is imported [here](https://github.com/huggingface/datasets/blame/c33e8ce68b5000988bf6b2e4bca27ffaa469acea/src/datasets/data_files.py#L6) as `import huggingface_hub`. However since modules are lazy-loaded in `hfh` you need to explicitly import them (i.e. `import huggingface_hub.hf_api`).\r\n\r\nWhat's weird is that nothing has changed for months. Datasets code seems that it didn't change for 2 years when I git-blame this part. And lazy-loading was introduced 1 year ago in `huggingface_hub`. Could it be that `data_files.py` is a file almost never used?\r\n",
"For context, I tried to run `import huggingface_hub; huggingface_hub.hf_api.DatasetInfo` in the terminal with different versions of `hfh` and I need to go back to `huggingface_hub==0.7.0` to make it work (latest is 0.13.3).",
"Before the error happens at line 120 in `data_files.py`, `datasets.filesystems.hffilesystem` is imported at the top of `data_files.py` and this file does `from huggingface_hub.hf_api import DatasetInfo` - so `huggingface_hub.hf_api` is imported. Not sure how the error could happen, what version of `datasets` are you using @wccccp ?",
"Closing due to inactivity."
] | 2023-03-31T08:22:22 | 2023-07-21T14:21:57 | 2023-07-21T14:21:57 | NONE | null | ### Describe the bug
rta.sh
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
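A possible workaround sketch, based on the lazy-loading behaviour of `huggingface_hub` discussed in the comments above (an assumption, not a confirmed fix):
```python
# Workaround sketch (assumption): importing the submodule explicitly makes
# `huggingface_hub.hf_api` resolvable as an attribute despite lazy loading.
import huggingface_hub.hf_api  # noqa: F401

import datasets  # the import that previously raised the AttributeError
```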
### Reproduction
_No response_
### Logs
```shell
Traceback (most recent call last):
File "run.py", line 7, in <module>
import datasets
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/builder.py", line 44, in <module>
from .data_files import DataFilesDict, _sanitize_patterns
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/datasets/data_files.py", line 120, in <module>
dataset_info: huggingface_hub.hf_api.DatasetInfo,
File "/home/appuser/miniconda3/envs/pt2/lib/python3.8/site-packages/huggingface_hub/__init__.py", line 290, in __getattr__
raise AttributeError(f"No {package_name} attribute {name}")
AttributeError: No huggingface_hub attribute hf_api
```
### System info
```shell
- huggingface_hub version: 0.13.2
- Platform: Linux-5.4.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/appuser/.cache/huggingface/token
- Has saved token ?: False
- Configured git credential helpers:
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.7.1
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.3.0
- hf_transfer: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/appuser/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/appuser/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/appuser/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5690/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5690/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5688/comments | https://api.github.com/repos/huggingface/datasets/issues/5688/events | https://github.com/huggingface/datasets/issues/5688 | 1,648,463,504 | I_kwDODunzps5iQY6Q | 5,688 | Wikipedia download_and_prepare for GCS | {
"login": "adrianfagerland",
"id": 25522531,
"node_id": "MDQ6VXNlcjI1NTIyNTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/25522531?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adrianfagerland",
"html_url": "https://github.com/adrianfagerland",
"followers_url": "https://api.github.com/users/adrianfagerland/followers",
"following_url": "https://api.github.com/users/adrianfagerland/following{/other_user}",
"gists_url": "https://api.github.com/users/adrianfagerland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adrianfagerland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adrianfagerland/subscriptions",
"organizations_url": "https://api.github.com/users/adrianfagerland/orgs",
"repos_url": "https://api.github.com/users/adrianfagerland/repos",
"events_url": "https://api.github.com/users/adrianfagerland/events{/privacy}",
"received_events_url": "https://api.github.com/users/adrianfagerland/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @adrianfagerland, thanks for reporting.\r\n\r\nPlease note that \"wikipedia\" is a special dataset, with an Apache Beam builder: https://beam.apache.org/\r\nYou can find more info about Beam datasets in our docs: https://huggingface.co/docs/datasets/beam\r\n\r\nIt was implemented to be run in parallel processing, using one of the distributed back-ends supported by Apache Beam: https://beam.apache.org/get-started/beam-overview/#apache-beam-pipeline-runners\r\n\r\nThat is, you are trying to process the source wikipedia data on your machine (not distributed) when passing `beam_runner=\"DirectRunner\"`.\r\n\r\nAs documented in the wikipedia dataset page (https://huggingface.co/datasets/wikipedia):\r\n\r\n Some subsets of Wikipedia have already been processed by HuggingFace, and you can load them just with:\r\n \r\n from datasets import load_dataset\r\n \r\n load_dataset(\"wikipedia\", \"20220301.en\")\r\n\r\n The list of pre-processed subsets is:\r\n - \"20220301.de\"\r\n - \"20220301.en\"\r\n - \"20220301.fr\"\r\n - \"20220301.frr\"\r\n - \"20220301.it\"\r\n - \"20220301.simple\"\r\n\r\nTo download the available processed data (in Arrow format):\r\n```python\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(your_path)\r\n```",
"When running this using :\r\n```\r\nimport datasets\r\nfrom apache_beam.options.pipeline_options import PipelineOptions\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbeam_options = PipelineOptions(\r\n region=\"europe-west4\",\r\n project=\"tdt4310\",\r\n temp_location=output_dir+\"tmp/\")\r\n\r\n\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\", beam_runner=\"dataflow\", beam_options=beam_options)\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\nI now get this error:\r\n```\r\nraise FileNotFoundError(f\"Couldn't find file at {url}\")\r\nFileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json\r\nDownloading data files: 0%| | 0/1 [00:00<?, ?it/s]\r\n```\r\n\r\nI get the same error for this:\r\n```\r\nimport datasets\r\nfrom gcsfs import GCSFileSystem\r\n\r\nstorage_options = {\"project\":\"tdt4310\", \"token\":\"cloud\"}\r\nfs = GCSFileSystem(**storage_options)\r\n\r\noutput_dir = \"gcs://quiz_transformer/\"\r\nbuilder = datasets.load_dataset_builder(\"wikipedia\", \"20220301.en\")\r\nbuilder.download_and_prepare(\r\n output_dir, storage_options=storage_options, file_format=\"parquet\")\r\n```\r\n\r\n\r\n\r\n"
] | 2023-03-30T23:43:22 | 2023-03-31T13:31:32 | null | NONE | null | ### Describe the bug
I am unable to download the wikipedia dataset onto GCS.
When I run the script provided, the memory first gets eaten up, then it crashes.
I tried running this on a VM with 128GB RAM and all I got was two empty files: _data_builder.lock_, _data.incomplete/beam-temp-wikipedia-train-1ab2039acf3611ed87a9893475de0093_
I have troubleshot this for two straight days now, but I am just unable to get the dataset into storage.
### Steps to reproduce the bug
Run this and insert a path:
```
import datasets
builder = datasets.load_dataset_builder(
"wikipedia", language="en", date="20230320", beam_runner="DirectRunner")
builder.download_and_prepare({path}, file_format="parquet")
```
This is where the problem of it eating RAM occurs.
I have also tried several versions of this, based on the docs:
```
import gcsfs
import datasets
storage_options = {"project": "tdt4310", "token": "cloud"}
fs = gcsfs.GCSFileSystem(**storage_options)
output_dir = "gcs://wikipediadata/"
builder = datasets.load_dataset_builder(
"wikipedia", date="20230320", language="en", beam_runner="DirectRunner")
builder.download_and_prepare(
output_dir, storage_options=storage_options, file_format="parquet")
```
The error message that is received here is:
> ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path specified: gcs://wikipediadata/wikipedia-train [while running 'train/Save to parquet/Write/WriteImpl/InitializeWrite']
I have run `pip install apache-beam[gcp]`
### Expected behavior
The wikipedia data loaded into GCS
Everything worked when testing with a smaller demo dataset found somewhere in the docs
### Environment info
Newest published version of datasets. Python 3.9. Also tested with Python 3.7. 128GB RAM Google Cloud VM instance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5688/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5687/comments | https://api.github.com/repos/huggingface/datasets/issues/5687/events | https://github.com/huggingface/datasets/issues/5687 | 1,647,009,018 | I_kwDODunzps5iK1z6 | 5,687 | Document to compress data files before uploading | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Great idea!\r\n\r\nShould we also take this opportunity to include some audio/image file formats? Currently, it still reads very text heavy. Something like:\r\n\r\n> We support many text, audio, and image data extensions such as `.zip`, `.rar`, `.mp3`, and `.jpg` among many others. For data extensions like `.csv`, `.json`, `.jsonl`, and `txt`, we recommend compressing them before uploading to the Hub. These file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of supported file extensions.",
"Hi @stevhliu, thanks for your suggestion.\r\n\r\nI agree it is a good opportunity to mention that audio/image file formats are also supported.\r\n\r\nNit:\r\nI would not mention .zip, .rar after \"text, audio, and image data extensions\". Those are \"compression\" extensions and not \"text, audio, and image data extensions\".\r\n\r\nWhat about something similar to:\r\n> We support many text, audio, and image data extensions such as `.csv`, `.mp3`, and `.jpg` among many others. For text data extensions like `.csv`, `.json`, `.jsonl`, and `.txt`, we recommend compressing them before uploading to the Hub (to `.zip` or `.gz` file extension for example). \r\n>\r\n> Note that text file extensions are not tracked by Git LFS by default, and if they're too large, they will not be committed and uploaded. Take a look at the `.gitattributes` file in your repository for a complete list of tracked file extensions by default.\r\n\r\nNote that for compressions I have mentioned:\r\n- gz, to compress individual files\r\n- zip, to compress and archive multiple files; zip is preferred rather than tar because it supports streaming out of the box",
"Perfect, thanks for making the distinction between compression and data extensions!"
] | 2023-03-30T06:41:07 | 2023-04-19T07:25:59 | 2023-04-19T07:25:59 | MEMBER | null | In our docs to [Share a dataset to the Hub](https://huggingface.co/docs/datasets/upload_dataset), we tell users to upload their data files directly, like CSV, JSON, JSON-Lines, text,... However, these extensions are not tracked by Git LFS by default, as they are not in the `.gitattributes` file. Therefore, if they are too large, Git will fail to commit/upload them.
I think for those file extensions (.csv, .json, .jsonl, .txt), we should rather recommend **compressing** the data files (using ZIP, for example) before uploading them to the Hub; a minimal example is sketched below.
- Compressed files are tracked by Git LFS in our default `.gitattributes` file
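A minimal sketch of the kind of snippet we could document, using hypothetical file names (gzip for a single file; a ZIP archive works similarly for several files):
```python
import gzip
import shutil

# Compress a large CSV before uploading it to the Hub; .gz is tracked by Git LFS
# in the default .gitattributes, unlike plain .csv.
with open("train.csv", "rb") as src, gzip.open("train.csv.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# The compressed file can then typically be loaded directly, e.g.:
# load_dataset("csv", data_files="train.csv.gz")
```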
What do you think?
CC: @stevhliu
See related issue:
- https://huggingface.co/datasets/tcor0005/langchain-docs-400-chunksize/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5687/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5687/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5686/comments | https://api.github.com/repos/huggingface/datasets/issues/5686/events | https://github.com/huggingface/datasets/pull/5686 | 1,646,308,228 | PR_kwDODunzps5NMXdu | 5,686 | set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5686). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008460 / 0.011353 (-0.002893) | 0.006114 / 0.011008 (-0.004894) | 0.121496 / 0.038508 (0.082987) | 0.035030 / 0.023109 (0.011920) | 0.397778 / 0.275898 (0.121880) | 0.429020 / 0.323480 (0.105540) | 0.007811 / 0.007986 (-0.000174) | 0.006269 / 0.004328 (0.001940) | 0.098895 / 0.004250 (0.094645) | 0.045407 / 0.037052 (0.008355) | 0.413679 / 0.258489 (0.155189) | 0.437491 / 0.293841 (0.143650) | 0.053207 / 0.128546 (-0.075339) | 0.018471 / 0.075646 (-0.057175) | 0.414800 / 0.419271 (-0.004472) | 0.060864 / 0.043533 (0.017332) | 0.398501 / 0.255139 (0.143362) | 0.421142 / 0.283200 (0.137942) | 0.114908 / 0.141683 (-0.026775) | 1.678630 / 1.452155 (0.226475) | 1.782313 / 1.492716 (0.289596) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280783 / 0.018006 (0.262777) | 0.591573 / 0.000490 (0.591083) | 0.005797 / 0.000200 (0.005597) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030431 / 0.037411 (-0.006981) | 0.117342 / 0.014526 (0.102816) | 0.128456 / 0.176557 (-0.048101) | 0.198782 / 0.737135 (-0.538354) | 0.128501 / 0.296338 (-0.167838) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.603073 / 0.215209 (0.387864) | 6.101354 / 2.077655 (4.023699) | 2.527812 / 1.504120 (1.023692) | 2.101468 / 1.541195 (0.560273) | 2.092813 / 1.468490 
(0.624323) | 1.182150 / 4.584777 (-3.402627) | 5.389278 / 3.745712 (1.643566) | 5.041001 / 5.269862 (-0.228860) | 2.650581 / 4.565676 (-1.915095) | 0.138761 / 0.424275 (-0.285514) | 0.014209 / 0.007607 (0.006602) | 0.748596 / 0.226044 (0.522552) | 7.373937 / 2.268929 (5.105008) | 3.245882 / 55.444624 (-52.198742) | 2.523569 / 6.876477 (-4.352908) | 2.581343 / 2.142072 (0.439270) | 1.340436 / 4.805227 (-3.464791) | 0.241388 / 6.500664 (-6.259276) | 0.076634 / 0.075469 (0.001164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.480237 / 1.841788 (-0.361551) | 16.781338 / 8.074308 (8.707030) | 19.735028 / 10.191392 (9.543636) | 0.256872 / 0.680424 (-0.423551) | 0.029211 / 0.534201 (-0.504990) | 0.503292 / 0.579283 (-0.075991) | 0.584510 / 0.434364 (0.150146) | 0.580293 / 0.540337 (0.039955) | 0.678863 / 1.386936 (-0.708073) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009972 / 0.011353 (-0.001381) | 0.006107 / 0.011008 (-0.004902) | 0.096188 / 0.038508 (0.057680) | 0.033320 / 0.023109 (0.010210) | 0.420789 / 0.275898 (0.144891) | 0.460488 / 0.323480 (0.137008) | 0.006492 / 0.007986 (-0.001493) | 0.005325 / 0.004328 (0.000997) | 0.094974 / 0.004250 (0.090723) | 0.047708 / 0.037052 (0.010655) | 0.426689 / 0.258489 (0.168200) | 0.476440 / 0.293841 (0.182599) | 0.052776 / 0.128546 (-0.075770) | 0.018779 / 0.075646 (-0.056868) | 0.119598 / 0.419271 (-0.299673) | 0.061800 / 0.043533 (0.018267) | 0.421305 / 0.255139 (0.166166) | 0.441125 / 0.283200 (0.157925) | 0.114221 / 0.141683 (-0.027462) | 1.712681 / 1.452155 (0.260526) | 1.852316 / 1.492716 (0.359600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.272412 / 0.018006 (0.254405) | 0.583996 / 0.000490 (0.583506) | 0.000505 / 0.000200 (0.000305) | 0.000077 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029553 / 0.037411 (-0.007858) | 0.124921 / 0.014526 (0.110395) | 0.133338 / 0.176557 (-0.043218) | 0.193811 / 0.737135 (-0.543325) | 0.147973 / 0.296338 (-0.148365) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.595241 / 0.215209 (0.380032) | 6.012015 / 2.077655 (3.934360) | 2.611295 / 1.504120 (1.107175) | 2.290127 / 1.541195 (0.748932) | 2.300366 / 1.468490 (0.831876) | 1.197602 / 4.584777 (-3.387175) | 5.439064 / 3.745712 (1.693352) | 2.906088 / 5.269862 (-2.363773) | 1.919183 / 4.565676 (-2.646493) | 0.132166 / 0.424275 (-0.292109) | 0.014544 / 0.007607 (0.006937) | 0.726377 / 0.226044 (0.500333) | 7.361023 / 2.268929 (5.092094) | 3.289266 / 55.444624 (-52.155358) | 2.635570 / 6.876477 (-4.240907) | 2.595691 / 2.142072 (0.453619) | 1.329458 / 4.805227 (-3.475769) | 0.239419 / 6.500664 (-6.261245) | 0.076316 / 0.075469 (0.000847) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.547616 / 1.841788 (-0.294172) | 17.374315 / 8.074308 (9.300007) | 20.216275 / 10.191392 (10.024883) | 0.252102 / 0.680424 (-0.428322) | 0.027535 / 0.534201 (-0.506665) | 0.524618 / 0.579283 (-0.054666) | 0.596803 / 0.434364 (0.162439) | 0.652632 / 0.540337 (0.112294) | 0.762272 / 1.386936 (-0.624664) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c7d4b2f981f8cf639dcbd80f40a41aa5b1693c6 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008236 / 0.011353 (-0.003117) | 0.006186 / 0.011008 (-0.004822) | 0.117852 / 0.038508 (0.079344) | 0.034711 / 0.023109 (0.011602) | 0.447564 / 0.275898 (0.171666) | 0.438727 / 0.323480 (0.115247) | 0.006576 / 0.007986 (-0.001410) | 0.005903 / 0.004328 (0.001574) | 0.094309 / 0.004250 (0.090059) | 0.042760 / 0.037052 (0.005708) | 0.393269 / 0.258489 (0.134780) | 0.438061 / 0.293841 (0.144220) | 0.059029 / 0.128546 (-0.069517) | 0.020296 / 0.075646 (-0.055350) | 0.412057 / 0.419271 (-0.007215) | 0.059808 / 0.043533 (0.016275) | 0.407243 / 0.255139 (0.152104) | 0.414290 / 0.283200 (0.131090) | 0.107701 / 0.141683 (-0.033981) | 1.671522 / 1.452155 (0.219367) | 1.775055 / 1.492716 (0.282338) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275242 / 0.018006 (0.257236) | 0.599698 / 0.000490 (0.599208) | 0.001289 / 0.000200 (0.001089) | 0.000101 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029579 / 0.037411 (-0.007832) | 0.127249 / 0.014526 (0.112723) | 0.137431 / 0.176557 (-0.039126) | 0.220330 / 0.737135 (-0.516805) | 0.133540 / 0.296338 (-0.162798) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.571989 / 0.215209 (0.356780) | 5.931503 / 2.077655 (3.853848) | 2.526646 / 1.504120 (1.022527) | 2.189476 / 1.541195 (0.648281) | 2.151935 / 1.468490 
(0.683444) | 1.242440 / 4.584777 (-3.342337) | 5.599675 / 3.745712 (1.853963) | 3.242035 / 5.269862 (-2.027826) | 2.368361 / 4.565676 (-2.197315) | 0.145659 / 0.424275 (-0.278616) | 0.013813 / 0.007607 (0.006206) | 0.782495 / 0.226044 (0.556451) | 7.861619 / 2.268929 (5.592690) | 3.241001 / 55.444624 (-52.203623) | 2.611025 / 6.876477 (-4.265452) | 2.667263 / 2.142072 (0.525191) | 1.429992 / 4.805227 (-3.375235) | 0.243008 / 6.500664 (-6.257656) | 0.083686 / 0.075469 (0.008217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.565526 / 1.841788 (-0.276262) | 18.260815 / 8.074308 (10.186507) | 22.586133 / 10.191392 (12.394741) | 0.231864 / 0.680424 (-0.448559) | 0.030877 / 0.534201 (-0.503324) | 0.569726 / 0.579283 (-0.009557) | 0.678638 / 0.434364 (0.244274) | 0.611810 / 0.540337 (0.071472) | 0.718771 / 1.386936 (-0.668165) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009398 / 0.011353 (-0.001955) | 0.006452 / 0.011008 (-0.004556) | 0.103352 / 0.038508 (0.064844) | 0.034773 / 0.023109 (0.011664) | 0.523782 / 0.275898 (0.247884) | 0.523554 / 0.323480 (0.200074) | 0.006990 / 0.007986 (-0.000996) | 0.004994 / 0.004328 (0.000666) | 0.102199 / 0.004250 (0.097949) | 0.050087 / 0.037052 (0.013035) | 0.496662 / 0.258489 (0.238173) | 0.563130 / 0.293841 (0.269289) | 0.052851 / 0.128546 (-0.075695) | 0.019824 / 0.075646 (-0.055822) | 0.122657 / 0.419271 (-0.296614) | 0.057714 / 0.043533 (0.014181) | 0.470502 / 0.255139 (0.215363) | 0.518908 / 0.283200 (0.235708) | 0.114374 / 0.141683 (-0.027309) | 1.795918 / 1.452155 (0.343763) | 1.957461 / 1.492716 (0.464744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.303921 / 0.018006 (0.285915) | 0.584406 / 0.000490 (0.583916) | 0.000444 / 0.000200 (0.000244) | 0.000096 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032254 / 0.037411 (-0.005158) | 0.129966 / 0.014526 (0.115440) | 0.151000 / 0.176557 (-0.025557) | 0.234060 / 0.737135 (-0.503076) | 0.149444 / 0.296338 (-0.146895) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666627 / 0.215209 (0.451418) | 7.054701 / 2.077655 (4.977046) | 2.836895 / 1.504120 (1.332775) | 2.561994 / 1.541195 (1.020799) | 2.672460 / 1.468490 (1.203970) | 1.411929 / 4.584777 (-3.172848) | 6.026918 / 3.745712 (2.281206) | 3.341745 / 5.269862 (-1.928116) | 2.280317 / 4.565676 (-2.285359) | 0.156635 / 0.424275 (-0.267641) | 0.014256 / 0.007607 (0.006649) | 0.804830 / 0.226044 (0.578786) | 8.106960 / 2.268929 (5.838031) | 3.597452 / 55.444624 (-51.847172) | 3.002847 / 6.876477 (-3.873630) | 2.931160 / 2.142072 (0.789088) | 1.484172 / 4.805227 (-3.321056) | 0.254166 / 6.500664 (-6.246498) | 0.080554 / 0.075469 (0.005085) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.809909 / 1.841788 (-0.031879) | 18.988994 / 8.074308 (10.914686) | 23.153442 / 10.191392 (12.962050) | 0.250554 / 0.680424 (-0.429870) | 0.048677 / 0.534201 (-0.485524) | 0.574109 / 0.579283 (-0.005174) | 0.640917 / 0.434364 (0.206553) | 0.725215 / 0.540337 (0.184878) | 0.878234 / 1.386936 (-0.508702) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e3667d6e17d68503469c8e88ec344b7cccfa2346 \"CML watermark\")\n"
] | 2023-03-29T18:24:13 | 2023-03-29T18:33:49 | 2023-03-29T18:24:22 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5686/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5686",
"html_url": "https://github.com/huggingface/datasets/pull/5686",
"diff_url": "https://github.com/huggingface/datasets/pull/5686.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5686.patch",
"merged_at": "2023-03-29T18:24:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5685/comments | https://api.github.com/repos/huggingface/datasets/issues/5685/events | https://github.com/huggingface/datasets/issues/5685 | 1,646,048,667 | I_kwDODunzps5iHLWb | 5,685 | Broken Image render on the hub website | {
"login": "FrancescoSaverioZuppichini",
"id": 15908060,
"node_id": "MDQ6VXNlcjE1OTA4MDYw",
"avatar_url": "https://avatars.githubusercontent.com/u/15908060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FrancescoSaverioZuppichini",
"html_url": "https://github.com/FrancescoSaverioZuppichini",
"followers_url": "https://api.github.com/users/FrancescoSaverioZuppichini/followers",
"following_url": "https://api.github.com/users/FrancescoSaverioZuppichini/following{/other_user}",
"gists_url": "https://api.github.com/users/FrancescoSaverioZuppichini/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FrancescoSaverioZuppichini/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FrancescoSaverioZuppichini/subscriptions",
"organizations_url": "https://api.github.com/users/FrancescoSaverioZuppichini/orgs",
"repos_url": "https://api.github.com/users/FrancescoSaverioZuppichini/repos",
"events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/events{/privacy}",
"received_events_url": "https://api.github.com/users/FrancescoSaverioZuppichini/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou can fix the viewer by adding the `dataset_info` YAML field deleted in https://huggingface.co/datasets/Francesco/cell-towers/commit/b95b59ddd91ebe9c12920f0efe0ed415cd0d4298 back to the metadata section of the card. \r\n\r\nTo avoid this issue in the feature, you can use `huggingface_hub`'s [RepoCard](https://huggingface.co/docs/huggingface_hub/package_reference/cards) API to update the dataset card instead of `upload_file`:\r\n```python\r\nfrom huggingface_hub import DatasetCard\r\n# Load card\r\ncard = DatasetCard.load(\"<namespace>/<repo_id>\")\r\n# Modify card content\r\ncard.content = ...\r\n# Push card to the Hub\r\ncard.push_to_hub(\"<namespace>/<repo_id>\")\r\n```\r\n\r\nHowever, the best solution would be to use the features info stored in the header of the Parquet shards generated with `push_to_hub` on the viewer side to avoid unexpected issues such as this one. This shouldn't be too hard to address.",
"Thanks for reporting @FrancescoSaverioZuppichini.\r\n\r\nFor future issues with your specific dataset, you can use its \"Community\" tab to start a conversation: https://huggingface.co/datasets/Francesco/cell-towers/discussions/new",
"Thanks @albertvillanova , @mariosasko I was not aware of this requirement from the doc (must have skipped :sweat_smile: )\r\n\r\nConfirmed, adding back `dataset_info` fixed the issu"
] | 2023-03-29T15:25:30 | 2023-03-30T07:54:25 | 2023-03-30T07:54:25 | NONE | null | ### Describe the bug
Hi :wave:
Not sure if this is the right place to ask, but I am trying to upload a large number of datasets to the hub (:partying_face: ), and I am facing a small issue with the `image` type
![image](https://user-images.githubusercontent.com/15908060/228587875-427a37f1-3a31-4e17-8bbe-0f759003910d.png)
See this [dataset](https://huggingface.co/datasets/Francesco/cell-towers): for some reason the first image has numerical bytes inside (not sure if that is okay), and the image render feature **doesn't work**
So the dataset is stored in the following way
```python
builder.download_and_prepare(output_dir=str(output_dir))
ds = builder.as_dataset(split="train")
# [NOTE] no idea how to push it from the builder folder
ds.push_to_hub(repo_id=repo_id)
builder.as_dataset(split="validation").push_to_hub(repo_id=repo_id)
ds = builder.as_dataset(split="test")
ds.push_to_hub(repo_id=repo_id)
```
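As a quick sanity check before pushing (a minimal sketch, reusing the `builder` object from the snippet above; the split name is just an example), the feature type and the first decoded example can be inspected locally:
```python
# Minimal sketch: inspect a split locally before pushing it to the Hub.
# Assumes `builder` is the already-prepared builder from the snippet above.
ds = builder.as_dataset(split="train")

# The "image" column should be typed as datasets.Image() ...
print(ds.features["image"])

# ... and accessing an example should decode the stored bytes into a PIL image.
example = ds[0]
print(type(example["image"]))  # expected to be a PIL.Image subclass
print(example["image"].size)
```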
The builder is this class
```python
class COCOLikeDatasetBuilder(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    def _info(self):
        features = datasets.Features(
            {
                "image_id": datasets.Value("int64"),
                "image": datasets.Image(),
                "width": datasets.Value("int32"),
                "height": datasets.Value("int32"),
                "objects": datasets.Sequence(
                    {
                        "id": datasets.Value("int64"),
                        "area": datasets.Value("int64"),
                        "bbox": datasets.Sequence(
                            datasets.Value("float32"), length=4
                        ),
                        "category": datasets.ClassLabel(names=categories),
                    }
                ),
            }
        )
        return datasets.DatasetInfo(
            description=description,
            features=features,
            homepage=homepage,
            license=license,
            citation=citation,
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "annotation_file_path": "train/_annotations.coco.json",
                    "files": dl_manager.iter_archive(archive),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "annotation_file_path": "test/_annotations.coco.json",
                    "files": dl_manager.iter_archive(archive),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "annotation_file_path": "valid/_annotations.coco.json",
                    "files": dl_manager.iter_archive(archive),
                },
            ),
        ]

    def _generate_examples(self, annotation_file_path, files):
        def process_annot(annot, category_id_to_category):
            return {
                "id": annot["id"],
                "area": annot["area"],
                "bbox": annot["bbox"],
                "category": category_id_to_category[annot["category_id"]],
            }

        image_id_to_image = {}
        idx = 0
        # This loop relies on the ordering of the files in the archive:
        # Annotation files come first, then the images.
        for path, f in files:
            file_name = os.path.basename(path)
            if annotation_file_path in path:
                annotations = json.load(f)
                category_id_to_category = {
                    category["id"]: category["name"]
                    for category in annotations["categories"]
                }
                print(category_id_to_category)
                image_id_to_annotations = collections.defaultdict(list)
                for annot in annotations["annotations"]:
                    image_id_to_annotations[annot["image_id"]].append(annot)
                image_id_to_image = {
                    annot["file_name"]: annot for annot in annotations["images"]
                }
            elif file_name in image_id_to_image:
                image = image_id_to_image[file_name]
                objects = [
                    process_annot(annot, category_id_to_category)
                    for annot in image_id_to_annotations[image["id"]]
                ]
                print(file_name)
                yield idx, {
                    "image_id": image["id"],
                    "image": {"path": path, "bytes": f.read()},
                    "width": image["width"],
                    "height": image["height"],
                    "objects": objects,
                }
                idx += 1
```
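For reference, a minimal sketch of exercising this builder end to end, assuming the class and the module-level names it relies on (`url`, `categories`, `description`, `homepage`, `license`, `citation`) live in a local loading script, here hypothetically called `coco_like.py`:
```python
import datasets

# Hypothetical script name: the builder class above plus its module-level
# constants are assumed to be defined in this file.
dset = datasets.load_dataset("coco_like.py", split="train")

print(dset.features)       # should show the Image() and Sequence(...) features
print(dset[0]["objects"])  # per-example dict of lists: id, area, bbox, category
```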
Basically, I want to add to the hub every dataset I come across in COCO format
Thanks
Fra
### Steps to reproduce the bug
In this case, you can just navigate to the [dataset](https://huggingface.co/datasets/Francesco/cell-towers)
### Expected behavior
I was expecting the image rendering feature to work
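As a comparison point, here is a minimal sketch (using the repo id from the link above) of checking whether the pushed examples decode correctly when loaded back with `datasets`, independently of the website preview:
```python
from datasets import load_dataset

# Load the pushed dataset back from the Hub and decode the first image locally.
ds = load_dataset("Francesco/cell-towers", split="train")
img = ds[0]["image"]  # the Image() feature decodes lazily into a PIL image
print(img.size, img.mode)
```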
### Environment info
Not a lot to share, I am using `datasets` from a fresh venv | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5685/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5684/comments | https://api.github.com/repos/huggingface/datasets/issues/5684/events | https://github.com/huggingface/datasets/pull/5684 | 1,646,013,226 | PR_kwDODunzps5NLXWm | 5,684 | Release: 2.11.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007017 / 0.011353 (-0.004335) | 0.004917 / 0.011008 (-0.006091) | 0.098391 / 0.038508 (0.059883) | 0.032677 / 0.023109 (0.009568) | 0.312126 / 0.275898 (0.036227) | 0.352477 / 0.323480 (0.028998) | 0.005960 / 0.007986 (-0.002025) | 0.003801 / 0.004328 (-0.000528) | 0.073916 / 0.004250 (0.069666) | 0.045610 / 0.037052 (0.008557) | 0.319626 / 0.258489 (0.061137) | 0.370575 / 0.293841 (0.076734) | 0.035888 / 0.128546 (-0.092658) | 0.012012 / 0.075646 (-0.063635) | 0.338290 / 0.419271 (-0.080982) | 0.049452 / 0.043533 (0.005919) | 0.301226 / 0.255139 (0.046087) | 0.336744 / 0.283200 (0.053545) | 0.100835 / 0.141683 (-0.040847) | 1.500008 / 1.452155 (0.047853) | 1.566757 / 1.492716 (0.074041) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220668 / 0.018006 (0.202662) | 0.449273 / 0.000490 (0.448784) | 0.003861 / 0.000200 (0.003661) | 0.000126 / 0.000054 (0.000072) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026847 / 0.037411 (-0.010565) | 0.105916 / 0.014526 (0.091390) | 0.116245 / 0.176557 (-0.060312) | 0.172617 / 0.737135 (-0.564519) | 0.122846 / 0.296338 (-0.173492) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417906 / 0.215209 (0.202697) | 4.169092 / 2.077655 (2.091437) | 1.934439 / 1.504120 (0.430319) | 1.735718 / 1.541195 (0.194523) | 1.828205 / 1.468490 
(0.359715) | 0.697446 / 4.584777 (-3.887331) | 3.802830 / 3.745712 (0.057118) | 3.686464 / 5.269862 (-1.583398) | 1.863924 / 4.565676 (-2.701752) | 0.086520 / 0.424275 (-0.337755) | 0.012101 / 0.007607 (0.004493) | 0.521252 / 0.226044 (0.295208) | 5.200937 / 2.268929 (2.932009) | 2.414290 / 55.444624 (-53.030334) | 2.070890 / 6.876477 (-4.805587) | 2.237693 / 2.142072 (0.095621) | 0.843417 / 4.805227 (-3.961811) | 0.167856 / 6.500664 (-6.332809) | 0.064997 / 0.075469 (-0.010472) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212334 / 1.841788 (-0.629454) | 14.710632 / 8.074308 (6.636324) | 14.877489 / 10.191392 (4.686097) | 0.151268 / 0.680424 (-0.529156) | 0.018663 / 0.534201 (-0.515538) | 0.429678 / 0.579283 (-0.149605) | 0.425054 / 0.434364 (-0.009310) | 0.502804 / 0.540337 (-0.037533) | 0.587932 / 1.386936 (-0.799004) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007462 / 0.011353 (-0.003891) | 0.005307 / 0.011008 (-0.005701) | 0.074309 / 0.038508 (0.035801) | 0.033437 / 0.023109 (0.010328) | 0.355087 / 0.275898 (0.079189) | 0.391417 / 0.323480 (0.067937) | 0.005904 / 0.007986 (-0.002082) | 0.004062 / 0.004328 (-0.000266) | 0.073801 / 0.004250 (0.069550) | 0.048503 / 0.037052 (0.011451) | 0.359547 / 0.258489 (0.101058) | 0.405325 / 0.293841 (0.111484) | 0.036615 / 0.128546 (-0.091931) | 0.012185 / 0.075646 (-0.063461) | 0.086829 / 0.419271 (-0.332443) | 0.049101 / 0.043533 (0.005569) | 0.334259 / 0.255139 (0.079120) | 0.376317 / 0.283200 (0.093117) | 0.099935 / 0.141683 (-0.041748) | 1.483166 / 1.452155 (0.031011) | 1.569092 / 1.492716 (0.076375) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207528 / 0.018006 (0.189521) | 0.437473 / 0.000490 (0.436983) | 0.004915 / 0.000200 (0.004715) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028632 / 0.037411 (-0.008780) | 0.111782 / 0.014526 (0.097256) | 0.122545 / 0.176557 (-0.054011) | 0.171191 / 0.737135 (-0.565945) | 0.128999 / 0.296338 (-0.167339) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424422 / 0.215209 (0.209213) | 4.239488 / 2.077655 (2.161833) | 2.027969 / 1.504120 (0.523849) | 1.800667 / 1.541195 (0.259473) | 1.898701 / 1.468490 (0.430211) | 0.711453 / 4.584777 (-3.873324) | 3.766696 / 3.745712 (0.020984) | 2.107530 / 5.269862 (-3.162331) | 1.347137 / 4.565676 (-3.218540) | 0.086823 / 0.424275 (-0.337452) | 0.012137 / 0.007607 (0.004530) | 0.523143 / 0.226044 (0.297099) | 5.273434 / 2.268929 (3.004505) | 2.545463 / 55.444624 (-52.899161) | 2.246683 / 6.876477 (-4.629793) | 2.296862 / 2.142072 (0.154789) | 0.855690 / 4.805227 (-3.949538) | 0.168526 / 6.500664 (-6.332138) | 0.063392 / 0.075469 (-0.012078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.248926 / 1.841788 (-0.592862) | 14.676308 / 8.074308 (6.602000) | 14.524364 / 10.191392 (4.332972) | 0.184138 / 0.680424 (-0.496286) | 0.017259 / 0.534201 (-0.516942) | 0.433875 / 0.579283 (-0.145408) | 0.416787 / 0.434364 (-0.017577) | 0.532391 / 0.540337 (-0.007947) | 0.628572 / 1.386936 (-0.758364) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3929cc227a474ce0c716146c8d14ae94f8a7625b \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006469 / 0.011353 (-0.004884) | 0.004499 / 0.011008 (-0.006510) | 0.098856 / 0.038508 (0.060348) | 0.027753 / 0.023109 (0.004644) | 0.321348 / 0.275898 (0.045450) | 0.351480 / 0.323480 (0.028000) | 0.004949 / 0.007986 (-0.003036) | 0.004655 / 0.004328 (0.000327) | 0.076732 / 0.004250 (0.072482) | 0.036175 / 0.037052 (-0.000878) | 0.310111 / 0.258489 (0.051622) | 0.372427 / 0.293841 (0.078586) | 0.031947 / 0.128546 (-0.096599) | 0.011669 / 0.075646 (-0.063977) | 0.323086 / 0.419271 (-0.096186) | 0.043578 / 0.043533 (0.000045) | 0.325549 / 0.255139 (0.070410) | 0.363827 / 0.283200 (0.080627) | 0.087819 / 0.141683 (-0.053864) | 1.479429 / 1.452155 (0.027274) | 1.549797 / 1.492716 (0.057080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178502 / 0.018006 (0.160496) | 0.415954 / 0.000490 (0.415465) | 0.008767 / 0.000200 (0.008567) | 0.000429 / 0.000054 (0.000375) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023639 / 0.037411 (-0.013772) | 0.096266 / 0.014526 (0.081740) | 0.106406 / 0.176557 (-0.070151) | 0.168819 / 0.737135 (-0.568317) | 0.109158 / 0.296338 (-0.187181) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420729 / 0.215209 (0.205520) | 4.219469 / 2.077655 (2.141814) | 1.885673 / 1.504120 (0.381553) | 1.681868 / 1.541195 (0.140674) | 1.709240 / 1.468490 
(0.240749) | 0.694763 / 4.584777 (-3.890014) | 3.395377 / 3.745712 (-0.350335) | 1.846811 / 5.269862 (-3.423051) | 1.158381 / 4.565676 (-3.407296) | 0.082717 / 0.424275 (-0.341558) | 0.012302 / 0.007607 (0.004695) | 0.518148 / 0.226044 (0.292103) | 5.189590 / 2.268929 (2.920661) | 2.294127 / 55.444624 (-53.150498) | 1.960080 / 6.876477 (-4.916397) | 2.045359 / 2.142072 (-0.096713) | 0.803739 / 4.805227 (-4.001488) | 0.152322 / 6.500664 (-6.348342) | 0.067051 / 0.075469 (-0.008418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206582 / 1.841788 (-0.635206) | 13.590515 / 8.074308 (5.516207) | 14.083739 / 10.191392 (3.892347) | 0.128738 / 0.680424 (-0.551686) | 0.016577 / 0.534201 (-0.517624) | 0.375499 / 0.579283 (-0.203784) | 0.383256 / 0.434364 (-0.051108) | 0.439441 / 0.540337 (-0.100896) | 0.518102 / 1.386936 (-0.868834) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006708 / 0.011353 (-0.004645) | 0.004591 / 0.011008 (-0.006417) | 0.076512 / 0.038508 (0.038004) | 0.027977 / 0.023109 (0.004868) | 0.341915 / 0.275898 (0.066017) | 0.374381 / 0.323480 (0.050901) | 0.004985 / 0.007986 (-0.003001) | 0.003374 / 0.004328 (-0.000954) | 0.075334 / 0.004250 (0.071083) | 0.037522 / 0.037052 (0.000470) | 0.341702 / 0.258489 (0.083213) | 0.384342 / 0.293841 (0.090501) | 0.032231 / 0.128546 (-0.096315) | 0.011494 / 0.075646 (-0.064153) | 0.084897 / 0.419271 (-0.334375) | 0.041914 / 0.043533 (-0.001619) | 0.342030 / 0.255139 (0.086891) | 0.371024 / 0.283200 (0.087825) | 0.089936 / 0.141683 (-0.051746) | 1.497242 / 1.452155 (0.045087) | 1.585203 / 1.492716 (0.092486) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227681 / 0.018006 (0.209674) | 0.398995 / 0.000490 (0.398505) | 0.003232 / 0.000200 (0.003032) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024705 / 0.037411 (-0.012706) | 0.099906 / 0.014526 (0.085380) | 0.106806 / 0.176557 (-0.069750) | 0.157521 / 0.737135 (-0.579614) | 0.110803 / 0.296338 (-0.185535) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457442 / 0.215209 (0.242233) | 4.580101 / 2.077655 (2.502446) | 2.094687 / 1.504120 (0.590567) | 1.880722 / 1.541195 (0.339528) | 1.938746 / 1.468490 (0.470256) | 0.700933 / 4.584777 (-3.883844) | 3.416278 / 3.745712 (-0.329434) | 2.852183 / 5.269862 (-2.417679) | 1.602659 / 4.565676 (-2.963017) | 0.083949 / 0.424275 (-0.340326) | 0.012255 / 0.007607 (0.004648) | 0.551631 / 0.226044 (0.325586) | 5.539225 / 2.268929 (3.270296) | 2.707298 / 55.444624 (-52.737326) | 2.354720 / 6.876477 (-4.521757) | 2.320790 / 2.142072 (0.178717) | 0.807152 / 4.805227 (-3.998075) | 0.152048 / 6.500664 (-6.348616) | 0.067723 / 0.075469 (-0.007746) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.295690 / 1.841788 (-0.546097) | 13.738082 / 8.074308 (5.663774) | 14.129549 / 10.191392 (3.938157) | 0.161568 / 0.680424 (-0.518855) | 0.016678 / 0.534201 (-0.517522) | 0.386609 / 0.579283 (-0.192674) | 0.383538 / 0.434364 (-0.050826) | 0.477872 / 0.540337 (-0.062465) | 0.564547 / 1.386936 (-0.822389) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2ab4c98618bce7c1f60ce96d4a853a940ae4b250 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007247 / 0.011353 (-0.004106) | 0.005044 / 0.011008 (-0.005964) | 0.095135 / 0.038508 (0.056627) | 0.033622 / 0.023109 (0.010513) | 0.309969 / 0.275898 (0.034071) | 0.340354 / 0.323480 (0.016875) | 0.005635 / 0.007986 (-0.002351) | 0.003938 / 0.004328 (-0.000391) | 0.072089 / 0.004250 (0.067838) | 0.045592 / 0.037052 (0.008539) | 0.316620 / 0.258489 (0.058131) | 0.358174 / 0.293841 (0.064333) | 0.036446 / 0.128546 (-0.092100) | 0.011961 / 0.075646 (-0.063685) | 0.332299 / 0.419271 (-0.086973) | 0.049955 / 0.043533 (0.006422) | 0.307638 / 0.255139 (0.052499) | 0.331719 / 0.283200 (0.048519) | 0.095115 / 0.141683 (-0.046568) | 1.457960 / 1.452155 (0.005806) | 1.502812 / 1.492716 (0.010096) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223747 / 0.018006 (0.205740) | 0.444837 / 0.000490 (0.444347) | 0.002583 / 0.000200 (0.002383) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026461 / 0.037411 (-0.010951) | 0.103946 / 0.014526 (0.089420) | 0.114355 / 0.176557 (-0.062201) | 0.170076 / 0.737135 (-0.567059) | 0.121087 / 0.296338 (-0.175252) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403252 / 0.215209 (0.188043) | 4.016911 / 2.077655 (1.939257) | 1.787168 / 1.504120 (0.283048) | 1.605206 / 1.541195 (0.064012) | 1.657012 / 1.468490 
(0.188522) | 0.701425 / 4.584777 (-3.883352) | 3.818308 / 3.745712 (0.072596) | 3.493757 / 5.269862 (-1.776105) | 1.860534 / 4.565676 (-2.705142) | 0.084994 / 0.424275 (-0.339281) | 0.011904 / 0.007607 (0.004297) | 0.534199 / 0.226044 (0.308155) | 4.992703 / 2.268929 (2.723774) | 2.286231 / 55.444624 (-53.158393) | 1.918163 / 6.876477 (-4.958314) | 2.029811 / 2.142072 (-0.112262) | 0.837532 / 4.805227 (-3.967695) | 0.168545 / 6.500664 (-6.332119) | 0.062866 / 0.075469 (-0.012604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172862 / 1.841788 (-0.668926) | 14.966793 / 8.074308 (6.892485) | 14.202079 / 10.191392 (4.010687) | 0.144688 / 0.680424 (-0.535736) | 0.017499 / 0.534201 (-0.516702) | 0.443081 / 0.579283 (-0.136202) | 0.427496 / 0.434364 (-0.006868) | 0.525182 / 0.540337 (-0.015155) | 0.611849 / 1.386936 (-0.775087) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007264 / 0.011353 (-0.004089) | 0.005106 / 0.011008 (-0.005902) | 0.074101 / 0.038508 (0.035593) | 0.033388 / 0.023109 (0.010279) | 0.337108 / 0.275898 (0.061210) | 0.369820 / 0.323480 (0.046340) | 0.005701 / 0.007986 (-0.002284) | 0.003976 / 0.004328 (-0.000353) | 0.073517 / 0.004250 (0.069267) | 0.048741 / 0.037052 (0.011688) | 0.339118 / 0.258489 (0.080629) | 0.398687 / 0.293841 (0.104846) | 0.036661 / 0.128546 (-0.091886) | 0.012082 / 0.075646 (-0.063564) | 0.086743 / 0.419271 (-0.332529) | 0.050150 / 0.043533 (0.006617) | 0.335572 / 0.255139 (0.080433) | 0.354306 / 0.283200 (0.071107) | 0.102074 / 0.141683 (-0.039609) | 1.442911 / 1.452155 (-0.009244) | 1.531564 / 1.492716 (0.038848) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183163 / 0.018006 (0.165157) | 0.439273 / 0.000490 (0.438783) | 0.002765 / 0.000200 (0.002565) | 0.000225 / 0.000054 (0.000171) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028185 / 0.037411 (-0.009227) | 0.107337 / 0.014526 (0.092811) | 0.119925 / 0.176557 (-0.056631) | 0.172120 / 0.737135 (-0.565015) | 0.124332 / 0.296338 (-0.172007) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428750 / 0.215209 (0.213541) | 4.268933 / 2.077655 (2.191279) | 2.050135 / 1.504120 (0.546015) | 1.837567 / 1.541195 (0.296372) | 1.907040 / 1.468490 (0.438549) | 0.694162 / 4.584777 (-3.890615) | 3.831542 / 3.745712 (0.085830) | 3.476580 / 5.269862 (-1.793281) | 1.855097 / 4.565676 (-2.710580) | 0.085816 / 0.424275 (-0.338459) | 0.012195 / 0.007607 (0.004588) | 0.544920 / 0.226044 (0.318876) | 5.332977 / 2.268929 (3.064049) | 2.592097 / 55.444624 (-52.852527) | 2.295411 / 6.876477 (-4.581065) | 2.330803 / 2.142072 (0.188730) | 0.833268 / 4.805227 (-3.971959) | 0.177698 / 6.500664 (-6.322966) | 0.063780 / 0.075469 (-0.011689) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273361 / 1.841788 (-0.568427) | 14.981380 / 8.074308 (6.907072) | 14.395166 / 10.191392 (4.203774) | 0.186590 / 0.680424 (-0.493834) | 0.017676 / 0.534201 (-0.516525) | 0.432100 / 0.579283 (-0.147183) | 0.422490 / 0.434364 (-0.011874) | 0.531421 / 0.540337 (-0.008916) | 0.628548 / 1.386936 (-0.758388) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009005 / 0.011353 (-0.002348) | 0.005803 / 0.011008 (-0.005205) | 0.103491 / 0.038508 (0.064983) | 0.048099 / 0.023109 (0.024990) | 0.304026 / 0.275898 (0.028128) | 0.340840 / 0.323480 (0.017360) | 0.006782 / 0.007986 (-0.001204) | 0.004625 / 0.004328 (0.000296) | 0.076695 / 0.004250 (0.072445) | 0.057541 / 0.037052 (0.020489) | 0.304015 / 0.258489 (0.045526) | 0.347822 / 0.293841 (0.053981) | 0.037904 / 0.128546 (-0.090642) | 0.012686 / 0.075646 (-0.062960) | 0.368093 / 0.419271 (-0.051179) | 0.051795 / 0.043533 (0.008262) | 0.302553 / 0.255139 (0.047415) | 0.328581 / 0.283200 (0.045381) | 0.108947 / 0.141683 (-0.032736) | 1.449770 / 1.452155 (-0.002385) | 1.541944 / 1.492716 (0.049227) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207529 / 0.018006 (0.189523) | 0.455313 / 0.000490 (0.454823) | 0.008276 / 0.000200 (0.008076) | 0.000322 / 0.000054 (0.000268) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030564 / 0.037411 (-0.006848) | 0.122790 / 0.014526 (0.108264) | 0.126981 / 0.176557 (-0.049576) | 0.187203 / 0.737135 (-0.549932) | 0.129931 / 0.296338 (-0.166408) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402680 / 0.215209 (0.187471) | 4.017505 / 2.077655 (1.939850) | 1.801480 / 1.504120 (0.297360) | 1.647984 / 1.541195 (0.106790) | 1.702596 / 1.468490 
(0.234106) | 0.717469 / 4.584777 (-3.867308) | 3.793813 / 3.745712 (0.048101) | 2.288014 / 5.269862 (-2.981848) | 1.497545 / 4.565676 (-3.068132) | 0.091241 / 0.424275 (-0.333034) | 0.013115 / 0.007607 (0.005508) | 0.498567 / 0.226044 (0.272522) | 4.990203 / 2.268929 (2.721275) | 2.334983 / 55.444624 (-53.109642) | 2.047888 / 6.876477 (-4.828589) | 2.167825 / 2.142072 (0.025753) | 0.863769 / 4.805227 (-3.941459) | 0.172699 / 6.500664 (-6.327965) | 0.069285 / 0.075469 (-0.006184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.397331 / 1.841788 (-0.444457) | 16.678240 / 8.074308 (8.603932) | 16.665143 / 10.191392 (6.473751) | 0.151011 / 0.680424 (-0.529412) | 0.018303 / 0.534201 (-0.515898) | 0.445389 / 0.579283 (-0.133894) | 0.444644 / 0.434364 (0.010280) | 0.524647 / 0.540337 (-0.015690) | 0.629747 / 1.386936 (-0.757189) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008853 / 0.011353 (-0.002499) | 0.006196 / 0.011008 (-0.004813) | 0.078595 / 0.038508 (0.040087) | 0.048348 / 0.023109 (0.025239) | 0.347038 / 0.275898 (0.071140) | 0.385807 / 0.323480 (0.062327) | 0.007047 / 0.007986 (-0.000938) | 0.004772 / 0.004328 (0.000443) | 0.076116 / 0.004250 (0.071866) | 0.058805 / 0.037052 (0.021752) | 0.345731 / 0.258489 (0.087242) | 0.401589 / 0.293841 (0.107748) | 0.039349 / 0.128546 (-0.089197) | 0.012949 / 0.075646 (-0.062697) | 0.089761 / 0.419271 (-0.329511) | 0.060001 / 0.043533 (0.016468) | 0.351587 / 0.255139 (0.096448) | 0.377708 / 0.283200 (0.094509) | 0.117391 / 0.141683 (-0.024292) | 1.471622 / 1.452155 (0.019467) | 1.568759 / 1.492716 (0.076042) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191390 / 0.018006 (0.173384) | 0.469033 / 0.000490 (0.468544) | 0.003615 / 0.000200 (0.003415) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032706 / 0.037411 (-0.004706) | 0.127095 / 0.014526 (0.112569) | 0.128755 / 0.176557 (-0.047801) | 0.182590 / 0.737135 (-0.554545) | 0.136939 / 0.296338 (-0.159400) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427392 / 0.215209 (0.212183) | 4.246708 / 2.077655 (2.169053) | 2.115557 / 1.504120 (0.611437) | 2.021221 / 1.541195 (0.480026) | 2.177559 / 1.468490 (0.709069) | 0.713930 / 4.584777 (-3.870847) | 4.192467 / 3.745712 (0.446755) | 3.645437 / 5.269862 (-1.624424) | 1.964986 / 4.565676 (-2.600690) | 0.089436 / 0.424275 (-0.334839) | 0.012917 / 0.007607 (0.005310) | 0.530468 / 0.226044 (0.304423) | 5.310759 / 2.268929 (3.041831) | 2.613566 / 55.444624 (-52.831058) | 2.350443 / 6.876477 (-4.526034) | 2.385278 / 2.142072 (0.243205) | 0.862838 / 4.805227 (-3.942389) | 0.172246 / 6.500664 (-6.328418) | 0.069570 / 0.075469 (-0.005899) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310008 / 1.841788 (-0.531780) | 16.557079 / 8.074308 (8.482771) | 15.818145 / 10.191392 (5.626752) | 0.180337 / 0.680424 (-0.500087) | 0.018117 / 0.534201 (-0.516083) | 0.433189 / 0.579283 (-0.146095) | 0.429276 / 0.434364 (-0.005088) | 0.539757 / 0.540337 (-0.000580) | 0.640905 / 1.386936 (-0.746031) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3b16e08dd599f4646a77a5ca88b6445467e1e7e9 \"CML watermark\")\n"
] | 2023-03-29T15:06:07 | 2023-03-29T18:30:34 | 2023-03-29T18:15:54 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5684/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5684",
"html_url": "https://github.com/huggingface/datasets/pull/5684",
"diff_url": "https://github.com/huggingface/datasets/pull/5684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5684.patch",
"merged_at": "2023-03-29T18:15:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5683/comments | https://api.github.com/repos/huggingface/datasets/issues/5683/events | https://github.com/huggingface/datasets/pull/5683 | 1,646,001,197 | PR_kwDODunzps5NLUq1 | 5,683 | Fix verification_mode when ignore_verifications is passed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006935 / 0.011353 (-0.004418) | 0.004711 / 0.011008 (-0.006297) | 0.098461 / 0.038508 (0.059953) | 0.028889 / 0.023109 (0.005780) | 0.332167 / 0.275898 (0.056269) | 0.363309 / 0.323480 (0.039829) | 0.005179 / 0.007986 (-0.002807) | 0.004783 / 0.004328 (0.000455) | 0.074293 / 0.004250 (0.070043) | 0.038778 / 0.037052 (0.001726) | 0.318871 / 0.258489 (0.060382) | 0.362975 / 0.293841 (0.069134) | 0.032897 / 0.128546 (-0.095649) | 0.011685 / 0.075646 (-0.063961) | 0.322824 / 0.419271 (-0.096447) | 0.043842 / 0.043533 (0.000309) | 0.334789 / 0.255139 (0.079650) | 0.352922 / 0.283200 (0.069723) | 0.089692 / 0.141683 (-0.051991) | 1.490110 / 1.452155 (0.037955) | 1.601530 / 1.492716 (0.108813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201882 / 0.018006 (0.183875) | 0.410875 / 0.000490 (0.410385) | 0.002472 / 0.000200 (0.002272) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023636 / 0.037411 (-0.013775) | 0.102168 / 0.014526 (0.087642) | 0.107247 / 0.176557 (-0.069310) | 0.171858 / 0.737135 (-0.565278) | 0.110619 / 0.296338 (-0.185720) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.433740 / 0.215209 (0.218531) | 4.332121 / 2.077655 (2.254466) | 2.075398 / 1.504120 (0.571278) | 1.941074 / 1.541195 (0.399879) | 2.033331 / 1.468490 
(0.564841) | 0.697134 / 4.584777 (-3.887643) | 3.463855 / 3.745712 (-0.281857) | 3.080446 / 5.269862 (-2.189416) | 1.575020 / 4.565676 (-2.990656) | 0.083054 / 0.424275 (-0.341221) | 0.012454 / 0.007607 (0.004847) | 0.537996 / 0.226044 (0.311951) | 5.366765 / 2.268929 (3.097836) | 2.464398 / 55.444624 (-52.980227) | 2.143912 / 6.876477 (-4.732564) | 2.245706 / 2.142072 (0.103634) | 0.801397 / 4.805227 (-4.003831) | 0.150954 / 6.500664 (-6.349710) | 0.066758 / 0.075469 (-0.008711) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.216412 / 1.841788 (-0.625376) | 13.679322 / 8.074308 (5.605014) | 14.055286 / 10.191392 (3.863894) | 0.130264 / 0.680424 (-0.550160) | 0.016566 / 0.534201 (-0.517635) | 0.379126 / 0.579283 (-0.200157) | 0.390815 / 0.434364 (-0.043549) | 0.437586 / 0.540337 (-0.102751) | 0.526822 / 1.386936 (-0.860114) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006898 / 0.011353 (-0.004455) | 0.004705 / 0.011008 (-0.006304) | 0.078592 / 0.038508 (0.040084) | 0.028635 / 0.023109 (0.005525) | 0.340143 / 0.275898 (0.064245) | 0.377526 / 0.323480 (0.054047) | 0.005645 / 0.007986 (-0.002340) | 0.003533 / 0.004328 (-0.000796) | 0.078441 / 0.004250 (0.074191) | 0.039408 / 0.037052 (0.002356) | 0.342303 / 0.258489 (0.083814) | 0.386837 / 0.293841 (0.092996) | 0.032427 / 0.128546 (-0.096119) | 0.011763 / 0.075646 (-0.063883) | 0.087984 / 0.419271 (-0.331287) | 0.042126 / 0.043533 (-0.001406) | 0.339951 / 0.255139 (0.084812) | 0.366165 / 0.283200 (0.082966) | 0.091414 / 0.141683 (-0.050269) | 1.502034 / 1.452155 (0.049880) | 1.597901 / 1.492716 (0.105184) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232122 / 0.018006 (0.214115) | 0.410205 / 0.000490 (0.409715) | 0.000418 / 0.000200 (0.000218) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026013 / 0.037411 (-0.011399) | 0.105520 / 0.014526 (0.090995) | 0.108649 / 0.176557 (-0.067908) | 0.159324 / 0.737135 (-0.577811) | 0.114033 / 0.296338 (-0.182306) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.455634 / 0.215209 (0.240425) | 4.508544 / 2.077655 (2.430889) | 2.087065 / 1.504120 (0.582945) | 1.872622 / 1.541195 (0.331427) | 1.935617 / 1.468490 (0.467127) | 0.696909 / 4.584777 (-3.887868) | 3.449365 / 3.745712 (-0.296348) | 3.008399 / 5.269862 (-2.261462) | 1.459245 / 4.565676 (-3.106431) | 0.083637 / 0.424275 (-0.340638) | 0.012358 / 0.007607 (0.004750) | 0.547232 / 0.226044 (0.321187) | 5.522395 / 2.268929 (3.253466) | 2.691019 / 55.444624 (-52.753605) | 2.408083 / 6.876477 (-4.468394) | 2.369239 / 2.142072 (0.227166) | 0.807148 / 4.805227 (-3.998080) | 0.152030 / 6.500664 (-6.348634) | 0.067883 / 0.075469 (-0.007586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336956 / 1.841788 (-0.504832) | 14.403730 / 8.074308 (6.329422) | 14.854084 / 10.191392 (4.662692) | 0.146530 / 0.680424 (-0.533894) | 0.016611 / 0.534201 (-0.517590) | 0.398557 / 0.579283 (-0.180726) | 0.393194 / 0.434364 (-0.041170) | 0.486824 / 0.540337 (-0.053513) | 0.572844 / 1.386936 (-0.814092) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#411f9cc281e50954ea0c903e7a0a6618b3d31b9e \"CML watermark\")\n"
] | 2023-03-29T15:00:50 | 2023-03-29T17:36:06 | 2023-03-29T17:28:57 | MEMBER | null | This PR fixes the values assigned to `verification_mode` when passing `ignore_verifications` to `load_dataset`.
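In rough terms (a sketch of the intended mapping, not the actual patch, and assuming the `VerificationMode` enum that `datasets` exposes), the deprecated boolean should resolve to a valid enum member instead of the invalid string `'none'`:
```python
from datasets import VerificationMode

def resolve_verification_mode(ignore_verifications: bool) -> VerificationMode:
    # Sketch only: map the legacy boolean onto a valid VerificationMode member.
    return VerificationMode.NO_CHECKS if ignore_verifications else VerificationMode.BASIC_CHECKS
```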
Related to:
- #5303
Fix #5682. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5683/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5683",
"html_url": "https://github.com/huggingface/datasets/pull/5683",
"diff_url": "https://github.com/huggingface/datasets/pull/5683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5683.patch",
"merged_at": "2023-03-29T17:28:57"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5682/comments | https://api.github.com/repos/huggingface/datasets/issues/5682/events | https://github.com/huggingface/datasets/issues/5682 | 1,646,000,571 | I_kwDODunzps5iG_m7 | 5,682 | ValueError when passing ignore_verifications | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-03-29T15:00:30 | 2023-03-29T17:28:58 | 2023-03-29T17:28:58 | MEMBER | null | When passing `ignore_verifications=True` to `load_dataset`, we get a ValueError:
```
ValueError: 'none' is not a valid VerificationMode
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5682/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5681/comments | https://api.github.com/repos/huggingface/datasets/issues/5681/events | https://github.com/huggingface/datasets/issues/5681 | 1,645,630,784 | I_kwDODunzps5iFlVA | 5,681 | Add information about patterns search order to the doc about structuring repo | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] | 2023-03-29T11:44:49 | 2023-04-03T18:31:11 | 2023-04-03T18:31:11 | CONTRIBUTOR | null | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged loaders.
I have a déjà vu that it had already been discussed at some point, but I don't remember... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5681/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5680/comments | https://api.github.com/repos/huggingface/datasets/issues/5680/events | https://github.com/huggingface/datasets/pull/5680 | 1,645,430,103 | PR_kwDODunzps5NJYNz | 5,680 | Fix a description error for interleave_datasets. | {
"login": "QizhiPei",
"id": 55624066,
"node_id": "MDQ6VXNlcjU1NjI0MDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/55624066?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QizhiPei",
"html_url": "https://github.com/QizhiPei",
"followers_url": "https://api.github.com/users/QizhiPei/followers",
"following_url": "https://api.github.com/users/QizhiPei/following{/other_user}",
"gists_url": "https://api.github.com/users/QizhiPei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/QizhiPei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QizhiPei/subscriptions",
"organizations_url": "https://api.github.com/users/QizhiPei/orgs",
"repos_url": "https://api.github.com/users/QizhiPei/repos",
"events_url": "https://api.github.com/users/QizhiPei/events{/privacy}",
"received_events_url": "https://api.github.com/users/QizhiPei/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006772 / 0.011353 (-0.004581) | 0.004674 / 0.011008 (-0.006335) | 0.098702 / 0.038508 (0.060194) | 0.028257 / 0.023109 (0.005148) | 0.368008 / 0.275898 (0.092110) | 0.402825 / 0.323480 (0.079345) | 0.005158 / 0.007986 (-0.002828) | 0.003470 / 0.004328 (-0.000858) | 0.075541 / 0.004250 (0.071291) | 0.039755 / 0.037052 (0.002702) | 0.373431 / 0.258489 (0.114942) | 0.410159 / 0.293841 (0.116318) | 0.031355 / 0.128546 (-0.097192) | 0.011632 / 0.075646 (-0.064014) | 0.325475 / 0.419271 (-0.093797) | 0.042574 / 0.043533 (-0.000958) | 0.373629 / 0.255139 (0.118490) | 0.393921 / 0.283200 (0.110721) | 0.084669 / 0.141683 (-0.057013) | 1.459947 / 1.452155 (0.007792) | 1.529593 / 1.492716 (0.036877) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.189994 / 0.018006 (0.171988) | 0.409091 / 0.000490 (0.408602) | 0.003693 / 0.000200 (0.003493) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024649 / 0.037411 (-0.012762) | 0.097702 / 0.014526 (0.083177) | 0.103650 / 0.176557 (-0.072906) | 0.167141 / 0.737135 (-0.569994) | 0.108460 / 0.296338 (-0.187879) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429544 / 0.215209 (0.214335) | 4.277106 / 2.077655 (2.199451) | 2.018745 / 1.504120 (0.514625) | 1.814782 / 1.541195 (0.273587) | 1.897030 / 1.468490 
(0.428540) | 0.700332 / 4.584777 (-3.884445) | 3.421761 / 3.745712 (-0.323951) | 3.008281 / 5.269862 (-2.261581) | 1.554230 / 4.565676 (-3.011446) | 0.082922 / 0.424275 (-0.341353) | 0.012312 / 0.007607 (0.004705) | 0.527757 / 0.226044 (0.301713) | 5.287450 / 2.268929 (3.018522) | 2.329083 / 55.444624 (-53.115542) | 2.016651 / 6.876477 (-4.859826) | 2.214510 / 2.142072 (0.072437) | 0.807676 / 4.805227 (-3.997551) | 0.151752 / 6.500664 (-6.348912) | 0.066819 / 0.075469 (-0.008651) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239522 / 1.841788 (-0.602266) | 13.923672 / 8.074308 (5.849364) | 14.317394 / 10.191392 (4.126002) | 0.159379 / 0.680424 (-0.521045) | 0.016537 / 0.534201 (-0.517664) | 0.376808 / 0.579283 (-0.202475) | 0.376351 / 0.434364 (-0.058012) | 0.437124 / 0.540337 (-0.103213) | 0.520589 / 1.386936 (-0.866347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006892 / 0.011353 (-0.004461) | 0.004671 / 0.011008 (-0.006337) | 0.075841 / 0.038508 (0.037333) | 0.028713 / 0.023109 (0.005604) | 0.345105 / 0.275898 (0.069207) | 0.380694 / 0.323480 (0.057214) | 0.005155 / 0.007986 (-0.002830) | 0.003379 / 0.004328 (-0.000949) | 0.075134 / 0.004250 (0.070883) | 0.039990 / 0.037052 (0.002938) | 0.345540 / 0.258489 (0.087051) | 0.389913 / 0.293841 (0.096072) | 0.032089 / 0.128546 (-0.096458) | 0.011583 / 0.075646 (-0.064063) | 0.085169 / 0.419271 (-0.334102) | 0.041847 / 0.043533 (-0.001686) | 0.341504 / 0.255139 (0.086365) | 0.367582 / 0.283200 (0.084382) | 0.092684 / 0.141683 (-0.048999) | 1.498647 / 1.452155 (0.046492) | 1.549056 / 1.492716 (0.056339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228643 / 0.018006 (0.210637) | 0.410680 / 0.000490 (0.410191) | 0.000398 / 0.000200 (0.000198) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012057) | 0.101567 / 0.014526 (0.087041) | 0.108340 / 0.176557 (-0.068217) | 0.157804 / 0.737135 (-0.579332) | 0.113985 / 0.296338 (-0.182354) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436427 / 0.215209 (0.221218) | 4.359331 / 2.077655 (2.281676) | 2.047877 / 1.504120 (0.543757) | 1.844242 / 1.541195 (0.303047) | 1.924553 / 1.468490 (0.456063) | 0.695986 / 4.584777 (-3.888791) | 3.435571 / 3.745712 (-0.310141) | 1.905189 / 5.269862 (-3.364673) | 1.198542 / 4.565676 (-3.367134) | 0.083386 / 0.424275 (-0.340889) | 0.012442 / 0.007607 (0.004835) | 0.542562 / 0.226044 (0.316517) | 5.416554 / 2.268929 (3.147625) | 2.499496 / 55.444624 (-52.945128) | 2.160658 / 6.876477 (-4.715819) | 2.210535 / 2.142072 (0.068462) | 0.803324 / 4.805227 (-4.001903) | 0.151735 / 6.500664 (-6.348929) | 0.068392 / 0.075469 (-0.007078) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.319915 / 1.841788 (-0.521873) | 14.176755 / 8.074308 (6.102446) | 14.376366 / 10.191392 (4.184974) | 0.141219 / 0.680424 (-0.539204) | 0.017181 / 0.534201 (-0.517020) | 0.383589 / 0.579283 (-0.195694) | 0.389352 / 0.434364 (-0.045012) | 0.474465 / 0.540337 (-0.065873) | 0.563047 / 1.386936 (-0.823889) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c33e8ce68b5000988bf6b2e4bca27ffaa469acea \"CML watermark\")\n"
] | 2023-03-29T09:50:23 | 2023-03-30T13:14:19 | 2023-03-30T13:07:18 | CONTRIBUTOR | null | There is a description mistake in the annotation of interleave_dataset with "all_exhausted" stopping_strategy.
``` python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
```
According to how the interleaving works, the correct output of `dataset["a"]` is `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24]`, not `[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5680/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5680",
"html_url": "https://github.com/huggingface/datasets/pull/5680",
"diff_url": "https://github.com/huggingface/datasets/pull/5680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5680.patch",
"merged_at": "2023-03-30T13:07:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5679/comments | https://api.github.com/repos/huggingface/datasets/issues/5679/events | https://github.com/huggingface/datasets/issues/5679 | 1,645,184,622 | I_kwDODunzps5iD4Zu | 5,679 | Allow load_dataset to take a working dir for intermediate data | {
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/users/lu-wang-dl/followers",
"following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-dl/orgs",
"repos_url": "https://api.github.com/users/lu-wang-dl/repos",
"events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-dl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! AFAIK a dataset must be present on a local disk to be able to efficiently memory map the datasets Arrow files. What makes you think that it is possible to load from a cloud storage and have good performance ?\r\n\r\nAnyway it's already possible to download_and_prepare a dataset as Arrow files in a cloud storage with:\r\n```python\r\nbuilder = load_dataset_builder(..., cache_dir=\"/temp/dir\")\r\nbuilder.download_and_prepare(\"/cloud_dir\")\r\n```\r\n\r\nbut then \r\n```python\r\nds = builder.as_dataset()\r\n```\r\nwould fail if \"/cloud_dir\" is not a local directory.",
"In my use case, I am trying to mount the S3 bucket as local system with S3FS-FUSE / [goofys](https://github.com/kahing/goofys). I want to use S3 to save the download data and save checkpoint for training for persistent. Setting the s3 location as cache directory is not fast enough. That is why I want to set a work directory for temp data for memory map and only save the final result to s3 cache. ",
"You can try setting `HF_DATASETS_DOWNLOADED_DATASETS_PATH` and `HF_DATASETS_EXTRACTED_DATASETS_PATH` to S3, and `HF_DATASETS_CACHE` to your local disk.\r\n\r\nThis way all your downloaded and extracted data are on your mounted S3, but the datasets Arrow files are on your local disk",
"If we hope to also persist the Arrow files on the mounted S3 but work with the efficiency of local disk, is there any recommended way to do this, other than copying the Arrow files from local disk to S3?"
] | 2023-03-29T07:21:09 | 2023-04-12T22:30:25 | null | NONE | null | ### Feature request
As a user, I can set a working dir for intermediate data creation. The processed files will be moved to the cache dir, like
```
load_dataset(..., working_dir="/temp/dir", cache_dir="/cloud_dir")
```
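A rough sketch of how this can be approximated today with the builder API (the dataset name and both paths below are placeholders): prepare the intermediate files on fast local storage, then write the final Arrow files to the cloud-mounted directory.
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("imdb", cache_dir="/local/tmp")  # fast working dir for downloads/intermediate files
builder.download_and_prepare("/mnt/cloud/datasets/imdb")        # persistent, cloud-mounted location
```
The proposed `working_dir` argument would fold these two steps into a single `load_dataset` call.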
### Motivation
This will help the use case of using datasets with cloud storage as the cache, and it will help boost performance.
### Your contribution
I can provide a PR to fix this if the proposal seems reasonable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5679/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5679/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5678/comments | https://api.github.com/repos/huggingface/datasets/issues/5678/events | https://github.com/huggingface/datasets/issues/5678 | 1,645,018,359 | I_kwDODunzps5iDPz3 | 5,678 | Add support to create a Dataset from spark dataframe | {
"login": "lu-wang-dl",
"id": 38018689,
"node_id": "MDQ6VXNlcjM4MDE4Njg5",
"avatar_url": "https://avatars.githubusercontent.com/u/38018689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lu-wang-dl",
"html_url": "https://github.com/lu-wang-dl",
"followers_url": "https://api.github.com/users/lu-wang-dl/followers",
"following_url": "https://api.github.com/users/lu-wang-dl/following{/other_user}",
"gists_url": "https://api.github.com/users/lu-wang-dl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lu-wang-dl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lu-wang-dl/subscriptions",
"organizations_url": "https://api.github.com/users/lu-wang-dl/orgs",
"repos_url": "https://api.github.com/users/lu-wang-dl/repos",
"events_url": "https://api.github.com/users/lu-wang-dl/events{/privacy}",
"received_events_url": "https://api.github.com/users/lu-wang-dl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"if i read spark Dataframe , got an error on multi-node Spark cluster.\r\nDid the Api (Dataset.from_spark) support Spark cluster, read dataframe and save_to_disk?\r\n\r\nError: \r\n_pickle.PicklingError: Could not serialize object: RuntimeError: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transforma\r\ntion. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.\r\n23/06/16 21:17:20 WARN ExecutorPodsWatchSnapshotSource: Kubernetes client has been closed (this is expected if the application is shutting down.)\r\n\r\n",
"How to perform predictions on Dataset object in Spark with multi-node cluster parallelism?",
"Addressed in #5701"
] | 2023-03-29T04:36:28 | 2023-07-21T14:15:38 | 2023-07-21T14:15:38 | NONE | null | ### Feature request
Add a new API `Dataset.from_spark` to create a Dataset from a Spark DataFrame.
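Until such an API exists, one possible stopgap (a sketch only; it assumes an existing `SparkSession` named `spark`, and `toPandas()` collects the whole DataFrame to the driver, so it does not scale to very large data) is to go through pandas:
```python
from datasets import Dataset

spark_df = spark.read.parquet("s3://bucket/path/")  # placeholder source; `spark` is an existing SparkSession
ds = Dataset.from_pandas(spark_df.toPandas())       # collects everything to the driver first
```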
### Motivation
Spark is a distributed computing framework that can handle large datasets. By supporting loading Spark DataFrames directly into Hugging Face Datasets, we can take advantage of Spark to process the data in parallel.
By providing a seamless integration between these two frameworks, we make it easier for data scientists and developers to work with both Spark and Hugging Face in the same workflow.
### Your contribution
We can discuss about the ideas and I can help preparing a PR for this feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5678/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5678/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5677/comments | https://api.github.com/repos/huggingface/datasets/issues/5677/events | https://github.com/huggingface/datasets/issues/5677 | 1,644,828,606 | I_kwDODunzps5iChe- | 5,677 | Dataset.map() crashes when any column contains more than 1000 empty dictionaries | {
"login": "destigres",
"id": 7139344,
"node_id": "MDQ6VXNlcjcxMzkzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/destigres",
"html_url": "https://github.com/destigres",
"followers_url": "https://api.github.com/users/destigres/followers",
"following_url": "https://api.github.com/users/destigres/following{/other_user}",
"gists_url": "https://api.github.com/users/destigres/gists{/gist_id}",
"starred_url": "https://api.github.com/users/destigres/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/destigres/subscriptions",
"organizations_url": "https://api.github.com/users/destigres/orgs",
"repos_url": "https://api.github.com/users/destigres/repos",
"events_url": "https://api.github.com/users/destigres/events{/privacy}",
"received_events_url": "https://api.github.com/users/destigres/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2023-03-29T00:01:31 | 2023-07-07T14:01:14 | 2023-07-07T14:01:14 | NONE | null | ### Describe the bug
`Dataset.map()` crashes any time any column contains more than `writer_batch_size` (default 1000) empty dictionaries, regardless of whether the column is being operated on. The error does not occur if the dictionaries are non-empty.
### Steps to reproduce the bug
Example:
```
import datasets
def add_one(example):
example["col2"] += 1
return example
n = 1001 # crashes
# n = 999 # works
ds = datasets.Dataset.from_dict({"col1": [{}] * n, "col2": [1] * n})
ds = ds.map(add_one, writer_batch_size=1000)
```
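Based on the threshold described above (the crash only appears once the number of empty dictionaries exceeds `writer_batch_size`), one untested workaround for small datasets is to keep `writer_batch_size` above the number of rows:
```
# Untested sketch: keep the writer batch larger than the whole dataset so the
# all-empty-dict column is never flushed mid-stream.
ds = ds.map(add_one, writer_batch_size=len(ds) + 1)
```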
### Expected behavior
Above code should not crash
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5677/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5675/comments | https://api.github.com/repos/huggingface/datasets/issues/5675/events | https://github.com/huggingface/datasets/issues/5675 | 1,641,763,478 | I_kwDODunzps5h21KW | 5,675 | Filter datasets by language code | {
"login": "named-entity",
"id": 5658496,
"node_id": "MDQ6VXNlcjU2NTg0OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5658496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/named-entity",
"html_url": "https://github.com/named-entity",
"followers_url": "https://api.github.com/users/named-entity/followers",
"following_url": "https://api.github.com/users/named-entity/following{/other_user}",
"gists_url": "https://api.github.com/users/named-entity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/named-entity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/named-entity/subscriptions",
"organizations_url": "https://api.github.com/users/named-entity/orgs",
"repos_url": "https://api.github.com/users/named-entity/repos",
"events_url": "https://api.github.com/users/named-entity/events{/privacy}",
"received_events_url": "https://api.github.com/users/named-entity/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missing from the code=>language mapping) would be much more convenient!",
"Hi! I've opened a PR to make these languages searchable on the Hub.",
"Thanks @mariosasko!\r\nDo you think it is possible to turn this into a more scalable pipeline? Such as:\r\n1. Looping through all the datasets on the hub and collecting the set of all their language codes;\r\n2. Selecting the codes not covered yet in `Language.ts`\r\n3. Looking up their codes at https://iso639-3.sil.org/code_tables/639/data\r\n4. Adding all the newly found language codes to `Language.ts`",
"@avidale This has been discussed in https://github.com/huggingface/datasets/issues/4881, so also feel free to share your opinion there."
] | 2023-03-27T09:42:28 | 2023-03-30T08:08:15 | 2023-03-30T08:08:15 | NONE | null | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged with an ISO language code are not accessible through this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag but it is not included in the Languages search form.
I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5675/timeline | null | completed | null | null | false |