url: stringlengths (61 to 61)
repository_url: stringclasses (1 value)
labels_url: stringlengths (75 to 75)
comments_url: stringlengths (70 to 70)
events_url: stringlengths (68 to 68)
html_url: stringlengths (49 to 51)
id: int64 (2.56B to 2.68B)
node_id: stringlengths (18 to 19)
number: int64 (7.19k to 7.3k)
title: stringlengths (7 to 121)
user: dict
labels: listlengths (0 to 1)
state: stringclasses (2 values)
locked: bool (1 class)
assignee: dict
assignees: listlengths (0 to 1)
milestone: null
comments: sequencelengths (0 to 16)
created_at: timestamp[ms]
updated_at: timestamp[ms]
closed_at: timestamp[ms]
author_association: stringclasses (4 values)
active_lock_reason: null
body: stringlengths (17 to 47.9k)
closed_by: dict
reactions: dict
timeline_url: stringlengths (70 to 70)
performed_via_github_app: null
state_reason: stringclasses (2 values)
draft: bool (1 class)
pull_request: dict
is_pull_request: bool (2 classes)
https://api.github.com/repos/huggingface/datasets/issues/7297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7297/comments
https://api.github.com/repos/huggingface/datasets/issues/7297/events
https://github.com/huggingface/datasets/issues/7297
2,683,977,430
I_kwDODunzps6f-j7W
7,297
wrong return type for `IterableDataset.shard()`
{ "login": "ysngshn", "id": 47225236, "node_id": "MDQ6VXNlcjQ3MjI1MjM2", "avatar_url": "https://avatars.githubusercontent.com/u/47225236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ysngshn", "html_url": "https://github.com/ysngshn", "followers_url": "https://api.github.com/users/ysngshn/followers", "following_url": "https://api.github.com/users/ysngshn/following{/other_user}", "gists_url": "https://api.github.com/users/ysngshn/gists{/gist_id}", "starred_url": "https://api.github.com/users/ysngshn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysngshn/subscriptions", "organizations_url": "https://api.github.com/users/ysngshn/orgs", "repos_url": "https://api.github.com/users/ysngshn/repos", "events_url": "https://api.github.com/users/ysngshn/events{/privacy}", "received_events_url": "https://api.github.com/users/ysngshn/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-22T17:25:46
2024-11-22T17:25:46
null
NONE
null
### Describe the bug

`IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy.

### Steps to reproduce the bug

look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)?

### Expected behavior

Correct return type as `"IterableDataset"`

### Environment info

datasets==3.1.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7297/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7296/comments
https://api.github.com/repos/huggingface/datasets/issues/7296/events
https://github.com/huggingface/datasets/pull/7296
2,675,573,974
PR_kwDODunzps6ChJIJ
7,296
Remove upper version limit of fsspec[http]
{ "login": "cyyever", "id": 17618148, "node_id": "MDQ6VXNlcjE3NjE4MTQ4", "avatar_url": "https://avatars.githubusercontent.com/u/17618148?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyyever", "html_url": "https://github.com/cyyever", "followers_url": "https://api.github.com/users/cyyever/followers", "following_url": "https://api.github.com/users/cyyever/following{/other_user}", "gists_url": "https://api.github.com/users/cyyever/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyyever/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyyever/subscriptions", "organizations_url": "https://api.github.com/users/cyyever/orgs", "repos_url": "https://api.github.com/users/cyyever/repos", "events_url": "https://api.github.com/users/cyyever/events{/privacy}", "received_events_url": "https://api.github.com/users/cyyever/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-20T11:29:16
2024-11-20T11:29:16
null
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7296/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7296", "html_url": "https://github.com/huggingface/datasets/pull/7296", "diff_url": "https://github.com/huggingface/datasets/pull/7296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7296.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7295/comments
https://api.github.com/repos/huggingface/datasets/issues/7295/events
https://github.com/huggingface/datasets/issues/7295
2,672,003,384
I_kwDODunzps6fQ4k4
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
{ "login": "casper-hansen", "id": 27340033, "node_id": "MDQ6VXNlcjI3MzQwMDMz", "avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/casper-hansen", "html_url": "https://github.com/casper-hansen", "followers_url": "https://api.github.com/users/casper-hansen/followers", "following_url": "https://api.github.com/users/casper-hansen/following{/other_user}", "gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}", "starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions", "organizations_url": "https://api.github.com/users/casper-hansen/orgs", "repos_url": "https://api.github.com/users/casper-hansen/repos", "events_url": "https://api.github.com/users/casper-hansen/events{/privacy}", "received_events_url": "https://api.github.com/users/casper-hansen/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-19T12:23:36
2024-11-19T13:01:53
null
NONE
null
### Describe the bug

Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions.

Analysis of what's happening:

1. `datasets` passes the `client_kwargs` through `fsspec`
2. `fsspec` passes the `client_kwargs` through `s3fs`
3. `s3fs` passes the `client_kwargs` to `aiobotocore` which uses `aiohttp`

```
s3creator = self.session.create_client(
    "s3", config=conf, **init_kwargs, **client_kwargs
)
```

4. The `session` tries to create an `aiohttp` session but the `**kwargs` are not just kept as unfolded `**kwargs` but passed in as individual variables (`requote_redirect_url` and `trust_env`).

Error:

```
Traceback (most recent call last):
  File "/Users/cxrh/Documents/GitHub/nlp_foundation/nlp_train/test.py", line 14, in <module>
    batch = next(iter(ds))
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 1353, in __iter__
    for key, example in ex_iterable:
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/iterable_dataset.py", line 255, in __iter__
    for key, pa_table in self.generate_tables_fn(**self.kwargs):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py", line 78, in _generate_tables
    for file_idx, file in enumerate(itertools.chain.from_iterable(files)):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 840, in __iter__
    yield from self.generator(*self.args, **self.kwargs)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 921, in _iter_from_urlpaths
    elif xisdir(urlpath, download_config=download_config):
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py", line 305, in xisdir
    return fs.isdir(inner_path)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/spec.py", line 721, in isdir
    return self.info(path)["type"] == "directory"
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/archive.py", line 38, in info
    self._get_dirs()
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/datasets/filesystems/compression.py", line 64, in _get_dirs
    f = {**self.file.fs.info(self.file.path), "name": self.uncompressed_name}
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 118, in wrapper
    return sync(self.loop, func, *args, **kwargs)
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 103, in sync
    raise return_result
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/fsspec/asyn.py", line 56, in _runner
    result[0] = await coro
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 1302, in _info
    out = await self._call_s3(
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 341, in _call_s3
    await self.set_session()
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/s3fs/core.py", line 524, in set_session
    s3creator = self.session.create_client(
  File "/Users/cxrh/miniconda3/envs/s3_data_loader/lib/python3.10/site-packages/aiobotocore/session.py", line 114, in create_client
    return ClientCreatorContext(self._create_client(*args, **kwargs))
TypeError: AioSession._create_client() got an unexpected keyword argument 'requote_redirect_url'
```

### Steps to reproduce the bug

1. Install the necessary libraries, datasets having a requirement for being at least 2.19.0:

```
pip install s3fs fsspec aiohttp aiobotocore botocore 'datasets>=2.19.0'
```

2. Run this code:

```
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="s3://your_path/*.jsonl.gz",
    streaming=True,
    split="train",
)
batch = next(iter(ds))
print(batch)
```

3. You get the `unexpected keyword argument 'requote_redirect_url'` error.

### Expected behavior

The datasets is able to load a batch from the dataset stored on S3, without triggering this `requote_redirect_url` error.

Fix: I could fix this by directly removing the `requote_redirect_url` and `trust_env` - then it loads properly.

<img width="1127" alt="image" src="https://github.com/user-attachments/assets/4c40efa9-8787-4919-b613-e4908c3d1ab2">

### Environment info

- `datasets` version: 3.1.0
- Platform: macOS-15.1-arm64-arm-64bit
- Python version: 3.10.15
- `huggingface_hub` version: 0.26.2
- PyArrow version: 18.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.9.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7295/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7294/comments
https://api.github.com/repos/huggingface/datasets/issues/7294/events
https://github.com/huggingface/datasets/pull/7294
2,668,663,130
PR_kwDODunzps6CQKTy
7,294
Remove `aiohttp` from direct dependencies
{ "login": "akx", "id": 58669, "node_id": "MDQ6VXNlcjU4NjY5", "avatar_url": "https://avatars.githubusercontent.com/u/58669?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akx", "html_url": "https://github.com/akx", "followers_url": "https://api.github.com/users/akx/followers", "following_url": "https://api.github.com/users/akx/following{/other_user}", "gists_url": "https://api.github.com/users/akx/gists{/gist_id}", "starred_url": "https://api.github.com/users/akx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akx/subscriptions", "organizations_url": "https://api.github.com/users/akx/orgs", "repos_url": "https://api.github.com/users/akx/repos", "events_url": "https://api.github.com/users/akx/events{/privacy}", "received_events_url": "https://api.github.com/users/akx/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-18T14:00:59
2024-11-18T14:00:59
null
NONE
null
The dependency is only used for catching an exception from other code. That can be done with an import guard.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7294/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7294", "html_url": "https://github.com/huggingface/datasets/pull/7294", "diff_url": "https://github.com/huggingface/datasets/pull/7294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7294.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7293/comments
https://api.github.com/repos/huggingface/datasets/issues/7293/events
https://github.com/huggingface/datasets/pull/7293
2,664,592,054
PR_kwDODunzps6CIjS-
7,293
Updated inconsistent output in documentation examples for `ClassLabel`
{ "login": "sergiopaniego", "id": 17179696, "node_id": "MDQ6VXNlcjE3MTc5Njk2", "avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sergiopaniego", "html_url": "https://github.com/sergiopaniego", "followers_url": "https://api.github.com/users/sergiopaniego/followers", "following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}", "gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}", "starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions", "organizations_url": "https://api.github.com/users/sergiopaniego/orgs", "repos_url": "https://api.github.com/users/sergiopaniego/repos", "events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}", "received_events_url": "https://api.github.com/users/sergiopaniego/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Updated! 😄 ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7293). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "@lhoestq, can you help with this failing test please? 🙏 " ]
2024-11-16T16:20:57
2024-11-18T18:34:37
null
NONE
null
fix #7129 @stevhliu
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7293/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7293", "html_url": "https://github.com/huggingface/datasets/pull/7293", "diff_url": "https://github.com/huggingface/datasets/pull/7293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7293.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7292/comments
https://api.github.com/repos/huggingface/datasets/issues/7292/events
https://github.com/huggingface/datasets/issues/7292
2,664,250,855
I_kwDODunzps6ezT3n
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
{ "login": "xnuohz", "id": 17878022, "node_id": "MDQ6VXNlcjE3ODc4MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xnuohz", "html_url": "https://github.com/xnuohz", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "repos_url": "https://api.github.com/users/xnuohz/repos", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! If the dataset owner uses `push_to_hub()` instead of `save_to_disk()` and upload the local files it will fix the issue.\r\nRight now `datasets` sees the train/test/valid pickle files but they are not supported file formats.", "Alternatively you can load the arrow file instead:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('OpenMol/PubChemSFT', data_files='stage1/*.arrow')\r\n```", "Thanks! I'll have a try." ]
2024-11-16T11:54:31
2024-11-19T00:53:00
2024-11-19T00:52:59
NONE
null
### Describe the bug

Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT

### Steps to reproduce the bug

```
from datasets import load_dataset
dataset = load_dataset('OpenMol/PubChemSFT')
```

### Expected behavior

```
---------------------------------------------------------------------------
DataFilesNotFoundError                    Traceback (most recent call last)
Cell In[7], line 2
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('OpenMol/PubChemSFT')

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2587, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
   2582 verification_mode = VerificationMode(
   2583     (verification_mode or VerificationMode.BASIC_CHECKS) if not save_infos else VerificationMode.ALL_CHECKS
   2584 )
   2586 # Create a dataset builder
-> 2587 builder_instance = load_dataset_builder(
   2588     path=path,
   2589     name=name,
   2590     data_dir=data_dir,
   2591     data_files=data_files,
   2592     cache_dir=cache_dir,
   2593     features=features,
   2594     download_config=download_config,
   2595     download_mode=download_mode,
   2596     revision=revision,
   2597     token=token,
   2598     storage_options=storage_options,
   2599     trust_remote_code=trust_remote_code,
   2600     _require_default_config_name=name is None,
   2601     **config_kwargs,
   2602 )
   2604 # Return iterable dataset in case of streaming
   2605 if streaming:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:2259, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)
   2257 download_config = download_config.copy() if download_config else DownloadConfig()
   2258 download_config.storage_options.update(storage_options)
-> 2259 dataset_module = dataset_module_factory(
   2260     path,
   2261     revision=revision,
   2262     download_config=download_config,
   2263     download_mode=download_mode,
   2264     data_dir=data_dir,
   2265     data_files=data_files,
   2266     cache_dir=cache_dir,
   2267     trust_remote_code=trust_remote_code,
   2268     _require_default_config_name=_require_default_config_name,
   2269     _require_custom_configs=bool(config_kwargs),
   2270 )
   2271 # Get dataset builder class from the processing script
   2272 builder_kwargs = dataset_module.builder_kwargs

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1904, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1902     raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
   1903 if isinstance(e1, (DataFilesNotFoundError, DatasetNotFoundError, EmptyDatasetError)):
-> 1904     raise e1 from None
   1905 if isinstance(e1, FileNotFoundError):
   1906     raise FileNotFoundError(
   1907         f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
   1908         f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
   1909     ) from None

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1885, in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)
   1876     return HubDatasetModuleFactoryWithScript(
   1877         path,
   1878         revision=revision,
   (...)
   1882         trust_remote_code=trust_remote_code,
   1883     ).get_module()
   1884 else:
-> 1885     return HubDatasetModuleFactoryWithoutScript(
   1886         path,
   1887         revision=revision,
   1888         data_dir=data_dir,
   1889         data_files=data_files,
   1890         download_config=download_config,
   1891         download_mode=download_mode,
   1892     ).get_module()
   1893 except Exception as e1:
   1894     # All the attempts failed, before raising the error we should check if the module is already cached
   1895     try:

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:1270, in HubDatasetModuleFactoryWithoutScript.get_module(self)
   1263 patterns = get_data_patterns(base_path, download_config=self.download_config)
   1264 data_files = DataFilesDict.from_patterns(
   1265     patterns,
   1266     base_path=base_path,
   1267     allowed_extensions=ALL_ALLOWED_EXTENSIONS,
   1268     download_config=self.download_config,
   1269 )
-> 1270 module_name, default_builder_kwargs = infer_module_for_data_files(
   1271     data_files=data_files,
   1272     path=self.name,
   1273     download_config=self.download_config,
   1274 )
   1275 data_files = data_files.filter_extensions(_MODULE_TO_EXTENSIONS[module_name])
   1276 # Collect metadata files if the module supports them

File ~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597, in infer_module_for_data_files(data_files, path, download_config)
[595](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:595) raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") [596](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:596) if not module_name: --> [597](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:597) raise DataFilesNotFoundError("No (supported) data files found" + (f" in {path}" if path else "")) [598](https://file+.vscode-resource.vscode-cdn.net/home/ubuntu/Projects/notebook/~/Softwares/anaconda3/envs/pyg-dev/lib/python3.9/site-packages/datasets/load.py:598) return module_name, default_builder_kwargs DataFilesNotFoundError: No (supported) data files found in OpenMol/PubChemSFT ``` ### Environment info ``` - `datasets` version: 3.1.0 - Platform: Linux-5.15.0-125-generic-x86_64-with-glibc2.31 - Python version: 3.9.18 - `huggingface_hub` version: 0.25.2 - PyArrow version: 18.0.0 - Pandas version: 2.0.3 - `fsspec` version: 2023.9.2 ```
{ "login": "xnuohz", "id": 17878022, "node_id": "MDQ6VXNlcjE3ODc4MDIy", "avatar_url": "https://avatars.githubusercontent.com/u/17878022?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xnuohz", "html_url": "https://github.com/xnuohz", "followers_url": "https://api.github.com/users/xnuohz/followers", "following_url": "https://api.github.com/users/xnuohz/following{/other_user}", "gists_url": "https://api.github.com/users/xnuohz/gists{/gist_id}", "starred_url": "https://api.github.com/users/xnuohz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xnuohz/subscriptions", "organizations_url": "https://api.github.com/users/xnuohz/orgs", "repos_url": "https://api.github.com/users/xnuohz/repos", "events_url": "https://api.github.com/users/xnuohz/events{/privacy}", "received_events_url": "https://api.github.com/users/xnuohz/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7292/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7291/comments
https://api.github.com/repos/huggingface/datasets/issues/7291/events
https://github.com/huggingface/datasets/issues/7291
2,662,244,643
I_kwDODunzps6erqEj
7,291
Why doesn't return_tensors='pt' work?
{ "login": "bw-wang19", "id": 86752851, "node_id": "MDQ6VXNlcjg2NzUyODUx", "avatar_url": "https://avatars.githubusercontent.com/u/86752851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bw-wang19", "html_url": "https://github.com/bw-wang19", "followers_url": "https://api.github.com/users/bw-wang19/followers", "following_url": "https://api.github.com/users/bw-wang19/following{/other_user}", "gists_url": "https://api.github.com/users/bw-wang19/gists{/gist_id}", "starred_url": "https://api.github.com/users/bw-wang19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bw-wang19/subscriptions", "organizations_url": "https://api.github.com/users/bw-wang19/orgs", "repos_url": "https://api.github.com/users/bw-wang19/repos", "events_url": "https://api.github.com/users/bw-wang19/events{/privacy}", "received_events_url": "https://api.github.com/users/bw-wang19/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`", "> Hi ! `datasets` uses Arrow as storage backend which is agnostic to deep learning frameworks like torch. If you want to get torch tensors back, you need to do `dataset = dataset.with_format(\"torch\")`\r\n\r\nIt does work! Thanks for your suggestion!" ]
2024-11-15T15:01:23
2024-11-18T13:47:08
null
NONE
null
### Describe the bug I tried to add input_ids to a dataset with map(), and I used return_tensors='pt', but why did I get the result back with the type List? ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c) ### Steps to reproduce the bug ![image](https://github.com/user-attachments/assets/5d504d4c-22c7-4742-99a1-9cab78739b17) ### Expected behavior Sorry for this silly question, I'm a noob at using this tool. But I think it should return a tensor value, as I have used that option? When I tokenize only one sentence using tokenized_input=tokenizer(input, return_tensors='pt'), it does return the tensor type. Why doesn't it work in map()? ### Environment info transformers>=4.41.2,<=4.45.0 datasets>=2.16.0,<=2.21.0 accelerate>=0.30.1,<=0.34.2 peft>=0.11.1,<=0.12.0 trl>=0.8.6,<=0.9.6 gradio>=4.0.0 pandas>=2.0.0 scipy einops sentencepiece tiktoken protobuf uvicorn pydantic fastapi sse-starlette matplotlib>=3.7.0 fire packaging pyyaml numpy<2.0.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7291/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7290
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7290/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7290/comments
https://api.github.com/repos/huggingface/datasets/issues/7290/events
https://github.com/huggingface/datasets/issues/7290
2,657,620,816
I_kwDODunzps6eaBNQ
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
{ "login": "JohannesAck", "id": 22243463, "node_id": "MDQ6VXNlcjIyMjQzNDYz", "avatar_url": "https://avatars.githubusercontent.com/u/22243463?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohannesAck", "html_url": "https://github.com/JohannesAck", "followers_url": "https://api.github.com/users/JohannesAck/followers", "following_url": "https://api.github.com/users/JohannesAck/following{/other_user}", "gists_url": "https://api.github.com/users/JohannesAck/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohannesAck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohannesAck/subscriptions", "organizations_url": "https://api.github.com/users/JohannesAck/orgs", "repos_url": "https://api.github.com/users/JohannesAck/repos", "events_url": "https://api.github.com/users/JohannesAck/events{/privacy}", "received_events_url": "https://api.github.com/users/JohannesAck/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-14T05:25:13
2024-11-14T05:25:13
null
NONE
null
### Describe the bug Hi, I encountered a small issue when saving datasets that can make saving take up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than when using `num_proc=1`. The documentation mentions that "Multiprocessing is disabled by default.", but there is no explanation of how to enable it. ### Steps to reproduce the bug ``` import numpy as np from datasets import Dataset n_samples = int(4e6) n_tokens_sample = 100 data_dict = { 'tokens' : np.random.randint(0, 100, (n_samples, n_tokens_sample)), } dataset = Dataset.from_dict(data_dict) dataset.save_to_disk('test_dataset', num_proc=1) dataset.save_to_disk('test_dataset', num_proc=4) dataset.save_to_disk('test_dataset', num_proc=8) ``` This results in: ``` >>> dataset.save_to_disk('test_dataset', num_proc=1) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [00:17<00:00, 228075.15 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=4) Saving the dataset (7/7 shards): 100%|██████████████| 4000000/4000000 [01:49<00:00, 36583.75 examples/s] >>> dataset.save_to_disk('test_dataset', num_proc=8) Saving the dataset (8/8 shards): 100%|██████████████| 4000000/4000000 [02:11<00:00, 30518.43 examples/s] ``` With larger datasets it can take hours, but I didn't benchmark that for this bug report. ### Expected behavior I would expect using `num_proc>1` to be faster, not slower, than `num_proc=1`. ### Environment info - `datasets` version: 3.1.0 - Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.26.2 - PyArrow version: 18.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7290/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7290/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7289
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7289/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7289/comments
https://api.github.com/repos/huggingface/datasets/issues/7289/events
https://github.com/huggingface/datasets/issues/7289
2,648,019,507
I_kwDODunzps6d1ZIz
7,289
Dataset viewer displays wrong statistics
{ "login": "speedcell4", "id": 3585459, "node_id": "MDQ6VXNlcjM1ODU0NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/speedcell4", "html_url": "https://github.com/speedcell4", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "repos_url": "https://api.github.com/users/speedcell4/repos", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "i think this issue is more for https://github.com/huggingface/dataset-viewer" ]
2024-11-11T03:29:27
2024-11-13T13:02:25
2024-11-13T13:02:25
NONE
null
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2` with 94 different classes in total, but the viewer says there are only 83 values. This issue only arises in the `train` split; for the `test` and `dev` splits, the viewer reports the correct total of 94 values. <img width="177" alt="image" src="https://github.com/user-attachments/assets/78d76ef2-fe0e-4fa3-85e0-fb2552813d1c"> ### Steps to reproduce the bug ```python3 from datasets import load_dataset ds = load_dataset('speedcell4/opus-unigram2').unique('lang2') for key, lang2 in ds.items(): print(key, len(lang2)) ``` This script returns the following, showing that the `train` split has 94 values in the `lang2` column. ``` train 94 dev 94 test 94 zero 5 ``` ### Expected behavior 94 in the viewer. ### Environment info Collecting environment information... PyTorch version: 2.4.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: CentOS Linux release 8.2.2004 (Core) (x86_64) GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5) Clang version: Could not collect CMake version: version 3.11.4 Libc version: glibc-2.28 Python version: 3.9.20 (main, Oct 3 2024, 07:27:41) [GCC 11.2.0] (64-bit runtime) Python platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.28 Is CUDA available: True CUDA runtime version: 12.2.140 CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA A100-SXM4-40GB GPU 1: NVIDIA A100-SXM4-40GB GPU 2: NVIDIA A100-SXM4-40GB GPU 3: NVIDIA A100-SXM4-40GB GPU 4: NVIDIA A100-SXM4-40GB GPU 5: NVIDIA A100-SXM4-40GB GPU 6: NVIDIA A100-SXM4-40GB GPU 7: NVIDIA A100-SXM4-40GB Nvidia driver version: 525.85.05 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) 
list: 0-63 Thread(s) per core: 1 Core(s) per socket: 32 Socket(s): 2 NUMA node(s): 4 Vendor ID: AuthenticAMD CPU family: 23 Model: 49 Model name: AMD EPYC 7542 32-Core Processor Stepping: 0 CPU MHz: 3389.114 BogoMIPS: 5789.40 Virtualization: AMD-V L1d cache: 32K L1i cache: 32K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-15 NUMA node1 CPU(s): 16-31 NUMA node2 CPU(s): 32-47 NUMA node3 CPU(s): 48-63 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif umip rdpid overflow_recov succor smca Versions of relevant libraries: [pip3] numpy==1.26.4 [pip3] torch==2.4.1+cu121 [pip3] torchaudio==2.4.1+cu121 [pip3] torchdevice==0.1.1 [pip3] torchglyph==0.3.2 [pip3] torchmetrics==1.5.0 [pip3] torchrua==0.5.1 [pip3] torchvision==0.19.1+cu121 [pip3] triton==3.0.0 [pip3] datasets==3.0.1 [conda] numpy 1.26.4 pypi_0 pypi [conda] torch 2.4.1+cu121 pypi_0 pypi [conda] torchaudio 2.4.1+cu121 pypi_0 pypi [conda] torchdevice 0.1.1 pypi_0 pypi [conda] torchglyph 0.3.2 pypi_0 pypi [conda] torchmetrics 1.5.0 pypi_0 pypi [conda] torchrua 0.5.1 pypi_0 pypi [conda] torchvision 0.19.1+cu121 pypi_0 pypi [conda] triton 3.0.0 pypi_0 pypi
{ "login": "speedcell4", "id": 3585459, "node_id": "MDQ6VXNlcjM1ODU0NTk=", "avatar_url": "https://avatars.githubusercontent.com/u/3585459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/speedcell4", "html_url": "https://github.com/speedcell4", "followers_url": "https://api.github.com/users/speedcell4/followers", "following_url": "https://api.github.com/users/speedcell4/following{/other_user}", "gists_url": "https://api.github.com/users/speedcell4/gists{/gist_id}", "starred_url": "https://api.github.com/users/speedcell4/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/speedcell4/subscriptions", "organizations_url": "https://api.github.com/users/speedcell4/orgs", "repos_url": "https://api.github.com/users/speedcell4/repos", "events_url": "https://api.github.com/users/speedcell4/events{/privacy}", "received_events_url": "https://api.github.com/users/speedcell4/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7289/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7289/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7288/comments
https://api.github.com/repos/huggingface/datasets/issues/7288/events
https://github.com/huggingface/datasets/pull/7288
2,647,052,280
PR_kwDODunzps6BbIpz
7,288
Release v3.1.1
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-11-10T09:38:15
2024-11-10T09:38:48
2024-11-10T09:38:48
CONTRIBUTOR
null
null
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7288/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7288", "html_url": "https://github.com/huggingface/datasets/pull/7288", "diff_url": "https://github.com/huggingface/datasets/pull/7288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7288.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7287/comments
https://api.github.com/repos/huggingface/datasets/issues/7287/events
https://github.com/huggingface/datasets/issues/7287
2,646,958,393
I_kwDODunzps6dxWE5
7,287
Support for identifier-based automated split construction
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! You can already configure the README.md to have multiple sets of splits, e.g.\r\n\r\n```yaml\r\nconfigs:\r\n- config_name: my_first_set_of_split\r\n data_files:\r\n - split: train\r\n path: *.csv\r\n- config_name: my_second_set_of_split\r\n data_files:\r\n - split: train\r\n path: train-*.csv\r\n - split: test\r\n path: test-*.csv\r\n```", "Hi - I had something slightly different in mind:\r\n\r\nCurrently the yaml splits specified like this only allow specifying which filenames to pass to each split.\r\nBut what if I have a situation where I know which individual *training examples* I want to put in each split.\r\n\r\nI could build split-specific files, however for large datasets with overlapping (e.g. multiple sets of) splits this could result in significant duplication of data.\r\n\r\nI can see that this could actually be very much intended (i.e. to discourage overlapping splits), but wondered whether some support for handling splits based on individual identifiers is something that could be considered. ", "This is not supported right now :/ Though you can load the data in two steps like this\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nfull_dataset = load_dataset(\"username/dataset\", split=\"train\")\r\nmy_first_set_indices = load_dataset(\"username/dataset\", \"my_first_set_of_split\", split=\"train\")\r\n\r\nmy_first_set = full_dataset.select(my_first_set_indices[\"indices\"])\r\n```\r\n\r\nyou can create such a dataset by adapting this code for example\r\n```python\r\n# upload the full dataset\r\nfull_dataset.push_to_hub(\"username/dataset\")\r\n# then upload the indices for each set\r\nDatasetDict({\r\n \"train\": Dataset.from_dict({\"indices\": [0, 1, 2, 3]}),\r\n \"test\": Dataset.from_dict({\"indices\": [4, 5]}),\r\n}).push_to_hub(\"username/dataset\", \"my_first_set_of_split\")" ]
2024-11-10T07:45:19
2024-11-19T14:37:02
null
CONTRIBUTOR
null
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)) It would seem to be pretty useful to also allow splits to be based on identifiers of individual examples This could be configured like {"split_name": {"column_name": [column values in split]}} (This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user). I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub? ### Motivation The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits. This could all be configured via the README.md yaml configs ### Your contribution May be able to contribute if it seems like a good idea
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7287/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7286/comments
https://api.github.com/repos/huggingface/datasets/issues/7286/events
https://github.com/huggingface/datasets/issues/7286
2,645,350,151
I_kwDODunzps6drNcH
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
{ "login": "unography", "id": 5240449, "node_id": "MDQ6VXNlcjUyNDA0NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unography", "html_url": "https://github.com/unography", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "organizations_url": "https://api.github.com/users/unography/orgs", "repos_url": "https://api.github.com/users/unography/repos", "events_url": "https://api.github.com/users/unography/events{/privacy}", "received_events_url": "https://api.github.com/users/unography/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[]
2024-11-08T23:21:40
2024-11-09T16:14:37
2024-11-09T16:14:37
NONE
null
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param when loading a dataset from disk, but I can't find it anywhere in the documentation or code ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
{ "login": "unography", "id": 5240449, "node_id": "MDQ6VXNlcjUyNDA0NDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5240449?v=4", "gravatar_id": "", "url": "https://api.github.com/users/unography", "html_url": "https://github.com/unography", "followers_url": "https://api.github.com/users/unography/followers", "following_url": "https://api.github.com/users/unography/following{/other_user}", "gists_url": "https://api.github.com/users/unography/gists{/gist_id}", "starred_url": "https://api.github.com/users/unography/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/unography/subscriptions", "organizations_url": "https://api.github.com/users/unography/orgs", "repos_url": "https://api.github.com/users/unography/repos", "events_url": "https://api.github.com/users/unography/events{/privacy}", "received_events_url": "https://api.github.com/users/unography/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7286/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7285/comments
https://api.github.com/repos/huggingface/datasets/issues/7285/events
https://github.com/huggingface/datasets/pull/7285
2,644,488,598
PR_kwDODunzps6BV3Gu
7,285
Release v3.1.0
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-11-08T16:17:58
2024-11-08T16:18:05
2024-11-08T16:18:05
CONTRIBUTOR
null
null
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7285/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7285", "html_url": "https://github.com/huggingface/datasets/pull/7285", "diff_url": "https://github.com/huggingface/datasets/pull/7285.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7285.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7284/comments
https://api.github.com/repos/huggingface/datasets/issues/7284/events
https://github.com/huggingface/datasets/pull/7284
2,644,302,386
PR_kwDODunzps6BVUSh
7,284
support for custom feature encoding/decoding
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7284). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-08T15:04:08
2024-11-21T16:09:47
2024-11-21T16:09:47
CONTRIBUTOR
null
Fix for https://github.com/huggingface/datasets/issues/7220 as suggested in the discussion, in preference to #7221 (my only concern is the effect on type checking with custom feature types that aren't covered by `FeatureType`)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7284/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7284", "html_url": "https://github.com/huggingface/datasets/pull/7284", "diff_url": "https://github.com/huggingface/datasets/pull/7284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7284.patch", "merged_at": "2024-11-21T16:09:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/7283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7283/comments
https://api.github.com/repos/huggingface/datasets/issues/7283/events
https://github.com/huggingface/datasets/pull/7283
2,642,537,708
PR_kwDODunzps6BQUgH
7,283
Allow for variation in metadata file names as per issue #7123
{ "login": "egrace479", "id": 38985481, "node_id": "MDQ6VXNlcjM4OTg1NDgx", "avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4", "gravatar_id": "", "url": "https://api.github.com/users/egrace479", "html_url": "https://github.com/egrace479", "followers_url": "https://api.github.com/users/egrace479/followers", "following_url": "https://api.github.com/users/egrace479/following{/other_user}", "gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}", "starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/egrace479/subscriptions", "organizations_url": "https://api.github.com/users/egrace479/orgs", "repos_url": "https://api.github.com/users/egrace479/repos", "events_url": "https://api.github.com/users/egrace479/events{/privacy}", "received_events_url": "https://api.github.com/users/egrace479/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-08T00:44:47
2024-11-08T00:44:47
null
NONE
null
Allow metadata files to have an identifying prefix. Specifically, files named with a `-metadata.csv` or `_metadata.csv` suffix are now recognized as metadata files for the purposes of the dataset viewer functionality. Resolves #7123.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7283/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7283", "html_url": "https://github.com/huggingface/datasets/pull/7283", "diff_url": "https://github.com/huggingface/datasets/pull/7283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7283.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7282/comments
https://api.github.com/repos/huggingface/datasets/issues/7282/events
https://github.com/huggingface/datasets/issues/7282
2,642,075,491
I_kwDODunzps6det9j
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
{ "login": "meg-huggingface", "id": 90473723, "node_id": "MDQ6VXNlcjkwNDczNzIz", "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "gravatar_id": "", "url": "https://api.github.com/users/meg-huggingface", "html_url": "https://github.com/meg-huggingface", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-07T20:15:01
2024-11-07T20:15:42
null
CONTRIBUTOR
null
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Here is the Traceback: ``` Traceback (most recent call last): File "/home/user/app/app.py", line 12, in <module> ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True) File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 2154, in load_dataset builder_instance.download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1018, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/usr/local/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 68, in verify_splits raise ExpectedMoreSplitsError(str(set(expected_splits) - set(recorded_splits))) datasets.exceptions.ExpectedMoreSplitsError: {'train', 'test'} ``` Note: I am using the `data_files` argument only because I am trying to specify that I only want the 'validation' split, and the whole dataset will be downloaded even when the `split='validation'` argument is specified, unless you also specify `data_files`, as described here: https://discuss.huggingface.co/t/how-can-i-download-a-specific-split-of-a-dataset/79027 ### Steps to reproduce the bug 1. Create a Space with the default blank 'gradio' SDK https://huggingface.co/new-space 2. 
Create a file 'app.py' that loads a dataset to only extract a 'validation' split: `ds = load_dataset('datacomp/imagenet-1k-random0.0', token=GATED_IMAGENET, data_files={'validation': 'data/val*'}, split='validation', trust_remote_code=True)` ### Expected behavior Downloading validation split. ### Environment info Default environment for creating a new Space. Relevant to this bug, that is: ``` FROM docker.io/library/python:3.10@sha256:fd0fa50d997eb56ce560c6e5ca6a1f5cf8fdff87572a16ac07fb1f5ca01eb608 --> RUN pip install --no-cache-dir pip==22.3.1 && pip install --no-cache-dir datasets "huggingface-hub>=0.19" "hf-transfer>=0.1.4" "protobuf<4" "click<8.1" ```
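The verification failure in the traceback above boils down to a set difference between the splits the cached dataset info expects and the splits actually recorded after loading. A minimal stdlib sketch of that check (the class and function names here are illustrative stand-ins for the internals in `datasets/utils/info_utils.py`, not the actual implementation):

```python
class ExpectedMoreSplitsError(ValueError):
    pass


def verify_splits(expected_splits, recorded_splits):
    # When data_files restricts loading to 'validation', only that split is
    # recorded, while the dataset info still expects all three splits.
    missing = set(expected_splits) - set(recorded_splits)
    if missing:
        raise ExpectedMoreSplitsError(str(missing))


try:
    verify_splits(["train", "validation", "test"], ["validation"])
except ExpectedMoreSplitsError as err:
    print(err)  # the message lists the splits that were not loaded
```

This is why the error message is `{'train', 'test'}`: those are the splits that were deliberately excluded via `data_files`, which the verification step interprets as missing data.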
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7282/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7282/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7281/comments
https://api.github.com/repos/huggingface/datasets/issues/7281/events
https://github.com/huggingface/datasets/issues/7281
2,640,346,339
I_kwDODunzps6dYHzj
7,281
File not found error
{ "login": "MichielBontenbal", "id": 37507786, "node_id": "MDQ6VXNlcjM3NTA3Nzg2", "avatar_url": "https://avatars.githubusercontent.com/u/37507786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MichielBontenbal", "html_url": "https://github.com/MichielBontenbal", "followers_url": "https://api.github.com/users/MichielBontenbal/followers", "following_url": "https://api.github.com/users/MichielBontenbal/following{/other_user}", "gists_url": "https://api.github.com/users/MichielBontenbal/gists{/gist_id}", "starred_url": "https://api.github.com/users/MichielBontenbal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichielBontenbal/subscriptions", "organizations_url": "https://api.github.com/users/MichielBontenbal/orgs", "repos_url": "https://api.github.com/users/MichielBontenbal/repos", "events_url": "https://api.github.com/users/MichielBontenbal/events{/privacy}", "received_events_url": "https://api.github.com/users/MichielBontenbal/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Link to the dataset: https://huggingface.co/datasets/MichielBontenbal/UrbanSounds " ]
2024-11-07T09:04:49
2024-11-07T09:22:43
null
NONE
null
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audio file from the dataset. ### Environment info macOS Intel 14.6.1 (23G93) Python 3.10.9 NumPy 1.23 Datasets latest version
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7281/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7280/comments
https://api.github.com/repos/huggingface/datasets/issues/7280/events
https://github.com/huggingface/datasets/issues/7280
2,639,977,077
I_kwDODunzps6dWtp1
7,280
Add filename in error message when ReadError or similar occur
{ "login": "elisa-aleman", "id": 37046039, "node_id": "MDQ6VXNlcjM3MDQ2MDM5", "avatar_url": "https://avatars.githubusercontent.com/u/37046039?v=4", "gravatar_id": "", "url": "https://api.github.com/users/elisa-aleman", "html_url": "https://github.com/elisa-aleman", "followers_url": "https://api.github.com/users/elisa-aleman/followers", "following_url": "https://api.github.com/users/elisa-aleman/following{/other_user}", "gists_url": "https://api.github.com/users/elisa-aleman/gists{/gist_id}", "starred_url": "https://api.github.com/users/elisa-aleman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elisa-aleman/subscriptions", "organizations_url": "https://api.github.com/users/elisa-aleman/orgs", "repos_url": "https://api.github.com/users/elisa-aleman/repos", "events_url": "https://api.github.com/users/elisa-aleman/events{/privacy}", "received_events_url": "https://api.github.com/users/elisa-aleman/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi Elisa, please share the error traceback here, and if you manage to find the location in the `datasets` code where the error occurs, feel free to open a PR to add the necessary logging / improve the error message.", "> please share the error traceback\n\nI don't have access to it but it should be during [this exception](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/builder.py#L1643) which happens during the loading of a dataset. If one of the downloaded files is corrupted, the for loop will not yield correctly, and the error will come from, say, in the case of tar files, [this iterable](https://github.com/huggingface/datasets/blob/2049c00921c59cdeb835137a1c49639cf175af07/src/datasets/utils/file_utils.py#L1293) which has no explicit error handling that leaves clues as to which file has failed.\n\nI only know the case for tar files, but I consider this issue could be happening across different file types too.", "I think having a better error handling for this tar iterable would be useful already, maybe a simple try/except in `_iter_from_urlpath` that checks for `tarfile.ReadError` and raises an error with the `urlpath` mentioned in the error ?", "I think not just from higher calls like the `_iter_from_urlpath` but directly wherever a file is attempted to be opened would be the best case, as the traceback would simply lead to that.", "so maybe there should be better error messages in each dataset builder definition ? e.g. in https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/webdataset/webdataset.py for webdataset TAR archives" ]
2024-11-07T06:00:53
2024-11-20T13:23:12
null
NONE
null
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever a full dataset is downloaded, some files might be corrupted (either at the source or during download). However, the errors often only indicate that a tar file was involved if `tarfile.ReadError` appears in the traceback, and I imagine it is similar for other file types. This makes it really hard to identify which file is corrupted, and when dealing with very large datasets it shouldn't be necessary to re-download everything.
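A minimal sketch of the kind of error handling the issue asks for: wrapping tar iteration so that a `tarfile.ReadError` carries the name of the offending archive. The helper name and its signature are illustrative, not part of the `datasets` codebase:

```python
import io
import tarfile


def iter_tar_members(fileobj, name="<unknown>"):
    """Yield (member_name, extracted_fileobj) pairs, re-raising read errors with the source name."""
    try:
        with tarfile.open(fileobj=fileobj) as tar:
            for member in tar:
                yield member.name, tar.extractfile(member)
    except tarfile.ReadError as err:
        # Attach the archive name so a single corrupted file in a large
        # download can be identified without re-downloading everything.
        raise tarfile.ReadError(f"{err} (while reading {name!r})") from err


# Demonstrate with an in-memory "corrupted" archive: bytes that are not a tar file.
corrupt = io.BytesIO(b"not a tar archive at all")
try:
    list(iter_tar_members(corrupt, name="data/shard-00001.tar"))
except tarfile.ReadError as err:
    print(err)  # message now includes 'data/shard-00001.tar'
```

The same pattern would apply to other archive readers: catch the format-specific read error at the point where the file is opened, and re-raise with the path attached.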
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7280/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7279/comments
https://api.github.com/repos/huggingface/datasets/issues/7279/events
https://github.com/huggingface/datasets/pull/7279
2,635,813,932
PR_kwDODunzps6A8pTJ
7,279
Feature proposal: Stacking, potentially heterogeneous, datasets
{ "login": "TimCares", "id": 96243987, "node_id": "U_kgDOBbyREw", "avatar_url": "https://avatars.githubusercontent.com/u/96243987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TimCares", "html_url": "https://github.com/TimCares", "followers_url": "https://api.github.com/users/TimCares/followers", "following_url": "https://api.github.com/users/TimCares/following{/other_user}", "gists_url": "https://api.github.com/users/TimCares/gists{/gist_id}", "starred_url": "https://api.github.com/users/TimCares/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TimCares/subscriptions", "organizations_url": "https://api.github.com/users/TimCares/orgs", "repos_url": "https://api.github.com/users/TimCares/repos", "events_url": "https://api.github.com/users/TimCares/events{/privacy}", "received_events_url": "https://api.github.com/users/TimCares/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-05T15:40:50
2024-11-05T15:40:50
null
NONE
null
### Introduction Hello there, I noticed that there are two ways to combine multiple datasets: Either through `datasets.concatenate_datasets` or `datasets.interleave_datasets`. However, to my knowledge (please correct me if I am wrong) both approaches require the datasets that are combined to have the same features. I think it would be a great idea to add support for combining multiple datasets that might not follow the same schema (i.e. have different features), for example an image and text dataset. That is why I propose a third function of the `datasets.combine` module called `stack_datasets`, which can be used to combine a list of datasets with (potentially) different features. This would look as follows: ```python >>> from datasets import stack_datasets >>> image_dataset = ... >>> next(iter(image_dataset)) {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } >>> text_dataset = ... >>> next(iter(text_dataset)) {'text': "This is a test."} >>> stacked = stack_datasets(datasets={'i_ds': image_dataset, 't_ds': text_dataset}, stopping_strategy='all_exhausted') >>> next(iter(stacked)) { 'i_ds': {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=555x416 at 0x313E79CD0> } 't_ds': {'text': "This is a test."} } ``` <br /> ### Motivation I motivate this by: **A**: The fact that Pytorch offers a similar functionality under `torch.utils.data.StackDataset` ([link](https://pytorch.org/docs/stable/data.html#torch.utils.data.StackDataset)). **B**: In settings where one would like to e.g. train a Vision-Language model using an image-text dataset, an image dataset, and a text dataset, this functionality would offer a clean and intuitive solution to create multimodal datasets. I am aware that the aforementioned is also feasible without my proposed function, but I believe this offers a nice approach that aligns with existing functionality and is directly provided within the `datasets` package. 
### API `stack_datasets` has two arguments: `datasets` and `stopping_strategy `. <br /> `datasets` is a dictionary of either type `Dict[str, Dataset]` or `Dict[str, IterableDatasets]`, a mixture is not allowed. It contains the names of the datasets (the keys) and the datasets themselves (the values) that should be stacked. Each item returned is a dictionary with one key-value pair for each dataset. The keys are the names of the datasets as provided in the argument `datasets`, and the values are the respective examples from the datasets. <br /> `stopping_strategy` is the same as for `interleave_datasets`. If it is `first_exhausted` we stop if the smallest dataset runs out of examples, if it is `all_exhausted` we stop if all datasets ran out of examples at least once. For `all_exhausted` that means that we may visit examples from datasets multiple times. ### Docs I saw that there are multiple documentations and guides on the HuggingFace website that introduce `concatenate_datasets` and `interleave_datasets`, for example [here](https://huggingface.co/docs/datasets/process#concatenate). If this request is merged I would be willing to add the new functionality at the appropriate points in the documentation (if desired). ### Tests I also added some tests to ensure correctness. Some tests I wrote in [tests/test_iterable_dataset.py](https://github.com/TimCares/datasets/blob/fadc1159debf2a65d44e40cbf7758f2bd2cc8b08/tests/test_iterable_dataset.py#L2169) run for both `Dataset` and `IterableDataset` even though tests for `Dataset` technically do not belong in this script, but I found that this was a nice way to cover more cases with mostly the same code. ### Additional information I tried to write the code in a way so that it is similar to that of `concatenate_datasets` and `interleave_datasets`. I’m open to feedback and willing to make adjustments based on your suggestions, so feel free to give me your take. :)
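The proposed semantics can be sketched over plain sized iterables with only the stdlib (this is a toy model of the API described above, not the actual PR implementation, and it assumes the inputs support `len()` for the `all_exhausted` strategy):

```python
from itertools import cycle


def stack_iterables(datasets, stopping_strategy="first_exhausted"):
    """Yield dicts combining one example from each named iterable.

    With 'first_exhausted' we stop when the shortest dataset runs out;
    with 'all_exhausted' shorter datasets repeat until every dataset has
    been fully seen at least once.
    """
    if stopping_strategy == "first_exhausted":
        iterators = {name: iter(ds) for name, ds in datasets.items()}
        while True:
            try:
                yield {name: next(it) for name, it in iterators.items()}
            except StopIteration:
                return
    elif stopping_strategy == "all_exhausted":
        total = max(len(ds) for ds in datasets.values())
        iterators = {name: cycle(ds) for name, ds in datasets.items()}
        for _ in range(total):
            yield {name: next(it) for name, it in iterators.items()}
    else:
        raise ValueError(f"unknown stopping_strategy: {stopping_strategy!r}")


images = [{"image": "img0"}, {"image": "img1"}, {"image": "img2"}]
texts = [{"text": "a"}, {"text": "b"}]

short = list(stack_iterables({"i_ds": images, "t_ds": texts}))
full = list(stack_iterables({"i_ds": images, "t_ds": texts}, "all_exhausted"))
```

Here `short` has 2 items (the text dataset is exhausted first), while `full` has 3 items, with the text examples cycling back to the start.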
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7279/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7279", "html_url": "https://github.com/huggingface/datasets/pull/7279", "diff_url": "https://github.com/huggingface/datasets/pull/7279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7279.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7278/comments
https://api.github.com/repos/huggingface/datasets/issues/7278/events
https://github.com/huggingface/datasets/pull/7278
2,633,436,151
PR_kwDODunzps6A1ORG
7,278
Let soundfile directly read local audio files
{ "login": "fawazahmed0", "id": 20347013, "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fawazahmed0", "html_url": "https://github.com/fawazahmed0", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-04T17:41:13
2024-11-18T14:01:25
null
NONE
null
- [x] Fixes #7276
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7278/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7278", "html_url": "https://github.com/huggingface/datasets/pull/7278", "diff_url": "https://github.com/huggingface/datasets/pull/7278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7278.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7277
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7277/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7277/comments
https://api.github.com/repos/huggingface/datasets/issues/7277/events
https://github.com/huggingface/datasets/pull/7277
2,632,459,184
PR_kwDODunzps6AyB7O
7,277
Add link to video dataset
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7277). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-04T10:45:12
2024-11-04T17:05:06
2024-11-04T17:05:06
CONTRIBUTOR
null
This PR updates https://huggingface.co/docs/datasets/loading to also link to the new video loading docs. cc @mfarre
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7277/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7277/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7277", "html_url": "https://github.com/huggingface/datasets/pull/7277", "diff_url": "https://github.com/huggingface/datasets/pull/7277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7277.patch", "merged_at": "2024-11-04T17:05:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/7276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7276/comments
https://api.github.com/repos/huggingface/datasets/issues/7276/events
https://github.com/huggingface/datasets/issues/7276
2,631,917,431
I_kwDODunzps6c3993
7,276
Accessing audio dataset value throws Format not recognised error
{ "login": "fawazahmed0", "id": 20347013, "node_id": "MDQ6VXNlcjIwMzQ3MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/20347013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fawazahmed0", "html_url": "https://github.com/fawazahmed0", "followers_url": "https://api.github.com/users/fawazahmed0/followers", "following_url": "https://api.github.com/users/fawazahmed0/following{/other_user}", "gists_url": "https://api.github.com/users/fawazahmed0/gists{/gist_id}", "starred_url": "https://api.github.com/users/fawazahmed0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fawazahmed0/subscriptions", "organizations_url": "https://api.github.com/users/fawazahmed0/orgs", "repos_url": "https://api.github.com/users/fawazahmed0/repos", "events_url": "https://api.github.com/users/fawazahmed0/events{/privacy}", "received_events_url": "https://api.github.com/users/fawazahmed0/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! can you try if this works ?\r\n\r\n```python\r\nimport soundfile as sf\r\n\r\nwith open('C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3', 'rb') as f:\r\n print(sf.read(f))\r\n```", "@lhoestq Same error, here is the output:\r\n\r\n```bash\r\n(mypy) C:\\Users\\Nawaz-Server\\Documents\\ml>python myest.py\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Nawaz-Server\\Documents\\ml\\myest.py\", line 5, in <module>\r\n print(sf.read(f))\r\n ^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 285, in read\r\n with SoundFile(file, 'r', samplerate, channels,\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 658, in __init__\r\n self._file = self._open(file, mode_int, closefd)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\Nawaz-Server\\.conda\\envs\\mypy\\Lib\\site-packages\\soundfile.py\", line 1216, in _open\r\n raise LibsndfileError(err, prefix=\"Error opening {0!r}: \".format(self.name))\r\nsoundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\\\Users\\\\Nawaz-Server\\\\.cache\\\\huggingface\\\\hub\\\\datasets--fawazahmed0--bug-audio\\\\snapshots\\\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\\\data\\\\Ghamadi\\\\037136.mp3'>: Format not recognised.\r\n\r\n```", "upstream bug: https://github.com/bastibe/python-soundfile/issues/439" ]
2024-11-04T05:59:13
2024-11-09T18:51:52
null
NONE
null
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! {'audio': {'path': 'C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037135.mp3', 'array': array([ 0.00000000e+00, -2.86519935e-22, -2.56504911e-21, ..., -1.94239747e-02, -2.42924765e-02, -2.99104657e-02]), 'sampling_rate': 22050}, 'reciter': 'Ghamadi', 'transcription': 'الا عجوز ا في الغبرين', 'line': 3923, 'chapter': 37, 'verse': 135, 'text': 'إِلَّا عَجُوزࣰ ا فِي ٱلۡغَٰبِرِينَ'} Traceback (most recent call last): File "C:\Users\Nawaz-Server\Documents\ml\myest.py", line 5, in <module> for data in dataset["train"]: ~~~~~~~^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\arrow_dataset.py", line 2372, in __iter__ formatted_output = format_table( ^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 639, in format_table return formatter(pa_table, query_type=query_type) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 403, in __call__ return self.format_row(pa_table) ^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", line 444, in format_row row = self.python_features_decoder.decode_row(row) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\formatting\formatting.py", 
line 222, in decode_row return self.features.decode_example(row) if self.features else row ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 2042, in decode_example column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\features.py", line 1403, in decode_nested_example return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\datasets\features\audio.py", line 184, in decode_example array, sampling_rate = sf.read(f) ^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 285, in read with SoundFile(file, 'r', samplerate, channels, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 658, in __init__ self._file = self._open(file, mode_int, closefd) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\Nawaz-Server\.conda\envs\mypy\Lib\site-packages\soundfile.py", line 1216, in _open raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BufferedReader name='C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3'>: Format not recognised. 
``` ### Expected behavior Everything should work fine, as loading the problematic audio file directly with soundfile package works fine **code:** ``` import soundfile as sf print(sf.read('C:\\Users\\Nawaz-Server\\.cache\\huggingface\\hub\\datasets--fawazahmed0--bug-audio\\snapshots\\fab1398431fed1c0a2a7bff0945465bab8b5daef\\data\\Ghamadi\\037136.mp3')) ``` **output:** ```bash (mypy) C:\Users\Nawaz-Server\Documents\ml>python myest.py [C:\vcpkg\buildtrees\mpg123\src\0d8db63f9b-3db975bc05.clean\src\libmpg123\layer3.c:INT123_do_layer3():1801] error: dequantization failed! (array([ 0.00000000e+00, -8.43723821e-22, -2.45370628e-22, ..., -7.71464454e-03, -6.90496899e-03, -8.63333419e-03]), 22050) ``` ### Environment info - `datasets` version: 3.0.2 - Platform: Windows-11-10.0.22621-SP0 - Python version: 3.12.7 - `huggingface_hub` version: 0.26.2 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.10.0 - soundfile: 0.12.1
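Since the file decodes fine with one call path but not another, a decoder-fallback pattern can keep a pipeline alive while the upstream soundfile bug is open. This is a hedged sketch with toy stand-ins; real readers such as `soundfile.read` or another audio backend would be dropped in place of the toys.

```python
# Try each decoder in turn, returning the first successful result and
# re-raising the last error only if every backend fails.
def read_with_fallback(path, readers):
    last_err = None
    for reader in readers:
        try:
            return reader(path)
        except Exception as err:
            last_err = err
    raise last_err

# Toy decoders standing in for real backends (assumptions, not real APIs).
def strict_reader(_path):
    raise ValueError("Format not recognised")

def lenient_reader(path):
    return ("decoded", path)

print(read_with_fallback("037136.mp3", [strict_reader, lenient_reader]))
# → ('decoded', '037136.mp3')
```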
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7276/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7275/comments
https://api.github.com/repos/huggingface/datasets/issues/7275/events
https://github.com/huggingface/datasets/issues/7275
2,631,713,397
I_kwDODunzps6c3MJ1
7,275
load_dataset
{ "login": "santiagobp99", "id": 46941974, "node_id": "MDQ6VXNlcjQ2OTQxOTc0", "avatar_url": "https://avatars.githubusercontent.com/u/46941974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/santiagobp99", "html_url": "https://github.com/santiagobp99", "followers_url": "https://api.github.com/users/santiagobp99/followers", "following_url": "https://api.github.com/users/santiagobp99/following{/other_user}", "gists_url": "https://api.github.com/users/santiagobp99/gists{/gist_id}", "starred_url": "https://api.github.com/users/santiagobp99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/santiagobp99/subscriptions", "organizations_url": "https://api.github.com/users/santiagobp99/orgs", "repos_url": "https://api.github.com/users/santiagobp99/repos", "events_url": "https://api.github.com/users/santiagobp99/events{/privacy}", "received_events_url": "https://api.github.com/users/santiagobp99/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-04T03:01:44
2024-11-04T03:01:44
null
NONE
null
### Describe the bug I am performing two operations from a Hugging Face tutorial (Fine-tune a language model), and I have to define everything inside the mapped functions, including library imports, because the mapped function does not recognize anything defined outside of it while the dataset elements are being mapped: https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B ```py lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=batch_size, num_proc=4, ) tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) def tokenize_function(examples): model_checkpoint = 'gpt2' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) return tokenizer(examples["text"]) ``` ### Steps to reproduce the bug Currently I handle all the imports inside the function. ### Expected behavior The code must work as expected in the notebook, but currently this is not happening. https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb#scrollTo=iaAJy5Hu3l_B ### Environment info print(transformers.__version__) 4.46.1
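The usual cause of this behavior is that `map(..., num_proc=4)` pickles the mapped function and sends it to worker processes, so the function and everything it references must be picklable — which module-level definitions are. A minimal, self-contained sketch of that constraint using plain `pickle` (no `datasets` involved, toy function in place of a real tokenizer):

```python
import pickle

# Module-level functions pickle by reference, so multiprocessing workers
# can re-import them; this is why tutorials define tokenize_function at
# top level rather than inside another function.
def tokenize_function(examples):
    # toy stand-in for a real tokenizer call
    return {"length": [len(t) for t in examples["text"]]}

restored = pickle.loads(pickle.dumps(tokenize_function))
print(restored({"text": ["hello", "hi"]}))
# → {'length': [5, 2]}
```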
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7275/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7274/comments
https://api.github.com/repos/huggingface/datasets/issues/7274/events
https://github.com/huggingface/datasets/pull/7274
2,629,882,821
PR_kwDODunzps6ArEt-
7,274
[MINOR:TYPO] Fix typo in exception text
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-11-01T21:15:29
2024-11-01T21:15:54
null
CONTRIBUTOR
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7274/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7274", "html_url": "https://github.com/huggingface/datasets/pull/7274", "diff_url": "https://github.com/huggingface/datasets/pull/7274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7274.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7273/comments
https://api.github.com/repos/huggingface/datasets/issues/7273/events
https://github.com/huggingface/datasets/pull/7273
2,628,896,492
PR_kwDODunzps6An6n8
7,273
Raise error for incorrect JSON serialization
{ "login": "varadhbhatnagar", "id": 20443618, "node_id": "MDQ6VXNlcjIwNDQzNjE4", "avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4", "gravatar_id": "", "url": "https://api.github.com/users/varadhbhatnagar", "html_url": "https://github.com/varadhbhatnagar", "followers_url": "https://api.github.com/users/varadhbhatnagar/followers", "following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}", "gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}", "starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions", "organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs", "repos_url": "https://api.github.com/users/varadhbhatnagar/repos", "events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}", "received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "PTAL @lhoestq @albertvillanova ", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7273). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-11-01T11:54:35
2024-11-18T11:25:01
2024-11-18T11:25:01
CONTRIBUTOR
null
Raise error when `lines = False` and `batch_size < Dataset.num_rows` in `Dataset.to_json()`. Issue: #7037 Related PRs: #7039 #7181
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7273/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7273", "html_url": "https://github.com/huggingface/datasets/pull/7273", "diff_url": "https://github.com/huggingface/datasets/pull/7273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7273.patch", "merged_at": "2024-11-18T11:25:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/7272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7272/comments
https://api.github.com/repos/huggingface/datasets/issues/7272/events
https://github.com/huggingface/datasets/pull/7272
2,627,223,390
PR_kwDODunzps6AirL2
7,272
fix conda release workflow
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7272). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:56:19
2024-10-31T15:58:35
2024-10-31T15:57:29
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7272/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7272", "html_url": "https://github.com/huggingface/datasets/pull/7272", "diff_url": "https://github.com/huggingface/datasets/pull/7272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7272.patch", "merged_at": "2024-10-31T15:57:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/7271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7271/comments
https://api.github.com/repos/huggingface/datasets/issues/7271/events
https://github.com/huggingface/datasets/pull/7271
2,627,135,540
PR_kwDODunzps6AiZaj
7,271
Set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7271). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:22:51
2024-10-31T15:25:27
2024-10-31T15:22:59
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7271/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7271", "html_url": "https://github.com/huggingface/datasets/pull/7271", "diff_url": "https://github.com/huggingface/datasets/pull/7271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7271.patch", "merged_at": "2024-10-31T15:22:59" }
true
https://api.github.com/repos/huggingface/datasets/issues/7270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7270/comments
https://api.github.com/repos/huggingface/datasets/issues/7270/events
https://github.com/huggingface/datasets/pull/7270
2,627,107,016
PR_kwDODunzps6AiTJm
7,270
Release: 3.1.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7270). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-31T15:10:01
2024-10-31T15:14:23
2024-10-31T15:14:20
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7270/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7270", "html_url": "https://github.com/huggingface/datasets/pull/7270", "diff_url": "https://github.com/huggingface/datasets/pull/7270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7270.patch", "merged_at": "2024-10-31T15:14:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/7269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7269/comments
https://api.github.com/repos/huggingface/datasets/issues/7269/events
https://github.com/huggingface/datasets/issues/7269
2,626,873,843
I_kwDODunzps6ckunz
7,269
Memory leak when streaming
{ "login": "Jourdelune", "id": 64205064, "node_id": "MDQ6VXNlcjY0MjA1MDY0", "avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jourdelune", "html_url": "https://github.com/Jourdelune", "followers_url": "https://api.github.com/users/Jourdelune/followers", "following_url": "https://api.github.com/users/Jourdelune/following{/other_user}", "gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions", "organizations_url": "https://api.github.com/users/Jourdelune/orgs", "repos_url": "https://api.github.com/users/Jourdelune/repos", "events_url": "https://api.github.com/users/Jourdelune/events{/privacy}", "received_events_url": "https://api.github.com/users/Jourdelune/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I seem to have encountered the same problem when loading non streaming datasets. load_from_disk. Causing hundreds of GB of memory, but the dataset actually only has 50GB", "FYI when streaming parquet data, only one row group per worker is loaded in memory at a time.\r\n\r\nBtw for datasets of embeddings you can surely optimize your RAM by reading the data as torch tensors directly instead of the default python lists\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\ndataset = load_dataset(\"WaveGenAI/dataset\", streaming=True).with_format(\"torch\")\r\n\r\ndataloader = DataLoader(dataset[\"train\"], num_workers=3)\r\n```" ]
2024-10-31T13:33:52
2024-11-18T11:46:07
null
NONE
null
### Describe the bug I am trying to use a dataset with streaming=True; the issue is that the RAM usage grows higher and higher until it is no longer sustainable. I understand that Hugging Face stores data in RAM during streaming, and that the more dataloader workers there are, the more shards are held in RAM, but the problem is that the RAM usage is not constant: after each new shard is loaded, usage keeps increasing. ### Steps to reproduce the bug Run this code and watch your RAM usage: after each shard of 255 examples, it grows further. ```py from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset("WaveGenAI/dataset", streaming=True) dataloader = DataLoader(dataset["train"], num_workers=3) for i, data in enumerate(dataloader): print(i, end="\r") ``` ### Expected behavior The RAM usage should stay constant (just 3 shards loaded in RAM). ### Environment info - `datasets` version: 3.0.1 - Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.12.4 - `huggingface_hub` version: 0.26.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
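To quantify the growth the report describes, one approach is to sample traced memory at each shard boundary while iterating. This is only a sketch: a fake in-memory generator stands in for the real `IterableDataset`, and the shard size of 255 mirrors the report.

```python
import tracemalloc

def fake_stream(num_shards=4, shard_size=255):
    # Stand-in for a streaming dataset: rows arrive shard by shard.
    for shard in range(num_shards):
        yield from ({"id": shard * shard_size + i} for i in range(shard_size))

tracemalloc.start()
peaks = []
for i, row in enumerate(fake_stream()):
    if (i + 1) % 255 == 0:  # record peak traced memory at each shard boundary
        peaks.append(tracemalloc.get_traced_memory()[1])
tracemalloc.stop()
print(peaks)  # a healthy stream's peaks plateau instead of climbing
```

Pointing the same loop at the real dataloader would show whether the per-shard peaks plateau (expected) or climb monotonically (the reported leak).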
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7269/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7268/comments
https://api.github.com/repos/huggingface/datasets/issues/7268/events
https://github.com/huggingface/datasets/issues/7268
2,626,664,687
I_kwDODunzps6cj7jv
7,268
load_from_disk
{ "login": "ghaith-mq", "id": 71670961, "node_id": "MDQ6VXNlcjcxNjcwOTYx", "avatar_url": "https://avatars.githubusercontent.com/u/71670961?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghaith-mq", "html_url": "https://github.com/ghaith-mq", "followers_url": "https://api.github.com/users/ghaith-mq/followers", "following_url": "https://api.github.com/users/ghaith-mq/following{/other_user}", "gists_url": "https://api.github.com/users/ghaith-mq/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghaith-mq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghaith-mq/subscriptions", "organizations_url": "https://api.github.com/users/ghaith-mq/orgs", "repos_url": "https://api.github.com/users/ghaith-mq/repos", "events_url": "https://api.github.com/users/ghaith-mq/events{/privacy}", "received_events_url": "https://api.github.com/users/ghaith-mq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hello, It's an interesting issue here. I have the same problem, I have a local dataset and I want to push the dataset to the hub but huggingface does a copy of it.\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"webdataset\", data_files=\"/media/works/data/*.tar\") # copy here\r\ndataset.push_to_hub(\"WaveGenAI/audios2\")\r\n```\r\n\r\nEdit: I can use HfApi for my use case\r\n" ]
2024-10-31T11:51:56
2024-10-31T14:43:47
null
NONE
null
### Describe the bug I have data saved with save_to_disk. The data is big (700 GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug Load data using load_from_disk after it was saved using save_to_disk. ### Expected behavior The data should load without being copied; instead I run out of disk space. ### Environment info latest version
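As a defensive workaround while the copy behavior is investigated, the free space at the temp location can be checked before triggering the load. This is a sketch, not part of the `datasets` API; the helper name and the premise that the copy lands under the system temp directory are assumptions.

```python
import shutil
import tempfile

def has_room_for_copy(dataset_bytes, tmp_dir=None):
    # Verify the staging location has space before a large dataset copy;
    # hypothetical helper, assuming the copy goes to the temp directory.
    target = tmp_dir or tempfile.gettempdir()
    return shutil.disk_usage(target).free >= dataset_bytes

print(has_room_for_copy(700 * 1024**3))  # the 700 GB dataset from the report
```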
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7268/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7267/comments
https://api.github.com/repos/huggingface/datasets/issues/7267/events
https://github.com/huggingface/datasets/issues/7267
2,626,490,029
I_kwDODunzps6cjQ6t
7,267
Source installation fails on Macintosh with python 3.10
{ "login": "mayankagarwals", "id": 39498938, "node_id": "MDQ6VXNlcjM5NDk4OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/39498938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mayankagarwals", "html_url": "https://github.com/mayankagarwals", "followers_url": "https://api.github.com/users/mayankagarwals/followers", "following_url": "https://api.github.com/users/mayankagarwals/following{/other_user}", "gists_url": "https://api.github.com/users/mayankagarwals/gists{/gist_id}", "starred_url": "https://api.github.com/users/mayankagarwals/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayankagarwals/subscriptions", "organizations_url": "https://api.github.com/users/mayankagarwals/orgs", "repos_url": "https://api.github.com/users/mayankagarwals/repos", "events_url": "https://api.github.com/users/mayankagarwals/events{/privacy}", "received_events_url": "https://api.github.com/users/mayankagarwals/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I encountered the same problem on M1, a workaround I did was to simply comment out the dependency:\r\n\r\n```python\r\n...\r\n \"zstandard\",\r\n \"polars[timezone]>=0.20.0\",\r\n # \"decord==0.6.0\",\r\n]\r\n```\r\n\r\nThis worked for me as the adjustments I did to the code do not use the dependency, but I do not know if the same holds for you.\r\n\r\nI also do not think it is a good idea to rely on a dependency (I mean decord) that has not been maintained for 2 years, but I saw that even eva-decord hasn't been maintained since last year.\r\n\r\nDid you get it to work with eva-decord?" ]
2024-10-31T10:18:45
2024-11-04T22:18:06
null
NONE
null
### Describe the bug Hi, Decord is a dev dependency that has not been maintained for a couple of years. It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs. The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem. Happy to raise a PR. ### Steps to reproduce the bug Source installation as mentioned in CONTRIBUTING.md ### Expected behavior Installation succeeds without decord failing to install. ### Environment info python=3.10, M3 Mac
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7267/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7266/comments
https://api.github.com/repos/huggingface/datasets/issues/7266/events
https://github.com/huggingface/datasets/issues/7266
2,624,666,087
I_kwDODunzps6ccTnn
7,266
The dataset viewer should be available soon. Please retry later.
{ "login": "viiika", "id": 39821659, "node_id": "MDQ6VXNlcjM5ODIxNjU5", "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/viiika", "html_url": "https://github.com/viiika", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "organizations_url": "https://api.github.com/users/viiika/orgs", "repos_url": "https://api.github.com/users/viiika/repos", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "received_events_url": "https://api.github.com/users/viiika/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "Waiting is all you need. 10 hours later, it works." ]
2024-10-30T16:32:00
2024-10-31T03:48:11
2024-10-31T03:48:10
NONE
null
### Describe the bug After waiting for 2 hours, it still shows "The dataset viewer should be available soon. Please retry later." ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior Present the dataset viewer. ### Environment info NA
{ "login": "viiika", "id": 39821659, "node_id": "MDQ6VXNlcjM5ODIxNjU5", "avatar_url": "https://avatars.githubusercontent.com/u/39821659?v=4", "gravatar_id": "", "url": "https://api.github.com/users/viiika", "html_url": "https://github.com/viiika", "followers_url": "https://api.github.com/users/viiika/followers", "following_url": "https://api.github.com/users/viiika/following{/other_user}", "gists_url": "https://api.github.com/users/viiika/gists{/gist_id}", "starred_url": "https://api.github.com/users/viiika/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/viiika/subscriptions", "organizations_url": "https://api.github.com/users/viiika/orgs", "repos_url": "https://api.github.com/users/viiika/repos", "events_url": "https://api.github.com/users/viiika/events{/privacy}", "received_events_url": "https://api.github.com/users/viiika/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7266/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7265/comments
https://api.github.com/repos/huggingface/datasets/issues/7265/events
https://github.com/huggingface/datasets/pull/7265
2,624,090,418
PR_kwDODunzps6AYofJ
7,265
Disallow video push_to_hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7265). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-30T13:21:55
2024-10-30T13:36:05
2024-10-30T13:36:02
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7265/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7265", "html_url": "https://github.com/huggingface/datasets/pull/7265", "diff_url": "https://github.com/huggingface/datasets/pull/7265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7265.patch", "merged_at": "2024-10-30T13:36:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/7264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7264/comments
https://api.github.com/repos/huggingface/datasets/issues/7264/events
https://github.com/huggingface/datasets/pull/7264
2,624,047,640
PR_kwDODunzps6AYfwL
7,264
fix docs relative links
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7264). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-30T13:07:34
2024-10-30T13:10:13
2024-10-30T13:09:02
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7264/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7264", "html_url": "https://github.com/huggingface/datasets/pull/7264", "diff_url": "https://github.com/huggingface/datasets/pull/7264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7264.patch", "merged_at": "2024-10-30T13:09:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/7263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7263/comments
https://api.github.com/repos/huggingface/datasets/issues/7263/events
https://github.com/huggingface/datasets/pull/7263
2,621,844,054
PR_kwDODunzps6ARg7m
7,263
Small addition to video docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7263). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-29T16:58:37
2024-10-29T17:01:05
2024-10-29T16:59:10
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7263/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7263", "html_url": "https://github.com/huggingface/datasets/pull/7263", "diff_url": "https://github.com/huggingface/datasets/pull/7263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7263.patch", "merged_at": "2024-10-29T16:59:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/7262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7262/comments
https://api.github.com/repos/huggingface/datasets/issues/7262/events
https://github.com/huggingface/datasets/pull/7262
2,620,879,059
PR_kwDODunzps6AOWI8
7,262
Allow video with disabled decoding without decord
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7262). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-29T10:54:04
2024-10-29T10:56:19
2024-10-29T10:55:37
MEMBER
null
for the viewer, this way it can use Video(decode=False) and doesn't need decord (which causes segfaults)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7262/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7262", "html_url": "https://github.com/huggingface/datasets/pull/7262", "diff_url": "https://github.com/huggingface/datasets/pull/7262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7262.patch", "merged_at": "2024-10-29T10:55:37" }
true
https://api.github.com/repos/huggingface/datasets/issues/7261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7261/comments
https://api.github.com/repos/huggingface/datasets/issues/7261/events
https://github.com/huggingface/datasets/issues/7261
2,620,510,840
I_kwDODunzps6cMdJ4
7,261
Cannot load the cache when mapping the dataset
{ "login": "zhangn77", "id": 43033959, "node_id": "MDQ6VXNlcjQzMDMzOTU5", "avatar_url": "https://avatars.githubusercontent.com/u/43033959?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangn77", "html_url": "https://github.com/zhangn77", "followers_url": "https://api.github.com/users/zhangn77/followers", "following_url": "https://api.github.com/users/zhangn77/following{/other_user}", "gists_url": "https://api.github.com/users/zhangn77/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangn77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangn77/subscriptions", "organizations_url": "https://api.github.com/users/zhangn77/orgs", "repos_url": "https://api.github.com/users/zhangn77/repos", "events_url": "https://api.github.com/users/zhangn77/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangn77/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-29T08:29:40
2024-10-29T08:29:40
null
NONE
null
### Describe the bug I'm training the flux controlnet. The train_dataset.map() takes a long time to finish. However, when I kill one training process and want to restart a new training run with the same dataset, I can't reuse the mapped result even though I defined the cache dir for the dataset. with accelerator.main_process_first(): from datasets.fingerprint import Hasher # fingerprint used by the cache for the other processes to load the result # details: https://github.com/huggingface/diffusers/pull/4038#discussion_r1266078401 new_fingerprint = Hasher.hash(args) train_dataset = train_dataset.map( compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint, batch_size=10, ) ### Steps to reproduce the bug Train the flux controlnet and start again ### Expected behavior Will not map again ### Environment info latest diffusers
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7261/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7260/comments
https://api.github.com/repos/huggingface/datasets/issues/7260/events
https://github.com/huggingface/datasets/issues/7260
2,620,014,285
I_kwDODunzps6cKj7N
7,260
cache can't cleaned or disabled
{ "login": "charliedream1", "id": 15007828, "node_id": "MDQ6VXNlcjE1MDA3ODI4", "avatar_url": "https://avatars.githubusercontent.com/u/15007828?v=4", "gravatar_id": "", "url": "https://api.github.com/users/charliedream1", "html_url": "https://github.com/charliedream1", "followers_url": "https://api.github.com/users/charliedream1/followers", "following_url": "https://api.github.com/users/charliedream1/following{/other_user}", "gists_url": "https://api.github.com/users/charliedream1/gists{/gist_id}", "starred_url": "https://api.github.com/users/charliedream1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/charliedream1/subscriptions", "organizations_url": "https://api.github.com/users/charliedream1/orgs", "repos_url": "https://api.github.com/users/charliedream1/repos", "events_url": "https://api.github.com/users/charliedream1/events{/privacy}", "received_events_url": "https://api.github.com/users/charliedream1/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-29T03:15:28
2024-10-29T03:18:22
null
NONE
null
### Describe the bug I tried the following ways, but the cache can't be disabled. I have 2T of data, but I also got more than 2T of cache files, which puts pressure on storage. I need to disable the cache or clean it immediately after processing. None of the following ways work, please give some help! ```python from datasets import disable_caching from transformers import AutoTokenizer disable_caching() tokenizer = AutoTokenizer.from_pretrained(args.tokenizer_path) def tokenization_fn(examples): column_name = 'text' if 'text' in examples else 'data' tokenized_inputs = tokenizer( examples[column_name], return_special_tokens_mask=True, truncation=False, max_length=tokenizer.model_max_length ) return tokenized_inputs data = load_dataset('json', data_files=save_local_path, split='train', cache_dir=None) data.cleanup_cache_files() updated_dataset = data.map(tokenization_fn, load_from_cache_file=False) updated_dataset.cleanup_cache_files() ``` ### Expected behavior No cache file generated ### Environment info Ubuntu 20.04.6 LTS datasets 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7260/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7259/comments
https://api.github.com/repos/huggingface/datasets/issues/7259/events
https://github.com/huggingface/datasets/pull/7259
2,618,909,241
PR_kwDODunzps6AIEY-
7,259
Don't embed videos
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7259). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T16:25:10
2024-10-28T16:27:34
2024-10-28T16:26:01
MEMBER
null
Don't include video bytes when running download_and_prepare(format="parquet"). This also affects push_to_hub, which will just upload the local paths of the videos though.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7259/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7259", "html_url": "https://github.com/huggingface/datasets/pull/7259", "diff_url": "https://github.com/huggingface/datasets/pull/7259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7259.patch", "merged_at": "2024-10-28T16:26:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/7258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7258/comments
https://api.github.com/repos/huggingface/datasets/issues/7258/events
https://github.com/huggingface/datasets/pull/7258
2,618,758,399
PR_kwDODunzps6AHlK1
7,258
Always set non-null writer batch size
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7258). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T15:26:14
2024-10-28T15:28:41
2024-10-28T15:26:29
MEMBER
null
Bug introduced in #7230; it was preventing the Viewer's limited writes from working.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7258/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7258", "html_url": "https://github.com/huggingface/datasets/pull/7258", "diff_url": "https://github.com/huggingface/datasets/pull/7258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7258.patch", "merged_at": "2024-10-28T15:26:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/7257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7257/comments
https://api.github.com/repos/huggingface/datasets/issues/7257/events
https://github.com/huggingface/datasets/pull/7257
2,618,602,173
PR_kwDODunzps6AHEfy
7,257
fix ci for pyarrow 18
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7257). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:31:34
2024-10-28T14:34:05
2024-10-28T14:31:44
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7257/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7257", "html_url": "https://github.com/huggingface/datasets/pull/7257", "diff_url": "https://github.com/huggingface/datasets/pull/7257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7257.patch", "merged_at": "2024-10-28T14:31:44" }
true
https://api.github.com/repos/huggingface/datasets/issues/7256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7256/comments
https://api.github.com/repos/huggingface/datasets/issues/7256/events
https://github.com/huggingface/datasets/pull/7256
2,618,580,188
PR_kwDODunzps6AG_qk
7,256
Retry all requests timeouts
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7256). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:23:16
2024-10-28T14:56:28
2024-10-28T14:56:26
MEMBER
null
as reported in https://github.com/huggingface/datasets/issues/6843
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7256/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7256", "html_url": "https://github.com/huggingface/datasets/pull/7256", "diff_url": "https://github.com/huggingface/datasets/pull/7256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7256.patch", "merged_at": "2024-10-28T14:56:26" }
true
https://api.github.com/repos/huggingface/datasets/issues/7255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7255/comments
https://api.github.com/repos/huggingface/datasets/issues/7255/events
https://github.com/huggingface/datasets/pull/7255
2,618,540,355
PR_kwDODunzps6AG25R
7,255
fix decord import
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7255). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-28T14:08:19
2024-10-28T14:10:43
2024-10-28T14:09:14
MEMBER
null
delay the import until Video() is instantiated + also import duckdb first (otherwise importing duckdb later causes a segfault)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7255/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7255", "html_url": "https://github.com/huggingface/datasets/pull/7255", "diff_url": "https://github.com/huggingface/datasets/pull/7255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7255.patch", "merged_at": "2024-10-28T14:09:14" }
true
https://api.github.com/repos/huggingface/datasets/issues/7254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7254/comments
https://api.github.com/repos/huggingface/datasets/issues/7254/events
https://github.com/huggingface/datasets/issues/7254
2,616,174,996
I_kwDODunzps6b76mU
7,254
mismatch for datatypes when providing `Features` with `Array2D` and user specified `dtype` and using with_format("numpy")
{ "login": "Akhil-CM", "id": 97193607, "node_id": "U_kgDOBcsOhw", "avatar_url": "https://avatars.githubusercontent.com/u/97193607?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Akhil-CM", "html_url": "https://github.com/Akhil-CM", "followers_url": "https://api.github.com/users/Akhil-CM/followers", "following_url": "https://api.github.com/users/Akhil-CM/following{/other_user}", "gists_url": "https://api.github.com/users/Akhil-CM/gists{/gist_id}", "starred_url": "https://api.github.com/users/Akhil-CM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Akhil-CM/subscriptions", "organizations_url": "https://api.github.com/users/Akhil-CM/orgs", "repos_url": "https://api.github.com/users/Akhil-CM/repos", "events_url": "https://api.github.com/users/Akhil-CM/events{/privacy}", "received_events_url": "https://api.github.com/users/Akhil-CM/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "It seems that https://github.com/huggingface/datasets/issues/5517 is exactly the same issue.\r\n\r\nIt was mentioned there that this would be fixed in version 3.x" ]
2024-10-26T22:06:27
2024-10-26T22:07:37
null
NONE
null
### Describe the bug If the user provides a `Features` type value to `datasets.Dataset` with members having `Array2D` with a value for `dtype`, it is not respected during `with_format("numpy")` which should return a `np.array` with `dtype` that the user provided for `Array2D`. It seems for floats, it will be set to `float32` and for ints it will be set to `int64` ### Steps to reproduce the bug ```python import numpy as np import datasets from datasets import Dataset, Features, Array2D print(f"datasets version: {datasets.__version__}") data_info = { "arr_float" : "float64", "arr_int" : "int32" } sample = {key : [np.zeros([4, 5], dtype=dtype)] for key, dtype in data_info.items()} features = {key : Array2D(shape=(None, 5), dtype=dtype) for key, dtype in data_info.items()} features = Features(features) dataset = Dataset.from_dict(sample, features=features) ds = dataset.with_format("numpy") for key in features: print(f"{key} feature dtype: ", ds.features[key].dtype) print(f"{key} dtype:", ds[key].dtype) ``` Output: ```bash datasets version: 3.0.2 arr_float feature dtype: float64 arr_float dtype: float32 arr_int feature dtype: int32 arr_int dtype: int64 ``` ### Expected behavior It should return a `np.array` with `dtype` that the user provided for the corresponding member in the `Features` type value ### Environment info - `datasets` version: 3.0.2 - Platform: Linux-6.11.5-arch1-1-x86_64-with-glibc2.40 - Python version: 3.12.7 - `huggingface_hub` version: 0.26.1 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.5.0
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7254/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7253/comments
https://api.github.com/repos/huggingface/datasets/issues/7253/events
https://github.com/huggingface/datasets/issues/7253
2,615,862,202
I_kwDODunzps6b6uO6
7,253
Unable to upload a large dataset zip either from command line or UI
{ "login": "vakyansh", "id": 159609047, "node_id": "U_kgDOCYNw1w", "avatar_url": "https://avatars.githubusercontent.com/u/159609047?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vakyansh", "html_url": "https://github.com/vakyansh", "followers_url": "https://api.github.com/users/vakyansh/followers", "following_url": "https://api.github.com/users/vakyansh/following{/other_user}", "gists_url": "https://api.github.com/users/vakyansh/gists{/gist_id}", "starred_url": "https://api.github.com/users/vakyansh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vakyansh/subscriptions", "organizations_url": "https://api.github.com/users/vakyansh/orgs", "repos_url": "https://api.github.com/users/vakyansh/repos", "events_url": "https://api.github.com/users/vakyansh/events{/privacy}", "received_events_url": "https://api.github.com/users/vakyansh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-26T13:17:06
2024-10-26T13:17:06
null
NONE
null
### Describe the bug Unable to upload a large dataset zip from command line or UI. UI simply says error. I am trying to a upload a tar.gz file of 17GB. <img width="550" alt="image" src="https://github.com/user-attachments/assets/f9d29024-06c8-49c4-a109-0492cff79d34"> <img width="755" alt="image" src="https://github.com/user-attachments/assets/a8d4acda-7f02-4279-9c2d-b2e0282b4faa"> ### Steps to reproduce the bug Upload a large file ### Expected behavior The file should upload without any issue. ### Environment info None
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7253/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7252/comments
https://api.github.com/repos/huggingface/datasets/issues/7252/events
https://github.com/huggingface/datasets/pull/7252
2,613,795,544
PR_kwDODunzps5_41s7
7,252
Add IterableDataset.shard()
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7252). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-25T11:07:12
2024-10-25T15:45:24
2024-10-25T15:45:22
MEMBER
null
Will be useful to distribute a dataset across workers (other than pytorch) like spark I also renamed `.n_shards` -> `.num_shards` for consistency and kept the old name for backward compatibility. And a few changes in internal functions for consistency as well (rank, world_size -> num_shards, index) Breaking change: the new default for `contiguous` in `Dataset.shard()` is `True`, but imo not a big deal since I couldn't find any usage of `contiguous=False` internally (we always do contiguous=True for map-style datasets since its more optimized) or in the wild
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7252/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7252", "html_url": "https://github.com/huggingface/datasets/pull/7252", "diff_url": "https://github.com/huggingface/datasets/pull/7252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7252.patch", "merged_at": "2024-10-25T15:45:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/7251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7251/comments
https://api.github.com/repos/huggingface/datasets/issues/7251/events
https://github.com/huggingface/datasets/pull/7251
2,612,097,435
PR_kwDODunzps5_zPTt
7,251
Missing video docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7251). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-24T16:45:12
2024-10-24T16:48:29
2024-10-24T16:48:27
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7251/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7251", "html_url": "https://github.com/huggingface/datasets/pull/7251", "diff_url": "https://github.com/huggingface/datasets/pull/7251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7251.patch", "merged_at": "2024-10-24T16:48:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/7250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7250/comments
https://api.github.com/repos/huggingface/datasets/issues/7250/events
https://github.com/huggingface/datasets/pull/7250
2,612,041,969
PR_kwDODunzps5_zDPS
7,250
Basic XML support (mostly copy pasted from text)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7250). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-24T16:14:50
2024-10-24T16:19:18
2024-10-24T16:19:16
MEMBER
null
enable the viewer for datasets like https://huggingface.co/datasets/FrancophonIA/e-calm (there will be more and more apparently)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7250/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7250", "html_url": "https://github.com/huggingface/datasets/pull/7250", "diff_url": "https://github.com/huggingface/datasets/pull/7250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7250.patch", "merged_at": "2024-10-24T16:19:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/7249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7249/comments
https://api.github.com/repos/huggingface/datasets/issues/7249/events
https://github.com/huggingface/datasets/issues/7249
2,610,136,636
I_kwDODunzps6bk4Y8
7,249
How to debugging
{ "login": "ShDdu", "id": 49576595, "node_id": "MDQ6VXNlcjQ5NTc2NTk1", "avatar_url": "https://avatars.githubusercontent.com/u/49576595?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ShDdu", "html_url": "https://github.com/ShDdu", "followers_url": "https://api.github.com/users/ShDdu/followers", "following_url": "https://api.github.com/users/ShDdu/following{/other_user}", "gists_url": "https://api.github.com/users/ShDdu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ShDdu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ShDdu/subscriptions", "organizations_url": "https://api.github.com/users/ShDdu/orgs", "repos_url": "https://api.github.com/users/ShDdu/repos", "events_url": "https://api.github.com/users/ShDdu/events{/privacy}", "received_events_url": "https://api.github.com/users/ShDdu/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-24T01:03:51
2024-10-24T01:03:51
null
NONE
null
### Describe the bug I wanted to use my own script to handle the processing, and followed the tutorial documentation by rewriting the MyDatasetConfig and MyDatasetBuilder (which contains the _info,_split_generators and _generate_examples methods) classes. Testing with simple data was able to output the results of the processing, but when I wished to do more complex processing, I found that I was unable to debug (even the simple samples were inaccessible). There are no errors reported, and I am able to print the _info,_split_generators and _generate_examples messages, but I am unable to access the breakpoints. ### Steps to reproduce the bug # my_dataset.py import json import datasets class MyDatasetConfig(datasets.BuilderConfig): def __init__(self, **kwargs): super(MyDatasetConfig, self).__init__(**kwargs) class MyDataset(datasets.GeneratorBasedBuilder): VERSION = datasets.Version("1.0.0") BUILDER_CONFIGS = [ MyDatasetConfig( name="default", version=VERSION, description="myDATASET" ), ] def _info(self): print("info") # breakpoints return datasets.DatasetInfo( description="myDATASET", features=datasets.Features( { "id": datasets.Value("int32"), "text": datasets.Value("string"), "label": datasets.ClassLabel(names=["negative", "positive"]), } ), supervised_keys=("text", "label"), ) def _split_generators(self, dl_manager): print("generate") # breakpoints data_file = "data.json" return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_file} ), ] def _generate_examples(self, filepath): print("example") # breakpoints with open(filepath, encoding="utf-8") as f: data = json.load(f) for idx, sample in enumerate(data): yield idx, { "id": sample["id"], "text": sample["text"], "label": sample["label"], } #main.py import os os.environ["TRANSFORMERS_NO_MULTIPROCESSING"] = "1" from datasets import load_dataset dataset = load_dataset("my_dataset.py", split="train", cache_dir=None) print(dataset[:5]) ### Expected behavior Pause at breakpoints while running debugging ### Environment info pycharm
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7249/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7248/comments
https://api.github.com/repos/huggingface/datasets/issues/7248/events
https://github.com/huggingface/datasets/issues/7248
2,609,926,089
I_kwDODunzps6bkE_J
7,248
ModuleNotFoundError: No module named 'datasets.tasks'
{ "login": "shoowadoo", "id": 93593941, "node_id": "U_kgDOBZQhVQ", "avatar_url": "https://avatars.githubusercontent.com/u/93593941?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shoowadoo", "html_url": "https://github.com/shoowadoo", "followers_url": "https://api.github.com/users/shoowadoo/followers", "following_url": "https://api.github.com/users/shoowadoo/following{/other_user}", "gists_url": "https://api.github.com/users/shoowadoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/shoowadoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shoowadoo/subscriptions", "organizations_url": "https://api.github.com/users/shoowadoo/orgs", "repos_url": "https://api.github.com/users/shoowadoo/repos", "events_url": "https://api.github.com/users/shoowadoo/events{/privacy}", "received_events_url": "https://api.github.com/users/shoowadoo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "tasks was removed in v3: #6999 \r\n\r\nI also don't see why TextClassification is imported, since it's not used after. So the fix is simple: delete this line.", "I opened https://huggingface.co/datasets/knowledgator/events_classification_biotech/discussions/7 to remove the line, hopefully the dataset owner will merge it soon" ]
2024-10-23T21:58:25
2024-10-24T17:00:19
null
NONE
null
### Describe the bug --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) [<ipython-input-9-13b5f31bd391>](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in <cell line: 1>() ----> 1 dataset = load_dataset('knowledgator/events_classification_biotech') 11 frames [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2130 2131 # Create a dataset builder -> 2132 builder_instance = load_dataset_builder( 2133 path=path, 2134 name=name, [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs) 1886 raise ValueError(error_msg) 1887 -> 1888 builder_cls = get_dataset_builder_class(dataset_module, dataset_name=dataset_name) 1889 # Instantiate the dataset builder 1890 builder_instance: DatasetBuilder = builder_cls( [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in get_dataset_builder_class(dataset_module, dataset_name) 246 dataset_module.importable_file_path 247 ) if dataset_module.importable_file_path else nullcontext(): --> 248 builder_cls = import_main_class(dataset_module.module_path) 249 if dataset_module.builder_configs_parameters.builder_configs: 250 dataset_name = dataset_name or dataset_module.builder_kwargs.get("dataset_name") [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in import_main_class(module_path) 167 def import_main_class(module_path) -> Optional[Type[DatasetBuilder]]: 168 """Import a module at module_path and return its main class: a DatasetBuilder""" --> 169 module = importlib.import_module(module_path) 170 # Find the main class in our imported module 171 module_main_cls = None [/usr/lib/python3.10/importlib/__init__.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in import_module(name, package) 124 break 125 level += 1 --> 126 return _bootstrap._gcd_import(name[level:], package, level) 127 128 /usr/lib/python3.10/importlib/_bootstrap.py in _gcd_import(name, package, level) /usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load(name, import_) /usr/lib/python3.10/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) /usr/lib/python3.10/importlib/_bootstrap.py in _load_unlocked(spec) /usr/lib/python3.10/importlib/_bootstrap_external.py in exec_module(self, module) /usr/lib/python3.10/importlib/_bootstrap.py in _call_with_frames_removed(f, *args, **kwds) [~/.cache/huggingface/modules/datasets_modules/datasets/knowledgator--events_classification_biotech/9c8086d498c3104de3a3c5b6640837e18ccd829dcaca49f1cdffe3eb5c4a6361/events_classification_biotech.py](https://bcb6shpazyu-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20241022-060119_RC00_688494744#) in <module> 1 import datasets 2 from datasets import load_dataset ----> 3 from datasets.tasks import TextClassification 4 5 DESCRIPTION = """ ModuleNotFoundError: No module named 'datasets.tasks' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset dataset = load_dataset('knowledgator/events_classification_biotech') ### Expected behavior no ModuleNotFoundError ### Environment info google colab
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7248/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7248/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7247/comments
https://api.github.com/repos/huggingface/datasets/issues/7247/events
https://github.com/huggingface/datasets/issues/7247
2,606,230,029
I_kwDODunzps6bV-oN
7,247
Adding column with dict struction when mapping lead to wrong order
{ "login": "chchch0109", "id": 114604968, "node_id": "U_kgDOBtS7qA", "avatar_url": "https://avatars.githubusercontent.com/u/114604968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chchch0109", "html_url": "https://github.com/chchch0109", "followers_url": "https://api.github.com/users/chchch0109/followers", "following_url": "https://api.github.com/users/chchch0109/following{/other_user}", "gists_url": "https://api.github.com/users/chchch0109/gists{/gist_id}", "starred_url": "https://api.github.com/users/chchch0109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chchch0109/subscriptions", "organizations_url": "https://api.github.com/users/chchch0109/orgs", "repos_url": "https://api.github.com/users/chchch0109/repos", "events_url": "https://api.github.com/users/chchch0109/events{/privacy}", "received_events_url": "https://api.github.com/users/chchch0109/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-22T18:55:11
2024-10-22T18:55:23
null
NONE
null
### Describe the bug in `map()` function, I want to add a new column with a dict structure. ``` def map_fn(example): example['text'] = {'user': ..., 'assistant': ...} return example ``` However this leads to a wrong order `{'assistant':..., 'user':...}` in the dataset. Thus I can't concatenate two datasets due to the different feature structures. [Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example This seems to be an issue in the low-level pyarrow library rather than in datasets; however, I think datasets should allow concatenating two datasets that are actually in the same structure. ### Steps to reproduce the bug [Here](https://colab.research.google.com/drive/1zeaWq9Ith4DKWP_EiBNyLfc8S8I68LyY?usp=sharing) is a minimal reproducible example ### Expected behavior two datasets could be concatenated. ### Environment info N/A
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7247/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7246/comments
https://api.github.com/repos/huggingface/datasets/issues/7246/events
https://github.com/huggingface/datasets/pull/7246
2,605,734,447
PR_kwDODunzps5_ehPi
7,246
Set dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7246). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T15:04:47
2024-10-22T15:07:31
2024-10-22T15:04:58
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7246/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7246", "html_url": "https://github.com/huggingface/datasets/pull/7246", "diff_url": "https://github.com/huggingface/datasets/pull/7246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7246.patch", "merged_at": "2024-10-22T15:04:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/7245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7245/comments
https://api.github.com/repos/huggingface/datasets/issues/7245/events
https://github.com/huggingface/datasets/pull/7245
2,605,701,235
PR_kwDODunzps5_eaiE
7,245
Release: 3.0.2
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7245). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T14:53:34
2024-10-22T15:01:50
2024-10-22T15:01:47
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7245/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7245/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7245", "html_url": "https://github.com/huggingface/datasets/pull/7245", "diff_url": "https://github.com/huggingface/datasets/pull/7245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7245.patch", "merged_at": "2024-10-22T15:01:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/7244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7244/comments
https://api.github.com/repos/huggingface/datasets/issues/7244/events
https://github.com/huggingface/datasets/pull/7244
2,605,461,515
PR_kwDODunzps5_dqWP
7,244
use huggingface_hub offline mode
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7244). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-22T13:27:16
2024-10-22T14:10:45
2024-10-22T14:10:20
MEMBER
null
and better handling of LocalEntryNotfoundError cc @Wauplin follow up to #7234
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7244/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7244/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7244", "html_url": "https://github.com/huggingface/datasets/pull/7244", "diff_url": "https://github.com/huggingface/datasets/pull/7244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7244.patch", "merged_at": "2024-10-22T14:10:20" }
true
https://api.github.com/repos/huggingface/datasets/issues/7243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7243/comments
https://api.github.com/repos/huggingface/datasets/issues/7243/events
https://github.com/huggingface/datasets/issues/7243
2,602,853,172
I_kwDODunzps6bJGM0
7,243
ArrayXD with None as leading dim incompatible with DatasetCardData
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "It looks like `CardData` in `huggingface_hub` removes None values where it shouldn't. Indeed it calls `_remove_none` on the return of `to_dict()`:\r\n\r\n```python\r\n def to_dict(self) -> Dict[str, Any]:\r\n \"\"\"Converts CardData to a dict.\r\n\r\n Returns:\r\n `dict`: CardData represented as a dictionary ready to be dumped to a YAML\r\n block for inclusion in a README.md file.\r\n \"\"\"\r\n\r\n data_dict = copy.deepcopy(self.__dict__)\r\n self._to_dict(data_dict)\r\n return _remove_none(data_dict)\r\n```\r\n\r\nWould it be ok to remove `list()` from being scanned in `_remove_none` ? it could also be a specific behavior to DatasetCardData if necessary @Wauplin ", "I have actually no idea why none values are removed in model and dataset card data... :see_no_evil:\r\nLooks like `_remove_none` has been introduced at the same time as the entire repocard module (see https://github.com/huggingface/huggingface_hub/pull/940). I would be tempted to remove `_remove_none` entirely actually and only remove \"top-level\" None values (i.e. if something like `pipeline_tag=None` due to a default value in kwargs => we remove it). Hard to tell what could be the side effects but I'm not against trying.\r\n\r\n\r\nHowever, I'm not really in favor in making an exception only for lists. It would mean that tuples, sets and dicts are filtered but not lists, which is pretty inconsistent.", "let's do it for top level attributes yes", "I opened https://github.com/huggingface/huggingface_hub/pull/2626 to address it :)", "thanks !" ]
2024-10-21T15:08:13
2024-10-22T14:18:10
null
CONTRIBUTOR
null
### Describe the bug Creating a dataset with ArrayXD features leads to errors when downloading from hub due to DatasetCardData removing the Nones @lhoestq ### Steps to reproduce the bug ```python import numpy as np from datasets import Array2D, Dataset, Features, load_dataset def examples_generator(): for i in range(4): yield { "array_1d": np.zeros((10,1), dtype="uint16"), "array_2d": np.zeros((10, 1), dtype="uint16"), } features = Features(array_1d=Array2D((None,1), "uint16"), array_2d=Array2D((None, 1), "uint16")) dataset = Dataset.from_generator(examples_generator, features=features) dataset.push_to_hub("alex-hh/test_array_1d2d") ds = load_dataset("alex-hh/test_array_1d2d") ``` Source of error appears to be DatasetCardData.to_dict invoking DatasetCardData._remove_none ```python from huggingface_hub import DatasetCardData from datasets.info import DatasetInfosDict dataset_card_data = DatasetCardData() DatasetInfosDict({"default": dataset.info.copy()}).to_dataset_card_data(dataset_card_data) print(dataset_card_data.to_dict()) # removes Nones in shape ``` ### Expected behavior Should be possible to load datasets saved with shape None in leading dimension ### Environment info 3.0.2 and latest huggingface_hub
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7243/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7241/comments
https://api.github.com/repos/huggingface/datasets/issues/7241/events
https://github.com/huggingface/datasets/issues/7241
2,599,899,156
I_kwDODunzps6a91AU
7,241
`push_to_hub` overwrite argument
{ "login": "ceferisbarov", "id": 60838378, "node_id": "MDQ6VXNlcjYwODM4Mzc4", "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ceferisbarov", "html_url": "https://github.com/ceferisbarov", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! Do you mean deleting all the files ? or erasing the repository git history before push_to_hub ?", "Hi! I meant the latter.", "I don't think there is a `huggingface_hub` utility to erase the git history, cc @Wauplin maybe ?", "What is the goal exactly of deleting all the git history without deleting the repo? ", "You can use [`super_squash_commit`](https://huggingface.co/docs/huggingface_hub/main/en/package_reference/hf_api#huggingface_hub.HfApi.super_squash_history) to squash all the commits into a single one, hence deleting the git history. This is not exactly what you asked for since it squashes the commits for a specific revision (example: \"all commits on main\"). This means that if other branches exists, they are kept the same. Also if some PRs are already opened on the repo, they will become unmergeable since the commits will have diverted.", "So the solution is:\r\n\r\n```python\r\nfrom huggingface_hub import HfApi\r\nrepo_id = \"username/dataset_name\"\r\nds.push_to_hub(repo_id)\r\nHfApi().super_squash_commit(repo_id)\r\n```\r\n\r\nThis way you erase previous git history to end up with only 1 commit containing your dataset.\r\nStill, I'd be curious why it's important in your case. Is it to save storage space ? or to disallow loading old versions of the data ?", "Thanks, everyone! I am building a new dataset and playing around with column names, splits, etc. Sometimes I push to the hub to share it with other teammates, I don't want those variations to be part of the repo. 
Deleting the repo from the website takes a little time, but it also loses repo settings that I have set, since I always set it to public with manually approved requests.\r\n\r\nBTW, I had to write `HfApi().super_squash_history(repo_id, repo_type=\"dataset\")`, but otherwise it works.", "@ceferisbarov just to let you know, recreating a gated repo + granting access to your teammates is something that you can automate with something like this (not fully tested but should work):\r\n\r\n```py\r\nfrom huggingface_hub import HfApi\r\n\r\napi = HfApi()\r\napi.delete_repo(repo_id, repo_type=\"dataset\", missing_ok=True)\r\napi.create_repo(repo_id, repo_type=\"dataset\", private=False)\r\napi.update_repo_settings(repo_id, repo_type=\"dataset\", gated=\"manual\")\r\nfor user in [\"user1\", \"user2\"] # list of teammates\r\n api.grant_access(repo_id, user, repo_type=\"dataset\")\r\n```\r\n\r\nI think it'd be a better solution than squashing commits (which is more of a hack), typically if you are using the dataset viewer.", "This is great, @Wauplin. If we can achieve this with HfApi, then we probably don't need to add another parameter to push_to_hub. I am closing the issue." ]
2024-10-20T03:23:26
2024-10-24T17:39:08
2024-10-24T17:39:08
NONE
null
### Feature request Add an `overwrite` argument to the `push_to_hub` method. ### Motivation I want to overwrite a repo without deleting it on Hugging Face. Is this possible? I couldn't find anything in the documentation or tutorials. ### Your contribution I can create a PR.
{ "login": "ceferisbarov", "id": 60838378, "node_id": "MDQ6VXNlcjYwODM4Mzc4", "avatar_url": "https://avatars.githubusercontent.com/u/60838378?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ceferisbarov", "html_url": "https://github.com/ceferisbarov", "followers_url": "https://api.github.com/users/ceferisbarov/followers", "following_url": "https://api.github.com/users/ceferisbarov/following{/other_user}", "gists_url": "https://api.github.com/users/ceferisbarov/gists{/gist_id}", "starred_url": "https://api.github.com/users/ceferisbarov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ceferisbarov/subscriptions", "organizations_url": "https://api.github.com/users/ceferisbarov/orgs", "repos_url": "https://api.github.com/users/ceferisbarov/repos", "events_url": "https://api.github.com/users/ceferisbarov/events{/privacy}", "received_events_url": "https://api.github.com/users/ceferisbarov/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7241/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7240/comments
https://api.github.com/repos/huggingface/datasets/issues/7240/events
https://github.com/huggingface/datasets/pull/7240
2,598,980,027
PR_kwDODunzps5_KxSL
7,240
Feature Request: Add functionality to pass split types like train, test in DatasetDict.map
{ "login": "jp1924", "id": 93233241, "node_id": "U_kgDOBY6gWQ", "avatar_url": "https://avatars.githubusercontent.com/u/93233241?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jp1924", "html_url": "https://github.com/jp1924", "followers_url": "https://api.github.com/users/jp1924/followers", "following_url": "https://api.github.com/users/jp1924/following{/other_user}", "gists_url": "https://api.github.com/users/jp1924/gists{/gist_id}", "starred_url": "https://api.github.com/users/jp1924/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jp1924/subscriptions", "organizations_url": "https://api.github.com/users/jp1924/orgs", "repos_url": "https://api.github.com/users/jp1924/repos", "events_url": "https://api.github.com/users/jp1924/events{/privacy}", "received_events_url": "https://api.github.com/users/jp1924/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-19T09:59:12
2024-10-19T09:59:12
null
NONE
null
Hello datasets! We often encounter situations where we need to preprocess data differently depending on split types such as train, valid, and test. However, while DatasetDict.map has features to pass rank or index, there's no functionality to pass split types. Therefore, I propose adding a 'with_splits' parameter to DatasetDict, which would allow passing the split type through fn_kwargs.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7240/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7240", "html_url": "https://github.com/huggingface/datasets/pull/7240", "diff_url": "https://github.com/huggingface/datasets/pull/7240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7240.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7238/comments
https://api.github.com/repos/huggingface/datasets/issues/7238/events
https://github.com/huggingface/datasets/issues/7238
2,598,409,993
I_kwDODunzps6a4JcJ
7,238
incompatibility issue when using load_dataset with datasets==3.0.1
{ "login": "jupiterMJM", "id": 74985234, "node_id": "MDQ6VXNlcjc0OTg1MjM0", "avatar_url": "https://avatars.githubusercontent.com/u/74985234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jupiterMJM", "html_url": "https://github.com/jupiterMJM", "followers_url": "https://api.github.com/users/jupiterMJM/followers", "following_url": "https://api.github.com/users/jupiterMJM/following{/other_user}", "gists_url": "https://api.github.com/users/jupiterMJM/gists{/gist_id}", "starred_url": "https://api.github.com/users/jupiterMJM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jupiterMJM/subscriptions", "organizations_url": "https://api.github.com/users/jupiterMJM/orgs", "repos_url": "https://api.github.com/users/jupiterMJM/repos", "events_url": "https://api.github.com/users/jupiterMJM/events{/privacy}", "received_events_url": "https://api.github.com/users/jupiterMJM/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-18T21:25:23
2024-10-18T21:25:23
null
NONE
null
### Describe the bug There is a bug when using load_dataset with dataset version at 3.0.1 . Please see below in the "steps to reproduce the bug". To resolve the bug, I had to downgrade to version 2.21.0 OS: Ubuntu 24 (AWS instance) Python: same bug under 3.12 and 3.10 The error I had was: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/load.py", line 2096, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 924, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 1647, in _download_and_prepare super()._download_and_prepare( File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/builder.py", line 977, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/mozilla-foundation--common_voice_6_0/cb17afd34f5799f97e8f48398748f83006335b702bd785f9880797838d541b81/common_voice_6_0.py", line 159, in _split_generators archive_path = dl_manager.download(self._get_bundle_url(self.config.name, bundle_url_template)) File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_manager.py", line 150, in download download_config = self.download_config.copy() File "/home/ubuntu/miniconda3/envs/maxence_env/lib/python3.10/site-packages/datasets/download/download_config.py", line 73, in copy return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) TypeError: DownloadConfig.__init__() got an unexpected keyword argument 'ignore_url_params' ### Steps to reproduce the bug 1. install dataset with ```pip install datasets --upgrade``` 2. 
launch python; from datasets import load_dataset 3. run load_dataset("mozilla-foundation/common_voice_6_0") 4. exit python 5. uninstall datasets; then ```pip install datasets==2.21.0``` 6. launch python; from datasets import load_dataset 7. run load_dataset("mozilla-foundation/common_voice_6_0") 8. Everything runs great now ### Expected behavior Be able to download a dataset without error ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 3.0.1 - Platform: Linux-6.8.0-1017-aws-x86_64-with-glibc2.39 - Python version: 3.12.4 - `huggingface_hub` version: 0.26.0 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7238/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7236/comments
https://api.github.com/repos/huggingface/datasets/issues/7236/events
https://github.com/huggingface/datasets/pull/7236
2,597,358,525
PR_kwDODunzps5_GIvw
7,236
[MINOR:TYPO] Update arrow_dataset.py
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[]
2024-10-18T12:10:03
2024-10-24T15:06:43
2024-10-24T15:06:43
CONTRIBUTOR
null
Fix wrong link: the csv kwargs docstring link was pointing to the pandas json docs.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7236/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7236", "html_url": "https://github.com/huggingface/datasets/pull/7236", "diff_url": "https://github.com/huggingface/datasets/pull/7236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7236.patch", "merged_at": "2024-10-24T15:06:43" }
true
https://api.github.com/repos/huggingface/datasets/issues/7234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7234/comments
https://api.github.com/repos/huggingface/datasets/issues/7234/events
https://github.com/huggingface/datasets/pull/7234
2,594,220,624
PR_kwDODunzps5-8Z_-
7,234
No need for dataset_info
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7234). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "merging this one for now, let me know if you'd like to see additional changes for error handling (I'll take care of them before doing a release)" ]
2024-10-17T09:54:03
2024-10-22T12:30:40
2024-10-21T16:44:34
MEMBER
null
Avoid a useless call to /api/datasets/repo_id
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7234/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7234", "html_url": "https://github.com/huggingface/datasets/pull/7234", "diff_url": "https://github.com/huggingface/datasets/pull/7234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7234.patch", "merged_at": "2024-10-21T16:44:34" }
true
https://api.github.com/repos/huggingface/datasets/issues/7233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7233/comments
https://api.github.com/repos/huggingface/datasets/issues/7233/events
https://github.com/huggingface/datasets/issues/7233
2,593,903,113
I_kwDODunzps6am9IJ
7,233
Dataset example count issue
{ "login": "want-well", "id": 180297268, "node_id": "U_kgDOCr8eNA", "avatar_url": "https://avatars.githubusercontent.com/u/180297268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/want-well", "html_url": "https://github.com/want-well", "followers_url": "https://api.github.com/users/want-well/followers", "following_url": "https://api.github.com/users/want-well/following{/other_user}", "gists_url": "https://api.github.com/users/want-well/gists{/gist_id}", "starred_url": "https://api.github.com/users/want-well/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/want-well/subscriptions", "organizations_url": "https://api.github.com/users/want-well/orgs", "repos_url": "https://api.github.com/users/want-well/repos", "events_url": "https://api.github.com/users/want-well/events{/privacy}", "received_events_url": "https://api.github.com/users/want-well/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-17T07:41:44
2024-10-17T07:41:44
null
NONE
null
### Describe the bug I am fine-tuning a large model here. When the dataset contains 718 examples, fine-tuning works fine, but as soon as I add one more example (whether a copy of one of the first 718 or a brand-new one) an error is raised. ### Steps to reproduce the bug 1. The last two examples of the dataset that fine-tunes successfully are: { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?" }, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } { "messages": [ { "role": "user", "content": "为什么闭环带宽要求会影响串联超前校正的效果?" }, { "role": "assistant", "content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。" } ] } 2. When I append a copy of the second-to-last example, the last three examples of my dataset become { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?" }, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } { "messages": [ { "role": "user", "content": "为什么闭环带宽要求会影响串联超前校正的效果?" }, { "role": "assistant", "content": "如果待校正系统不稳定,为了满足相角裕度的要求,超前网络需要提供很大的相角超前量。这会导致超前网络的a值必须选得很大,从而使已校正系统的带宽过大。带宽过大会使通过系统的高频噪声电平变高,可能导致系统失控。因此,闭环带宽的要求会影响串联超前校正的效果。" } ] } { "messages": [ { "role": "user", "content": "完成校正装置设计后需要进行哪些工作?" 
}, { "role": "assistant", "content": "一旦完成校正装置设计后,需要进行系统实际调校工作,或者进行MATLAB仿真以检查系统的时间响应特性。这时,需将系统建模时省略的部分尽可能加入系统,以保证仿真结果的逼真度。如果由于系统各种固有非线性因素影响,或者由于系统噪声和负载效应等因素的影响,使已校正系统不能满足全部性能指标要求,则需要适当调整校正装置的形式或参数,直到已校正系统满足全部性能指标为止。" } ] } 这时系统会显示bug: root@autodl-container-027f4cad3d-6baf4e64:~/autodl-tmp# python GLM-4/finetune_demo/finetune.py datasets/ ZhipuAI/glm-4-9b-chat GLM-4/finetune_demo/configs/lora.yaml Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:02<00:00, 4.04it/s] The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable. trainable params: 2,785,280 || all params: 9,402,736,640 || trainable%: 0.0296 Generating train split: 0 examples [00:00, ? examples/s]Failed to load JSON from file '/root/autodl-tmp/datasets/train.jsonl' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 718 Generating train split: 0 examples [00:00, ? 
examples/s] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:153 in _generate_tables │ │ │ │ 150 │ │ │ │ │ │ │ │ with open( │ │ 151 │ │ │ │ │ │ │ │ │ file, encoding=self.config.encoding, errors=self.con │ │ 152 │ │ │ │ │ │ │ │ ) as f: │ │ ❱ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │ │ 154 │ │ │ │ │ │ │ except ValueError: │ │ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │ │ 156 │ │ │ │ │ │ │ │ raise e │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:815 in read_json │ │ │ │ 812 │ if chunksize: │ │ 813 │ │ return json_reader │ │ 814 │ else: │ │ ❱ 815 │ │ return json_reader.read() │ │ 816 │ │ 817 │ │ 818 class JsonReader(abc.Iterator, Generic[FrameSeriesStrT]): │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1025 in read │ │ │ │ 1022 │ │ │ │ │ │ data_lines = data.split("\n") │ │ 1023 │ │ │ │ │ │ obj = self._get_object_parser(self._combine_lines(data_lines)) │ │ 1024 │ │ │ │ else: │ │ ❱ 1025 │ │ │ │ │ obj = self._get_object_parser(self.data) │ │ 1026 │ │ │ │ if self.dtype_backend is not lib.no_default: │ │ 1027 │ │ │ │ │ return obj.convert_dtypes( │ │ 1028 │ │ │ │ │ │ infer_objects=False, dtype_backend=self.dtype_backend │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1051 in _get_object_parser │ │ │ │ 1048 │ │ } │ │ 1049 │ │ obj = None │ │ 1050 │ │ if typ == "frame": │ │ ❱ 1051 │ │ │ obj = FrameParser(json, **kwargs).parse() │ │ 1052 │ │ │ │ 1053 │ │ if typ == "series" or obj is None: │ │ 1054 │ │ │ if not isinstance(dtype, bool): │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1187 in parse │ │ │ │ 1184 │ │ │ 
1185 │ @final │ │ 1186 │ def parse(self): │ │ ❱ 1187 │ │ self._parse() │ │ 1188 │ │ │ │ 1189 │ │ if self.obj is None: │ │ 1190 │ │ │ return None │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/pandas/io/json/_json.py:1403 in _parse │ │ │ │ 1400 │ │ │ │ 1401 │ │ if orient == "columns": │ │ 1402 │ │ │ self.obj = DataFrame( │ │ ❱ 1403 │ │ │ │ ujson_loads(json, precise_float=self.precise_float), dtype=None │ │ 1404 │ │ │ ) │ │ 1405 │ │ elif orient == "split": │ │ 1406 │ │ │ decoded = { │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ValueError: Trailing data During handling of the above exception, another exception occurred: ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1997 in _prepare_split_single │ │ │ │ 1994 │ │ │ ) │ │ 1995 │ │ │ try: │ │ 1996 │ │ │ │ _time = time.time() │ │ ❱ 1997 │ │ │ │ for _, table in generator: │ │ 1998 │ │ │ │ │ if max_shard_size is not None and writer._num_bytes > max_shard_size │ │ 1999 │ │ │ │ │ │ num_examples, num_bytes = writer.finalize() │ │ 2000 │ │ │ │ │ │ writer.close() │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:156 in _generate_tables │ │ │ │ 153 │ │ │ │ │ │ │ │ │ df = pd.read_json(f, dtype_backend="pyarrow") │ │ 154 │ │ │ │ │ │ │ except ValueError: │ │ 155 │ │ │ │ │ │ │ │ logger.error(f"Failed to load JSON from file '{file}' wi │ │ ❱ 156 │ │ │ │ │ │ │ │ raise e │ │ 157 │ │ │ │ │ │ │ if df.columns.tolist() == [0]: │ │ 158 │ │ │ │ │ │ │ │ df.columns = list(self.config.features) if self.config.f │ │ 159 │ │ │ │ │ │ │ 
try: │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/packaged_modules/json/json.py:130 in _generate_tables │ │ │ │ 127 │ │ │ │ │ │ try: │ │ 128 │ │ │ │ │ │ │ while True: │ │ 129 │ │ │ │ │ │ │ │ try: │ │ ❱ 130 │ │ │ │ │ │ │ │ │ pa_table = paj.read_json( │ │ 131 │ │ │ │ │ │ │ │ │ │ io.BytesIO(batch), read_options=paj.ReadOptions( │ │ 132 │ │ │ │ │ │ │ │ │ ) │ │ 133 │ │ │ │ │ │ │ │ │ break │ │ │ │ in pyarrow._json.read_json:308 │ │ │ │ in pyarrow.lib.pyarrow_internal_check_status:154 │ │ │ │ in pyarrow.lib.check_status:91 │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ArrowInvalid: JSON parse error: Missing a name for object member. in row 718 The above exception was the direct cause of the following exception: ╭──────────────────────────────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:406 in main │ │ │ │ 403 ): │ │ 404 │ ft_config = FinetuningConfig.from_file(config_file) │ │ 405 │ tokenizer, model = load_tokenizer_and_model(model_dir, peft_config=ft_config.peft_co │ │ ❱ 406 │ data_manager = DataManager(data_dir, ft_config.data_config) │ │ 407 │ │ │ 408 │ train_dataset = data_manager.get_dataset( │ │ 409 │ │ Split.TRAIN, │ │ │ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:204 in __init__ │ │ │ │ 201 │ def __init__(self, data_dir: str, data_config: DataConfig): │ │ 202 │ │ self._num_proc = data_config.num_proc │ │ 203 │ │ │ │ ❱ 204 │ │ self._dataset_dct = _load_datasets( │ │ 205 │ │ │ data_dir, │ │ 206 │ │ │ data_config.data_format, │ │ 207 │ │ │ data_config.data_files, │ │ │ │ /root/autodl-tmp/GLM-4/finetune_demo/finetune.py:189 in 
_load_datasets │ │ │ │ 186 │ │ num_proc: Optional[int], │ │ 187 ) -> DatasetDict: │ │ 188 │ if data_format == '.jsonl': │ │ ❱ 189 │ │ dataset_dct = load_dataset( │ │ 190 │ │ │ data_dir, │ │ 191 │ │ │ data_files=data_files, │ │ 192 │ │ │ split=None, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/load.py:2616 in load_dataset │ │ │ │ 2613 │ │ return builder_instance.as_streaming_dataset(split=split) │ │ 2614 │ │ │ 2615 │ # Download and prepare data │ │ ❱ 2616 │ builder_instance.download_and_prepare( │ │ 2617 │ │ download_config=download_config, │ │ 2618 │ │ download_mode=download_mode, │ │ 2619 │ │ verification_mode=verification_mode, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1029 in download_and_prepare │ │ │ │ 1026 │ │ │ │ │ │ │ prepare_split_kwargs["max_shard_size"] = max_shard_size │ │ 1027 │ │ │ │ │ │ if num_proc is not None: │ │ 1028 │ │ │ │ │ │ │ prepare_split_kwargs["num_proc"] = num_proc │ │ ❱ 1029 │ │ │ │ │ │ self._download_and_prepare( │ │ 1030 │ │ │ │ │ │ │ dl_manager=dl_manager, │ │ 1031 │ │ │ │ │ │ │ verification_mode=verification_mode, │ │ 1032 │ │ │ │ │ │ │ **prepare_split_kwargs, │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1124 in _download_and_prepare │ │ │ │ 1121 │ │ │ │ │ 1122 │ │ │ try: │ │ 1123 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 1124 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 1125 │ │ │ except OSError as e: │ │ 1126 │ │ │ │ raise OSError( │ │ 1127 │ │ │ │ │ "Cannot find data file. 
" │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:1884 in _prepare_split │ │ │ │ 1881 │ │ │ gen_kwargs = split_generator.gen_kwargs │ │ 1882 │ │ │ job_id = 0 │ │ 1883 │ │ │ with pbar: │ │ ❱ 1884 │ │ │ │ for job_id, done, content in self._prepare_split_single( │ │ 1885 │ │ │ │ │ gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args │ │ 1886 │ │ │ │ ): │ │ 1887 │ │ │ │ │ if done: │ │ │ │ /root/miniconda3/lib/python3.10/site-packages/datasets/builder.py:2040 in _prepare_split_single │ │ │ │ 2037 │ │ │ │ e = e.__context__ │ │ 2038 │ │ │ if isinstance(e, DatasetGenerationError): │ │ 2039 │ │ │ │ raise │ │ ❱ 2040 │ │ │ raise DatasetGenerationError("An error occurred while generating the dataset │ │ 2041 │ │ │ │ 2042 │ │ yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_ │ │ 2043 │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ DatasetGenerationError: An error occurred while generating the dataset 3.请问是否可以帮我解决 ### Expected behavior 希望问题可以得到解决 ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.20.0 - Platform: Linux-4.19.90-2107.6.0.0192.8.oe1.bclinux.x86_64-x86_64-with-glibc2.35 - Python version: 3.10.8 - `huggingface_hub` version: 0.24.6 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2023.12.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7233/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7232/comments
https://api.github.com/repos/huggingface/datasets/issues/7232/events
https://github.com/huggingface/datasets/pull/7232
2,593,720,548
PR_kwDODunzps5-62rY
7,232
(Super tiny doc update) Mention to_polars
{ "login": "fzyzcjy", "id": 5236035, "node_id": "MDQ6VXNlcjUyMzYwMzU=", "avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fzyzcjy", "html_url": "https://github.com/fzyzcjy", "followers_url": "https://api.github.com/users/fzyzcjy/followers", "following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}", "gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}", "starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions", "organizations_url": "https://api.github.com/users/fzyzcjy/orgs", "repos_url": "https://api.github.com/users/fzyzcjy/repos", "events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}", "received_events_url": "https://api.github.com/users/fzyzcjy/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "You are welcome!" ]
2024-10-17T06:08:53
2024-10-24T23:11:05
2024-10-24T15:06:16
CONTRIBUTOR
null
Polars is also quite popular now, so this tiny update tells users that Polars is supported.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7232/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7232", "html_url": "https://github.com/huggingface/datasets/pull/7232", "diff_url": "https://github.com/huggingface/datasets/pull/7232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7232.patch", "merged_at": "2024-10-24T15:06:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/7231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7231/comments
https://api.github.com/repos/huggingface/datasets/issues/7231/events
https://github.com/huggingface/datasets/pull/7231
2,592,011,737
PR_kwDODunzps5-1jP8
7,231
Fix typo in image dataset docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7231). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-16T14:05:46
2024-10-16T17:06:21
2024-10-16T17:06:19
MEMBER
null
Fix typo in image dataset docs. Typo reported by @datavistics.
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7231/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7231", "html_url": "https://github.com/huggingface/datasets/pull/7231", "diff_url": "https://github.com/huggingface/datasets/pull/7231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7231.patch", "merged_at": "2024-10-16T17:06:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/7230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7230/comments
https://api.github.com/repos/huggingface/datasets/issues/7230/events
https://github.com/huggingface/datasets/pull/7230
2,589,531,942
PR_kwDODunzps5-ttUV
7,230
Video support
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7230). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-15T18:17:29
2024-10-24T16:39:51
2024-10-24T16:39:50
MEMBER
null
(wip and experimental) adding the `Video` type based on `VideoReader` from `decord` ```python >>>from datasets import load_dataset >>> ds = load_dataset("path/to/videos", split="train").with_format("torch") >>> print(ds[0]["video"]) <decord.video_reader.VideoReader object at 0x337a47910> >>> print(ds[0]["video"][0]) tensor([[[73, 73, 73], [73, 73, 73], [73, 73, 73], ..., [23, 23, 23], [23, 23, 23], [23, 23, 23]]], dtype=torch.uint8) ``` the storage is the same as for audio and images: `{"path": pa.string(), "bytes": pa.binary()}` and I made a small change to keep the hf:// URL in the "path" field if possible; this way the viewer can link to files on the hub if possible
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7230/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7230", "html_url": "https://github.com/huggingface/datasets/pull/7230", "diff_url": "https://github.com/huggingface/datasets/pull/7230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7230.patch", "merged_at": "2024-10-24T16:39:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/7229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7229/comments
https://api.github.com/repos/huggingface/datasets/issues/7229/events
https://github.com/huggingface/datasets/pull/7229
2,588,847,398
PR_kwDODunzps5-rgrx
7,229
handle config_name=None in push_to_hub
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "not sure it's a good idea, we always need a config name so better have the correct default and not support None (which could lead to think it doesn't have a config name, while it does)" ]
2024-10-15T13:48:57
2024-10-24T17:51:52
2024-10-24T17:51:52
CONTRIBUTOR
null
This caught me out - thought it might be better to explicitly handle None?
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7229/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7229", "html_url": "https://github.com/huggingface/datasets/pull/7229", "diff_url": "https://github.com/huggingface/datasets/pull/7229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7229.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7228/comments
https://api.github.com/repos/huggingface/datasets/issues/7228/events
https://github.com/huggingface/datasets/issues/7228
2,587,310,094
I_kwDODunzps6aNzgO
7,228
Composite (multi-column) features
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-10-14T23:59:19
2024-10-15T11:17:15
null
CONTRIBUTOR
null
### Feature request Structured data types (graphs etc.) might often be most efficiently stored as multiple columns, which then need to be combined during feature decoding Although it is currently possible to nest features as structs, my impression is that in particular when dealing with e.g. a feature composed of multiple numpy array / ArrayXD's, it would be more efficient to store each ArrayXD as a separate column (though I'm not sure by how much) Perhaps specification / implementation could be supported by something like: ``` features=Features(**{("feature0", "feature1")=Features(feature0=Array2D((None,10), dtype="float32"), feature1=Array2D((None,10), dtype="float32")) ``` ### Motivation Defining efficient composite feature types based on numpy arrays for representing data such as graphs with multiple node and edge attributes is currently challenging. ### Your contribution Possibly able to contribute
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7228/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7228/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7227/comments
https://api.github.com/repos/huggingface/datasets/issues/7227/events
https://github.com/huggingface/datasets/pull/7227
2,587,048,312
PR_kwDODunzps5-l-OE
7,227
fast array extraction
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7227). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I've updated the most straightforward failing test cases - lmk if you agree with those.\r\n\r\nMight need some help / pointers on the remaining new failing tests, which seem a little bit more subtle.", "@lhoestq I've had a go at fixing a few more test cases but getting quite uncertain about the remaining ones (as well as about some of the array writing ones that I tried to fix in my last commit). There are still 27 failures vs 21 on main. I'm not completely sure in some cases what intended behaviour is and my understanding of the flow for typed writing is a bit vague." ]
2024-10-14T20:51:32
2024-11-08T15:29:01
null
CONTRIBUTOR
null
Implements #7210 using method suggested in https://github.com/huggingface/datasets/pull/7207#issuecomment-2411789307 ```python import numpy as np from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features) ``` ~0.02 s vs 0.9s on main ```python ds = dataset.to_iterable_dataset() t0 = time.time() for ex in ds: pass t1 = time.time() ``` < 0.01 s vs 1.3 s on main @lhoestq I can see this breaks a bunch of array-related tests but can update the test cases if you would support making this change? I also added an Array1D feature which will always be decoded into a numpy array and likewise improves extraction performance: ```python from datasets import Dataset, Features, Array1D, Sequence, Value array_features=Features(**{"array0": Array1D((None,), dtype="float32"), "array1": Array1D((None,), dtype="float32")}) sequence_features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)}) array_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=array_features) sequence_dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [20000,10000]*25] for i in range(2)}, features=sequence_features) ```python t0 = time.time() for ex in array_dataset.to_iterable_dataset(): pass t1 = time.time() ``` < 0.01 s ```python t0 = time.time() for ex in sequence_dataset.to_iterable_dataset(): pass t1 = time.time() ``` ~1.1s And also added support for extracting structs of arrays as dicts of numpy arrays: ```python import numpy as np from datasets import Dataset, Features, Array3D, Sequence features=Features(struct={"array0": Array3D((None,10,10), dtype="float32"), 
"array1": Array3D((None,10,10), dtype="float32")}, _list=Sequence(feature=Array3D((None,10,10), dtype="float32"))) dataset = Dataset.from_dict({"struct": [{f"array{i}": np.zeros((x,10,10), dtype=np.float32) for i in range(2)} for x in [2000,1000]*25], "_list": [[np.zeros((x,10,10), dtype=np.float32) for i in range(2)] for x in [2000,1000]*25]}, features=features) ``` ```python t0 = time.time() for ex in dataset.to_iterable_dataset(): pass t1 = time.time() assert isinstance(ex["struct"]["array0"], np.ndarray) and ex["struct"]["array0"].ndim == 3 ``` ~0.02 s and no exception vs ~7s with an exception on main
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7227/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7227/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7227", "html_url": "https://github.com/huggingface/datasets/pull/7227", "diff_url": "https://github.com/huggingface/datasets/pull/7227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7227.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7226/comments
https://api.github.com/repos/huggingface/datasets/issues/7226/events
https://github.com/huggingface/datasets/issues/7226
2,586,920,351
I_kwDODunzps6aMUWf
7,226
Add R as a How to use from the Polars (R) Library as an option
{ "login": "ran-codes", "id": 45013044, "node_id": "MDQ6VXNlcjQ1MDEzMDQ0", "avatar_url": "https://avatars.githubusercontent.com/u/45013044?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ran-codes", "html_url": "https://github.com/ran-codes", "followers_url": "https://api.github.com/users/ran-codes/followers", "following_url": "https://api.github.com/users/ran-codes/following{/other_user}", "gists_url": "https://api.github.com/users/ran-codes/gists{/gist_id}", "starred_url": "https://api.github.com/users/ran-codes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ran-codes/subscriptions", "organizations_url": "https://api.github.com/users/ran-codes/orgs", "repos_url": "https://api.github.com/users/ran-codes/repos", "events_url": "https://api.github.com/users/ran-codes/events{/privacy}", "received_events_url": "https://api.github.com/users/ran-codes/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-10-14T19:56:07
2024-10-14T19:57:13
null
NONE
null
### Feature request The boiler plate code to access a dataset via the hugging face file system is very useful. Please add ## Add Polars (R) option The equivalent code works, because the [Polars-R](https://github.com/pola-rs/r-polars) wrapper has hugging faces functionality as well. ```r library(polars) df <- pl$read_parquet("hf://datasets/SALURBAL/core__admin_cube_public/core__admin_cube_public.parquet") ``` ## Polars (python) option ![image](https://github.com/user-attachments/assets/8f1bcd19-e578-4b18-b324-7cc00b80ac0a) ## Libraries Currently ![image](https://github.com/user-attachments/assets/0cf50063-f9db-443c-97b4-3ef0664b6e6e) ### Motivation There are many data/analysis/research/statistics teams (particularly in academia and pharma) that use R as the default language. R has great integration with most of the newer data techs (arrow, parquet, polars) and having this included could really help in bringing this community into the hugging faces ecosystem. **This is a small/low-hanging-fruit front end change but would make a big impact expanding the community** ### Your contribution I am not sure which repository this should be in, but I have experience in R, Python and JS and happy to submit a PR in the appropriate repository.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7226/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7225/comments
https://api.github.com/repos/huggingface/datasets/issues/7225/events
https://github.com/huggingface/datasets/issues/7225
2,586,229,216
I_kwDODunzps6aJrng
7,225
Huggingface GIT returns null as Content-Type instead of application/x-git-receive-pack-result
{ "login": "padmalcom", "id": 3961950, "node_id": "MDQ6VXNlcjM5NjE5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/padmalcom", "html_url": "https://github.com/padmalcom", "followers_url": "https://api.github.com/users/padmalcom/followers", "following_url": "https://api.github.com/users/padmalcom/following{/other_user}", "gists_url": "https://api.github.com/users/padmalcom/gists{/gist_id}", "starred_url": "https://api.github.com/users/padmalcom/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/padmalcom/subscriptions", "organizations_url": "https://api.github.com/users/padmalcom/orgs", "repos_url": "https://api.github.com/users/padmalcom/repos", "events_url": "https://api.github.com/users/padmalcom/events{/privacy}", "received_events_url": "https://api.github.com/users/padmalcom/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-14T14:33:06
2024-10-14T14:33:06
null
NONE
null
### Describe the bug We push changes to our datasets programmatically. Our git client jGit reports that the hf git server returns null as Content-Type after a push. ### Steps to reproduce the bug A basic kotlin application: ``` val person = PersonIdent( "padmalcom", "padmalcom@sth.com" ) val cp = UsernamePasswordCredentialsProvider( "padmalcom", "mysecrettoken" ) val git = KGit.cloneRepository { setURI("https://huggingface.co/datasets/sth/images") setTimeout(60) setProgressMonitor(TextProgressMonitor()) setCredentialsProvider(cp) } FileOutputStream("./images/images.csv").apply { writeCsv(images) } git.add { addFilepattern("images.csv") } for (i in images) { FileUtils.copyFile( File("./files/${i.id}"), File("./images/${i.id + File(i.fileName).extension }") ) git.add { addFilepattern("${i.id + File(i.fileName).extension }") } } val revCommit = git.commit { author = person message = "Uploading images at " + LocalDateTime.now() .format(DateTimeFormatter.ISO_DATE_TIME) setCredentialsProvider(cp) } val push = git.push { setCredentialsProvider(cp) } ``` ### Expected behavior The git server is expected to return the Content-Type _application/x-git-receive-pack-result_. ### Environment info It is independent from the datasets library.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7225/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7224/comments
https://api.github.com/repos/huggingface/datasets/issues/7224/events
https://github.com/huggingface/datasets/pull/7224
2,583,233,980
PR_kwDODunzps5-bLNR
7,224
fallback to default feature casting in case custom features not available during dataset loading
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-12T16:13:56
2024-10-12T16:13:56
null
CONTRIBUTOR
null
a fix for #7223 in case datasets is happy to support this kind of extensibility! seems cool / powerful for allowing sharing of datasets with potentially different feature types
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7224/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7224", "html_url": "https://github.com/huggingface/datasets/pull/7224", "diff_url": "https://github.com/huggingface/datasets/pull/7224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7224.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7223/comments
https://api.github.com/repos/huggingface/datasets/issues/7223/events
https://github.com/huggingface/datasets/issues/7223
2,583,231,590
I_kwDODunzps6Z-Pxm
7,223
Fallback to arrow defaults when loading dataset with custom features that aren't registered locally
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-12T16:08:20
2024-10-12T16:08:20
null
CONTRIBUTOR
null
### Describe the bug Datasets allows users to create and register custom features. However if datasets are then pushed to the hub, this means that anyone calling load_dataset without registering the custom Features in the same way as the dataset creator will get an error message. It would be nice to offer a fallback in this case. ### Steps to reproduce the bug ```python load_dataset("alex-hh/custom-features-example") ``` (Dataset creation process - must be run in separate session so that NewFeature isn't registered in session in which download is attempted:) ```python from dataclasses import dataclass, field import pyarrow as pa from datasets.features.features import register_feature from datasets import Dataset, Features, Value, load_dataset from datasets import Feature @dataclass class NewFeature(Feature): _type: str = field(default="NewFeature", init=False, repr=False) def __call__(self): return pa.int32() def examples_generator(): for i in range(5): yield {"feature": i} ds = Dataset.from_generator(examples_generator, features=Features(feature=NewFeature())) ds.push_to_hub("alex-hh/custom-features-example") register_feature(NewFeature, "NewFeature") ``` ### Expected behavior It would be nice, and offer greater extensibility, if there was some kind of graceful fallback mechanism in place for cases where user-defined features are stored in the dataset but not available locally. ### Environment info 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7223/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7222/comments
https://api.github.com/repos/huggingface/datasets/issues/7222/events
https://github.com/huggingface/datasets/issues/7222
2,582,678,033
I_kwDODunzps6Z8IoR
7,222
TypeError: Couldn't cast array of type string to null in long json
{ "login": "nokados", "id": 5142577, "node_id": "MDQ6VXNlcjUxNDI1Nzc=", "avatar_url": "https://avatars.githubusercontent.com/u/5142577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nokados", "html_url": "https://github.com/nokados", "followers_url": "https://api.github.com/users/nokados/followers", "following_url": "https://api.github.com/users/nokados/following{/other_user}", "gists_url": "https://api.github.com/users/nokados/gists{/gist_id}", "starred_url": "https://api.github.com/users/nokados/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nokados/subscriptions", "organizations_url": "https://api.github.com/users/nokados/orgs", "repos_url": "https://api.github.com/users/nokados/repos", "events_url": "https://api.github.com/users/nokados/events{/privacy}", "received_events_url": "https://api.github.com/users/nokados/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-12T08:14:59
2024-10-12T08:14:59
null
NONE
null
### Describe the bug In general, changing the type from string to null is allowed within a dataset — there are even examples of this in the documentation. However, if the dataset is large and unevenly distributed, this allowance stops working. The schema gets locked in after reading a chunk. Consequently, if all values in the first chunk of a field are, for example, null, the field will be locked as type null, and if a string appears in that field in the second chunk, it will trigger this error: <details> <summary>Traceback </summary> ``` TypeError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1868 try: -> 1869 writer.write_table(table) 1870 except CastError as cast_error: 14 frames [/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size) 579 pa_table = pa_table.combine_chunks() --> 580 pa_table = table_cast(pa_table, self._schema) 581 if self.embed_local_files: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema) 2291 if table.schema != schema: -> 2292 return cast_table_to_schema(table, schema) 2293 elif table.schema.metadata != schema.metadata: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema) 2244 ) -> 2245 arrays = [ 2246 cast_array_to_feature( [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0) 2245 arrays = [ -> 2246 cast_array_to_feature( 2247 table[name] if name in table_column_names else pa.array([None] * len(table), type=schema.field(name).type), [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs) 1794 if isinstance(array, pa.ChunkedArray): -> 1795 return 
pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1796 else: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in <listcomp>(.0) 1794 if isinstance(array, pa.ChunkedArray): -> 1795 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1796 else: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_array_to_feature(array, feature, allow_primitive_to_str, allow_decimal_to_str) 2101 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 2102 return array_cast( 2103 array, [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in wrapper(array, *args, **kwargs) 1796 else: -> 1797 return func(array, *args, **kwargs) 1798 [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in array_cast(array, pa_type, allow_primitive_to_str, allow_decimal_to_str) 1947 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): -> 1948 raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}") 1949 return array.cast(pa_type) TypeError: Couldn't cast array of type string to null The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [<ipython-input-353-e02f83980611>](https://localhost:8080/#) in <cell line: 1>() ----> 1 dd = load_dataset("json", data_files=["TEST.json"]) [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, keep_in_memory, save_infos, revision, token, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs) 2094 2095 # Download and prepare data -> 2096 builder_instance.download_and_prepare( 2097 download_config=download_config, 2098 download_mode=download_mode, 
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, dl_manager, base_path, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 922 if num_proc is not None: 923 prepare_split_kwargs["num_proc"] = num_proc --> 924 self._download_and_prepare( 925 dl_manager=dl_manager, 926 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 997 try: 998 # Prepare split will record examples associated to the split --> 999 self._prepare_split(split_generator, **prepare_split_kwargs) 1000 except OSError as e: 1001 raise OSError( [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1738 job_id = 0 1739 with pbar: -> 1740 for job_id, done, content in self._prepare_split_single( 1741 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1742 ): [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1894 if isinstance(e, DatasetGenerationError): 1895 raise -> 1896 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1897 1898 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` </details> ### Steps to reproduce the bug ```python import json from datasets import load_dataset with open("TEST.json", "w") as f: row = {"ballast": "qwerty" * 1000, "b": None} row_str = json.dumps(row) + "\n" line_size = len(row_str) chunk_size = 10 << 20 lines_in_chunk = chunk_size // line_size + 1 
print(f"Writing {lines_in_chunk} lines") for i in range(lines_in_chunk): f.write(row_str) null_row = {"ballast": "Gotcha", "b": "Not Null"} f.write(json.dumps(null_row) + "\n") load_dataset("json", data_files=["TEST.json"]) ``` ### Expected behavior Concatenation of the chunks without errors ### Environment info - `datasets` version: 3.0.1 - Platform: Linux-6.1.85+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.24.7 - PyArrow version: 16.1.0 - Pandas version: 2.2.2 - `fsspec` version: 2024.6.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7222/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7221/comments
https://api.github.com/repos/huggingface/datasets/issues/7221/events
https://github.com/huggingface/datasets/pull/7221
2,582,114,631
PR_kwDODunzps5-YNI8
7,221
add CustomFeature base class to support user-defined features with encoding/decoding logic
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "@lhoestq would you be open to supporting this kind of extensibility?", "I suggested a fix in https://github.com/huggingface/datasets/issues/7220 that would not necessarily require a parent class for custom features, lmk what you think" ]
2024-10-11T20:10:27
2024-11-05T15:16:56
null
CONTRIBUTOR
null
intended as fix for #7220 if this kind of extensibility is something that datasets is willing to support! ```python from datasets.features.features import CustomFeature class ListOfStrs(CustomFeature): requires_encoding = True def _encode_example(self, value): if isinstance(value, str): return [str] else: return value feats = Features(strlist=ListOfStrs()) feats.encode_example({"strlist": "a"})["strlist"] == feats["strlist"].encode_example("a") ```
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7221/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7221", "html_url": "https://github.com/huggingface/datasets/pull/7221", "diff_url": "https://github.com/huggingface/datasets/pull/7221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7221.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7220/comments
https://api.github.com/repos/huggingface/datasets/issues/7220/events
https://github.com/huggingface/datasets/issues/7220
2,582,036,110
I_kwDODunzps6Z5r6O
7,220
Custom features not compatible with special encoding/decoding logic
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I think you can fix this simply by replacing the line with hardcoded features with `hastattr(schema, \"encode_example\")` actually", "#7284 " ]
2024-10-11T19:20:11
2024-11-08T15:10:58
null
CONTRIBUTOR
null
### Describe the bug It is possible to register custom features using datasets.features.features.register_feature (https://github.com/huggingface/datasets/pull/6727) However such features are not compatible with Features.encode_example/decode_example if they require special encoding / decoding logic because encode_nested_example / decode_nested_example checks whether the feature is in a fixed list of encodable types: https://github.com/huggingface/datasets/blob/16a121d7821a7691815a966270f577e2c503473f/src/datasets/features/features.py#L1349 This prevents the extensibility of features to complex cases ### Steps to reproduce the bug ```python class ListOfStrs: def encode_example(self, value): if isinstance(value, str): return [str] else: return value feats = Features(strlist=ListOfStrs()) assert feats.encode_example({"strlist": "a"})["strlist"] = feats["strlist"].encode_example("a")} ``` ### Expected behavior Registered feature types should be encoded based on some property of the feature (e.g. requires_encoding)? ### Environment info 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7220/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7219/comments
https://api.github.com/repos/huggingface/datasets/issues/7219/events
https://github.com/huggingface/datasets/pull/7219
2,581,708,084
PR_kwDODunzps5-XAQF
7,219
bump fsspec
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7219). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-11T15:56:36
2024-10-14T08:21:56
2024-10-14T08:21:55
MEMBER
null
null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7219/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7219", "html_url": "https://github.com/huggingface/datasets/pull/7219", "diff_url": "https://github.com/huggingface/datasets/pull/7219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7219.patch", "merged_at": "2024-10-14T08:21:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/7217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7217/comments
https://api.github.com/repos/huggingface/datasets/issues/7217/events
https://github.com/huggingface/datasets/issues/7217
2,581,095,098
I_kwDODunzps6Z2GK6
7,217
ds.map(f, num_proc=10) is slower than df.apply
{ "login": "lanlanlanlanlanlan365", "id": 178981231, "node_id": "U_kgDOCqsJbw", "avatar_url": "https://avatars.githubusercontent.com/u/178981231?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lanlanlanlanlanlan365", "html_url": "https://github.com/lanlanlanlanlanlan365", "followers_url": "https://api.github.com/users/lanlanlanlanlanlan365/followers", "following_url": "https://api.github.com/users/lanlanlanlanlanlan365/following{/other_user}", "gists_url": "https://api.github.com/users/lanlanlanlanlanlan365/gists{/gist_id}", "starred_url": "https://api.github.com/users/lanlanlanlanlanlan365/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lanlanlanlanlanlan365/subscriptions", "organizations_url": "https://api.github.com/users/lanlanlanlanlanlan365/orgs", "repos_url": "https://api.github.com/users/lanlanlanlanlanlan365/repos", "events_url": "https://api.github.com/users/lanlanlanlanlanlan365/events{/privacy}", "received_events_url": "https://api.github.com/users/lanlanlanlanlanlan365/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! `map()` reads all the columns and writes the resulting dataset with all the columns as well, while df.column_name.apply only reads and writes one column and does it in memory. So this is speed difference is actually expected.\r\n\r\nMoreover using multiprocessing on a dataset that lives in memory (from_pandas uses the same in-memory data as the pandas DataFrame while load_dataset or from_generator load from disk) requires to copy the data to each subprocess which can also be slow. Data loaded from disk don't need to be copied though since they work as a form of shared memory thanks to memory mapping.\r\n\r\nHowever you can make you map() call much faster by making it read and write only the column you want:\r\n\r\n```python\r\nhas_cover_ds = ds.map(lambda song_name: {'has_cover': has_cover(song_name)}, input_columns=[\"song_name\"], remove_columns=ds.column_names) # outputs a dataset with 1 column\r\nds = ds.concatenate_datasets([ds, has_cover_ds], axis=1)\r\n```\r\n\r\nand if your dataset is loaded from disk you can pass num_proc=10 and get a nice speed up as well (no need to copy the data to subprocesses)" ]
2024-10-11T11:04:05
2024-10-22T16:05:58
null
NONE
null
### Describe the bug pandas columns: song_id, song_name ds = Dataset.from_pandas(df) def has_cover(song_name): if song_name is None or pd.isna(song_name): return False return 'cover' in song_name.lower() df['has_cover'] = df.song_name.progress_apply(has_cover) ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10) time cost: 1. df.apply: 100%|██████████| 12500592/12500592 [00:13<00:00, 959825.47it/s] 2. ds.map: Map (num_proc=10):  31%  3899028/12500592 [00:28<00:38, 222532.89 examples/s] ### Steps to reproduce the bug pandas columns: song_id, song_name ds = Dataset.from_pandas(df) def has_cover(song_name): if song_name is None or pd.isna(song_name): return False return 'cover' in song_name.lower() df['has_cover'] = df.song_name.progress_apply(has_cover) ds = ds.map(lambda x: {'has_cover': has_cover(x['song_name'])}, num_proc=10) ### Expected behavior ds.map is ~num_proc faster than df.apply ### Environment info pandas: 2.2.2 datasets: 2.19.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7217/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7215/comments
https://api.github.com/repos/huggingface/datasets/issues/7215/events
https://github.com/huggingface/datasets/issues/7215
2,579,942,939
I_kwDODunzps6Zxs4b
7,215
Iterable dataset map with explicit features causes slowdown for Sequence features
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-10T22:08:20
2024-10-10T22:10:32
null
CONTRIBUTOR
null
### Describe the bug When performing map, it's nice to be able to pass the new feature type, and indeed required by interleave and concatenate datasets. However, this can cause a major slowdown for certain types of array features due to the features being re-encoded. This is separate to the slowdown reported in #7206 ### Steps to reproduce the bug ``` from datasets import Dataset, Features, Array3D, Sequence, Value import numpy as np import time features=Features(**{"array0": Sequence(feature=Value("float32"), length=-1), "array1": Sequence(feature=Value("float32"), length=-1)}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,), dtype=np.float32) for x in [5000,10000]*25] for i in range(2)}, features=features) ``` ``` ds = dataset.to_iterable_dataset() ds = ds.with_format("numpy").map(lambda x: x) t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~1.5 s on main ``` ds = dataset.to_iterable_dataset() ds = ds.with_format("numpy").map(lambda x: x, features=features) t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~ 3 s on main ### Expected behavior I'm not 100% sure whether passing new feature types to formatted outputs of map should be supported or not, but assuming it should, then there should be a cost-free way to specify the new feature type - knowing feature type is required by interleave_datasets and concatenate_datasets for example ### Environment info 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7215/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7214/comments
https://api.github.com/repos/huggingface/datasets/issues/7214/events
https://github.com/huggingface/datasets/issues/7214
2,578,743,713
I_kwDODunzps6ZtIGh
7,214
Formatted map + with_format(None) changes array dtype for iterable datasets
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "possibly due to this logic:\r\n\r\n```python\r\n def _arrow_array_to_numpy(self, pa_array: pa.Array) -> np.ndarray:\r\n if isinstance(pa_array, pa.ChunkedArray):\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and all(\r\n not _is_array_with_nulls(chunk) for chunk in pa_array.chunks\r\n )\r\n array: List = [\r\n row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)\r\n ]\r\n else:\r\n if isinstance(pa_array.type, _ArrayXDExtensionType):\r\n # don't call to_pylist() to preserve dtype of the fixed-size array\r\n zero_copy_only = _is_zero_copy_only(pa_array.type.storage_dtype, unnest=True)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n zero_copy_only = _is_zero_copy_only(pa_array.type) and not _is_array_with_nulls(pa_array)\r\n array: List = pa_array.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```" ]
2024-10-10T12:45:16
2024-10-12T16:55:57
null
CONTRIBUTOR
null
### Describe the bug When applying with_format -> map -> with_format(None), array dtypes seem to change, even if features are passed ### Steps to reproduce the bug ```python features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32")}) dataset = Dataset.from_dict({f"array0": [np.zeros((100,10,10), dtype=np.float32)]*25}, features=features) ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features) ex_0 = next(iter(ds)) ds = dataset.to_iterable_dataset().with_format("numpy").map(lambda x: x, features=features).with_format(None) ex_1 = next(iter(ds)) assert ex_1["array0"].dtype == ex_0["array0"].dtype, f"{ex_1['array0'].dtype} {ex_0['array0'].dtype}" ``` ### Expected behavior Dtypes should be preserved. ### Environment info 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7214/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7213/comments
https://api.github.com/repos/huggingface/datasets/issues/7213/events
https://github.com/huggingface/datasets/issues/7213
2,578,675,565
I_kwDODunzps6Zs3dt
7,213
Add with_rank to Dataset.from_generator
{ "login": "muthissar", "id": 17828087, "node_id": "MDQ6VXNlcjE3ODI4MDg3", "avatar_url": "https://avatars.githubusercontent.com/u/17828087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muthissar", "html_url": "https://github.com/muthissar", "followers_url": "https://api.github.com/users/muthissar/followers", "following_url": "https://api.github.com/users/muthissar/following{/other_user}", "gists_url": "https://api.github.com/users/muthissar/gists{/gist_id}", "starred_url": "https://api.github.com/users/muthissar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muthissar/subscriptions", "organizations_url": "https://api.github.com/users/muthissar/orgs", "repos_url": "https://api.github.com/users/muthissar/repos", "events_url": "https://api.github.com/users/muthissar/events{/privacy}", "received_events_url": "https://api.github.com/users/muthissar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-10-10T12:15:29
2024-10-10T12:17:11
null
NONE
null
### Feature request Add `with_rank` to `Dataset.from_generator` similar to `Dataset.map` and `Dataset.filter`. ### Motivation As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU, where the rank can be used to select GPU IDs. For now, rank can be added in the `gen_kwars` argument; however, this, in turn, includes the rank when computing the fingerprint. ### Your contribution Added #7199 which passes rank based on the `job_id` set by `num_proc`.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7213/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7213/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7212/comments
https://api.github.com/repos/huggingface/datasets/issues/7212/events
https://github.com/huggingface/datasets/issues/7212
2,578,641,259
I_kwDODunzps6ZsvFr
7,212
Windows do not supprot signal.alarm and singal.signal
{ "login": "TomasJavurek", "id": 33832672, "node_id": "MDQ6VXNlcjMzODMyNjcy", "avatar_url": "https://avatars.githubusercontent.com/u/33832672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TomasJavurek", "html_url": "https://github.com/TomasJavurek", "followers_url": "https://api.github.com/users/TomasJavurek/followers", "following_url": "https://api.github.com/users/TomasJavurek/following{/other_user}", "gists_url": "https://api.github.com/users/TomasJavurek/gists{/gist_id}", "starred_url": "https://api.github.com/users/TomasJavurek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomasJavurek/subscriptions", "organizations_url": "https://api.github.com/users/TomasJavurek/orgs", "repos_url": "https://api.github.com/users/TomasJavurek/repos", "events_url": "https://api.github.com/users/TomasJavurek/events{/privacy}", "received_events_url": "https://api.github.com/users/TomasJavurek/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-10T12:00:19
2024-10-10T12:00:19
null
NONE
null
### Describe the bug signal.alarm and signal.signal are used in the load.py module, but these are not supported by Windows. ### Steps to reproduce the bug lighteval accelerate --model_args "pretrained=gpt2,trust_remote_code=True" --tasks "community|kinit_sts" --custom_tasks "community_tasks/kinit_evals.py" --output_dir "./evals" ### Expected behavior proceed with input(..) method ### Environment info Windows 11
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7212/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7211/comments
https://api.github.com/repos/huggingface/datasets/issues/7211/events
https://github.com/huggingface/datasets/issues/7211
2,576,400,502
I_kwDODunzps6ZkMB2
7,211
Describe only selected fields in README
{ "login": "alozowski", "id": 67658835, "node_id": "MDQ6VXNlcjY3NjU4ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/67658835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alozowski", "html_url": "https://github.com/alozowski", "followers_url": "https://api.github.com/users/alozowski/followers", "following_url": "https://api.github.com/users/alozowski/following{/other_user}", "gists_url": "https://api.github.com/users/alozowski/gists{/gist_id}", "starred_url": "https://api.github.com/users/alozowski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alozowski/subscriptions", "organizations_url": "https://api.github.com/users/alozowski/orgs", "repos_url": "https://api.github.com/users/alozowski/repos", "events_url": "https://api.github.com/users/alozowski/events{/privacy}", "received_events_url": "https://api.github.com/users/alozowski/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-10-09T16:25:47
2024-10-09T16:25:47
null
NONE
null
### Feature request Hi Datasets team! Is it possible to add the ability to describe only selected fields of the dataset files in `README.md`? For example, I have this open dataset ([open-llm-leaderboard/results](https://huggingface.co/datasets/open-llm-leaderboard/results?row=0)) and I want to describe only some fields in order not to overcomplicate the Dataset Preview and filter out some fields ### Motivation The `Results` dataset for the Open LLM Leaderboard contains json files with a complex nested structure. I would like to add `README.md` there to use the SQL console, for example. But if I describe the structure of this dataset completely, it will overcomplicate the use of Dataset Preview and the total number of columns will exceed 50 ### Your contribution I'm afraid I'm not familiar with the project structure, so I won't be able to open a PR, but I'll try to help with something else if possible
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7211/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7210/comments
https://api.github.com/repos/huggingface/datasets/issues/7210/events
https://github.com/huggingface/datasets/issues/7210
2,575,883,939
I_kwDODunzps6ZiN6j
7,210
Convert Array features to numpy arrays rather than lists by default
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
2024-10-09T13:05:21
2024-10-09T13:05:21
null
CONTRIBUTOR
null
### Feature request It is currently quite easy to cause massive slowdowns when using datasets and not familiar with the underlying data conversions by e.g. making bad choices of formatting. Would it be more user-friendly to set defaults that avoid this as much as possible? e.g. format Array features as numpy arrays rather than python lists ### Motivation Default array formatting leads to slow performance: e.g. ```python import numpy as np from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features) ``` ```python t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~1.4 s ```python ds = dataset.to_iterable_dataset() t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~10s ```python ds = dataset.with_format("numpy") t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~0.04s ```python ds = dataset.to_iterable_dataset().with_format("numpy") t0 = time.time() for ex in ds: pass t1 = time.time() ``` ~0.04s ### Your contribution May be able to contribute
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7210/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7209/comments
https://api.github.com/repos/huggingface/datasets/issues/7209/events
https://github.com/huggingface/datasets/pull/7209
2,575,526,651
PR_kwDODunzps5-D6_P
7,209
Preserve features in iterable dataset.filter
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "Yes your assumption on concatenate/interleave is ok imo.\r\n\r\nIt seems the TypedExamplesIterable can slow down things, it should take formatting into account to not convert numpy arrays to python lists\r\n\r\nright now it's slow (unrelatedly to your PR):\r\n\r\n```python\r\n>>> ds = Dataset.from_dict({\"a\": np.zeros((1000, 32, 32))}).to_iterable_dataset().with_format(\"np\")\r\n>>> filtered_ds = ds.filter(lambda x: True)\r\n>>> %time sum(1 for _ in ds)\r\nCPU times: user 175 ms, sys: 8.1 ms, total: 183 ms\r\nWall time: 184 ms\r\n1000\r\n>>> %time sum(1 for _ in filtered_ds)\r\nCPU times: user 4.1 s, sys: 8.41 ms, total: 4.1 s\r\nWall time: 4.12 s\r\n1000\r\n```", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7209). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "> It seems the TypedExamplesIterable can slow down things, it should take formatting into account to not convert numpy arrays to python lists\r\n\r\nShould be fixed by updated #7207 I hope!" ]
2024-10-09T10:42:05
2024-10-16T11:27:22
2024-10-09T16:04:07
CONTRIBUTOR
null
Fixes example in #7208 - I'm not sure what other checks I should do? @lhoestq I also haven't thought hard about the concatenate / interleaving example iterables but think this might work assuming that features are either all identical or None?
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7209/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7209/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7209", "html_url": "https://github.com/huggingface/datasets/pull/7209", "diff_url": "https://github.com/huggingface/datasets/pull/7209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7209.patch", "merged_at": "2024-10-09T16:04:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/7208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7208/comments
https://api.github.com/repos/huggingface/datasets/issues/7208/events
https://github.com/huggingface/datasets/issues/7208
2,575,484,256
I_kwDODunzps6ZgsVg
7,208
Iterable dataset.filter should not override features
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "closed by https://github.com/huggingface/datasets/pull/7209, thanks @alex-hh !" ]
2024-10-09T10:23:45
2024-10-09T16:08:46
2024-10-09T16:08:45
CONTRIBUTOR
null
### Describe the bug When calling filter on an iterable dataset, the features get set to None ### Steps to reproduce the bug import numpy as np import time from datasets import Dataset, Features, Array3D ```python features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features) ds = dataset.to_iterable_dataset() orig_column_names = ds.column_names ds = ds.filter(lambda x: True) assert ds.column_names == orig_column_names ``` ### Expected behavior Filter should preserve features information ### Environment info 3.0.2
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7208/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7207/comments
https://api.github.com/repos/huggingface/datasets/issues/7207/events
https://github.com/huggingface/datasets/pull/7207
2,573,582,335
PR_kwDODunzps59-Dms
7,207
apply formatting after iter_arrow to speed up format -> map, filter for iterable datasets
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "I think the problem is that the underlying ex_iterable will not use iter_arrow unless the formatting type is arrow, which leads to conversion from arrow -> python -> numpy in this case rather than arrow -> numpy.\r\n\r\nIdea of updated fix is to use the ex_iterable's iter_arrow in any case where it's available and any formatting is specified. The formatter then works directly on arrow tables; the outputs of the formatter get passed to the function to be mapped.\r\n\r\nWith updated version:\r\n\r\n```python\r\nimport numpy as np\r\nimport time\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\nfeatures=Features(**{\"array0\": Array3D((None, 10, 10), dtype=\"float32\"), \"array1\": Array3D((None,10,10), dtype=\"float32\")})\r\ndataset = Dataset.from_dict({f\"array{i}\": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features)\r\n```\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").map(lambda x: x, batched=True, batch_size=10)\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTotal time: < 0.01s (~30s on main)\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").map(lambda x: x, batched=False)\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTime: ~0.02 s (~30s on main)\r\n\r\n```python\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\")\r\nt0 = time.time()\r\nfor ex in ds:\r\n pass\r\nt1 = time.time()\r\n```\r\nTime: ~0.02s", "also now working for filter with similar performance improvements:\r\n\r\n```python\r\nfiltered_examples = []\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").filter(lambda x: [arr.shape[0]==2000 for arr in x[\"array0\"]], batch_size=10, batched=True)\r\nt0 = time.time()\r\nfor ex in ds:\r\n filtered_examples.append(ex)\r\nt1 = time.time()\r\nassert len(filtered_examples) == 25\r\n```\r\n0.01s vs 50s on main\r\n\r\n\r\n```python\r\nfiltered_examples = []\r\nds = dataset.to_iterable_dataset()\r\nds = ds.with_format(\"numpy\").filter(lambda x: x[\"array0\"].shape[0]==2000, batched=False)\r\nt0 = time.time()\r\nfor ex in ds:\r\n filtered_examples.append(ex)\r\nt1 = time.time()\r\nassert len(filtered_examples) == 25\r\n```\r\n0.04s vs 50s on main\r\n", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7207). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "(the distributed tests failing in the CI are unrelated)", "There also appears to be a separate? issue with chaining filter and map bc filter iter_arrow only returns _iter_arrow if arrow formatting is applied (and vv presumably)\r\n\r\nI don't have a good minimal example atm", "> issue with chaining filter and map bc filter iter_arrow only returns _iter_arrow if arrow formatting is applied (and vv presumably)\r\n\r\nMaybe related to this issue ?\r\n\r\n```python\r\nds = Dataset.from_dict({\"a\": range(10)}).to_iterable_dataset()\r\nds = ds.with_format(\"arrow\").map(lambda x: x, features=Features({\"a\": Value(\"string\")})).with_format(None)\r\nprint(list(ds)) # yields integers instead of strings\r\n```", "I feel like we could get rid of TypedExampleIterable altogether and apply formatting with feature conversion with `formatted_python_examples_iterator ` and `formatted_arrow_examples_iterator`\r\n\r\nbtw you can pass `features=` in `get_formatter()` to get a formatter that does the feature conversion at the same time as formatting\r\n\r\n(edit:\r\n\r\nexcept maybe the arrow formatter doesn't use `features` yet, we can fix it like this if it's really needed\r\n```diff\r\nclass ArrowFormatter(Formatter[pa.Table, pa.Array, pa.Table]):\r\n def format_row(self, pa_table: pa.Table) -> pa.Table:\r\n- return self.simple_arrow_extractor().extract_row(pa_table)\r\n+ pa_table = self.simple_arrow_extractor().extract_row(pa_table)\r\n+. return cast_table_to_features(pa_table, self.features) if self.features else pa_table\r\n \r\n```\r\n\r\n\r\n)", "> I feel like we could get rid of TypedExampleIterable altogether and apply formatting with feature conversion with formatted_python_examples_iterator and formatted_arrow_examples_iterator\r\n\r\nOh nice didn't know about the feature support in get_formatter. Haven't thought through whether this works but would a FormattedExampleIterable (with feature conversion) be able to solve this and fit the API better?", "> Oh nice didn't know about the feature support in get_formatter. Haven't thought through whether this works but would a FormattedExampleIterable (with feature conversion) be able to solve this and fit the API better?\r\n\r\nYes this is surely the way to go actually !", "ok i've fixed the chaining issue with my last two commits.\r\n\r\nWill see if I can refactor into a FormattedExampleIterable\r\n\r\nThe other issue you posted seems to be unrelated (maybe something to do with feature decoding?)", "updated with FormattedExamplesIterable.\r\n\r\nthere might be a few unnecessary format calls once the data is already formatted - doesn't seem like a big performance bottleneck but could maybe be fixed with e.g. an is_formatted property\r\n\r\nIt also might be possible to do a wider refactor and use FormattedExamplesIterable elsewhere. But I'd personally prefer not to try that rn.", "Thinking about this in the context of #7210 - am wondering if it would make sense for Features to define their own extraction arrow->object logic? e.g. Arrays should *always* be extracted with NumpyArrowExtractor, not only in case with_format is set to numpy (which a user can easily forget or not know to do)\r\n", "> Thinking about this in the context of https://github.com/huggingface/datasets/issues/7210 - am wondering if it would make sense for Features to define their own extraction arrow->object logic? e.g. Arrays should always be extracted with NumpyArrowExtractor, not only in case with_format is set to numpy (which a user can easily forget or not know to do)\r\n\r\nFor `ArrayND` they already implement `to_pylist` to decode arrow data and it can be updated to return a numpy array (see the `ArrayExtensionArray` class for more details)", "@lhoestq im no longer sure my specific concern about with_format(None) was well-founded - I didn't appreciate that the python formatter tries to do nothing to python objects including numpy arrays, so the existing with_format(None) should I *think* do what I want. Do you think with_format(None) is ok as is after all? If so think this is hopefully ready for final review!", "@lhoestq I've updated to make compatible with latest changes on main, and think the current with_format None behaviour is probably fine - please let me know if there's anything else I can do!", "Hi Alex, I will be less available from today and for a week. I'll review your PR and play with it once I come back if you don't mind !" ]
2024-10-08T15:44:53
2024-11-11T23:36:16
null
CONTRIBUTOR
null
I got to this by hacking around a bit but it seems to solve #7206 I have no idea if this approach makes sense or would break something else? Could maybe work on a full pr if this looks reasonable @lhoestq ? I imagine the same issue might affect other iterable dataset methods?
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7207/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7207", "html_url": "https://github.com/huggingface/datasets/pull/7207", "diff_url": "https://github.com/huggingface/datasets/pull/7207.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7207.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7206/comments
https://api.github.com/repos/huggingface/datasets/issues/7206/events
https://github.com/huggingface/datasets/issues/7206
2,573,567,467
I_kwDODunzps6ZZYXr
7,206
Slow iteration for iterable dataset with numpy formatting for array data
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "The below easily eats up 32G of RAM. Leaving it for a while bricked the laptop with 16GB.\r\n\r\n```\r\ndataset = load_dataset(\"Voxel51/OxfordFlowers102\", data_dir=\"data\").with_format(\"numpy\")\r\nprocessed_dataset = dataset.map(lambda x: x)\r\n```\r\n\r\n![image](https://github.com/user-attachments/assets/c1863a69-b18f-4014-89dc-98994336df96)\r\n\r\nSimilar problems occur if using a real transform function in `.map()`." ]
2024-10-08T15:38:11
2024-10-17T17:14:52
null
CONTRIBUTOR
null
### Describe the bug When working with large arrays, setting with_format to e.g. numpy then applying map causes a significant slowdown for iterable datasets. ### Steps to reproduce the bug ```python import numpy as np import time from datasets import Dataset, Features, Array3D features=Features(**{"array0": Array3D((None, 10, 10), dtype="float32"), "array1": Array3D((None,10,10), dtype="float32")}) dataset = Dataset.from_dict({f"array{i}": [np.zeros((x,10,10), dtype=np.float32) for x in [2000,1000]*25] for i in range(2)}, features=features) ``` Then ```python ds = dataset.to_iterable_dataset() ds = ds.with_format("numpy").map(lambda x: x) t0 = time.time() for ex in ds: pass t1 = time.time() print(t1-t0) ``` takes 27 s, whereas ```python ds = dataset.to_iterable_dataset() ds = ds.with_format("numpy") ds = dataset.to_iterable_dataset() t0 = time.time() for ex in ds: pass t1 = time.time() print(t1 - t0) ``` takes ~1s ### Expected behavior Map should not introduce a slowdown when formatting is enabled. ### Environment info 3.0.2
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7206/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7206/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7205/comments
https://api.github.com/repos/huggingface/datasets/issues/7205/events
https://github.com/huggingface/datasets/pull/7205
2,573,490,859
PR_kwDODunzps599w0I
7,205
fix ci benchmark
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7205). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-08T15:06:18
2024-10-08T15:25:28
2024-10-08T15:25:25
MEMBER
null
we're not using the benchmarks anymore + they were not working anyway due to token permissions I keep the code in case we ever want to re-run the benchmark manually
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7205/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7205", "html_url": "https://github.com/huggingface/datasets/pull/7205", "diff_url": "https://github.com/huggingface/datasets/pull/7205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7205.patch", "merged_at": "2024-10-08T15:25:25" }
true
https://api.github.com/repos/huggingface/datasets/issues/7204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7204/comments
https://api.github.com/repos/huggingface/datasets/issues/7204/events
https://github.com/huggingface/datasets/pull/7204
2,573,289,063
PR_kwDODunzps599Hem
7,204
fix unbatched arrow map for iterable datasets
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7204). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-08T13:54:09
2024-10-08T14:19:47
2024-10-08T14:19:47
CONTRIBUTOR
null
Fixes the bug when applying map to an arrow-formatted iterable dataset described here: https://github.com/huggingface/datasets/issues/6833#issuecomment-2399903885 ```python from datasets import load_dataset ds = load_dataset("rotten_tomatoes", split="train", streaming=True) ds = ds.with_format("arrow").map(lambda x: x) for ex in ds: pass ``` @lhoestq
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7204/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7204", "html_url": "https://github.com/huggingface/datasets/pull/7204", "diff_url": "https://github.com/huggingface/datasets/pull/7204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7204.patch", "merged_at": "2024-10-08T14:19:46" }
true
https://api.github.com/repos/huggingface/datasets/issues/7203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7203/comments
https://api.github.com/repos/huggingface/datasets/issues/7203/events
https://github.com/huggingface/datasets/pull/7203
2,573,154,222
PR_kwDODunzps598skr
7,203
with_format docstring
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7203). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update." ]
2024-10-08T13:05:19
2024-10-08T13:13:12
2024-10-08T13:13:05
MEMBER
null
reported at https://github.com/huggingface/datasets/issues/3444
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7203/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7203", "html_url": "https://github.com/huggingface/datasets/pull/7203", "diff_url": "https://github.com/huggingface/datasets/pull/7203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7203.patch", "merged_at": "2024-10-08T13:13:05" }
true
https://api.github.com/repos/huggingface/datasets/issues/7202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7202/comments
https://api.github.com/repos/huggingface/datasets/issues/7202/events
https://github.com/huggingface/datasets/issues/7202
2,572,583,798
I_kwDODunzps6ZVoN2
7,202
`from_parquet` return type annotation
{ "login": "saiden89", "id": 45285915, "node_id": "MDQ6VXNlcjQ1Mjg1OTE1", "avatar_url": "https://avatars.githubusercontent.com/u/45285915?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saiden89", "html_url": "https://github.com/saiden89", "followers_url": "https://api.github.com/users/saiden89/followers", "following_url": "https://api.github.com/users/saiden89/following{/other_user}", "gists_url": "https://api.github.com/users/saiden89/gists{/gist_id}", "starred_url": "https://api.github.com/users/saiden89/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/saiden89/subscriptions", "organizations_url": "https://api.github.com/users/saiden89/orgs", "repos_url": "https://api.github.com/users/saiden89/repos", "events_url": "https://api.github.com/users/saiden89/events{/privacy}", "received_events_url": "https://api.github.com/users/saiden89/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-08T09:08:10
2024-10-08T09:08:10
null
NONE
null
### Describe the bug As already posted in https://github.com/microsoft/pylance-release/issues/6534, the correct type hinting fails when building a dataset using the `from_parquet` constructor. Their suggestion is to comprehensively annotate the method's return type to better align with the docstring information. ### Steps to reproduce the bug ```python from datasets import Dataset dataset = Dataset.from_parquet(path_or_paths="file") dataset.map(lambda x: {"new": x["old"]}, batched=True) ``` ### Expected behavior map is a [valid](https://huggingface.co/docs/datasets/v3.0.1/en/package_reference/main_classes#datasets.Dataset.map), no error should be thrown. ### Environment info - `datasets` version: 3.0.1 - Platform: macOS-15.0.1-arm64-arm-64bit - Python version: 3.12.6 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7202/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7201/comments
https://api.github.com/repos/huggingface/datasets/issues/7201/events
https://github.com/huggingface/datasets/issues/7201
2,569,837,015
I_kwDODunzps6ZLJnX
7,201
`load_dataset()` of images from a single directory where `train.png` image exists
{ "login": "SagiPolaczek", "id": 56922146, "node_id": "MDQ6VXNlcjU2OTIyMTQ2", "avatar_url": "https://avatars.githubusercontent.com/u/56922146?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SagiPolaczek", "html_url": "https://github.com/SagiPolaczek", "followers_url": "https://api.github.com/users/SagiPolaczek/followers", "following_url": "https://api.github.com/users/SagiPolaczek/following{/other_user}", "gists_url": "https://api.github.com/users/SagiPolaczek/gists{/gist_id}", "starred_url": "https://api.github.com/users/SagiPolaczek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SagiPolaczek/subscriptions", "organizations_url": "https://api.github.com/users/SagiPolaczek/orgs", "repos_url": "https://api.github.com/users/SagiPolaczek/repos", "events_url": "https://api.github.com/users/SagiPolaczek/events{/privacy}", "received_events_url": "https://api.github.com/users/SagiPolaczek/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-07T09:14:17
2024-10-07T09:14:17
null
NONE
null
### Describe the bug Hey! Firstly, thanks for maintaining such framework! I had a small issue, where I wanted to load a custom dataset of image+text captioning. I had all of my images in a single directory, and one of the images had the name `train.png`. Then, the loaded dataset had only this image. I guess it's related to "train" as a split name, but it's definitely an unexpected behavior :) Unfortunately I don't have time to submit a proper PR. I'm attaching a toy example to reproduce the issue. Thanks, Sagi ### Steps to reproduce the bug All of the steps I'm attaching are in a fresh env :) ``` (base) sagipolaczek@Sagis-MacBook-Pro ~ % conda activate hf_issue_env (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python --version Python 3.10.15 (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % pip list | grep datasets datasets 3.0.1 (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % ls -la Documents/hf_datasets_issue total 352 drwxr-xr-x 6 sagipolaczek staff 192 Oct 7 11:59 . drwx------@ 23 sagipolaczek staff 736 Oct 7 11:46 .. -rw-r--r--@ 1 sagipolaczek staff 72 Oct 7 11:59 metadata.csv -rw-r--r--@ 1 sagipolaczek staff 160154 Oct 6 18:00 pika.png -rw-r--r--@ 1 sagipolaczek staff 5495 Oct 6 12:02 pika_pika.png -rw-r--r--@ 1 sagipolaczek staff 1753 Oct 6 11:50 train.png (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % cat Documents/hf_datasets_issue/metadata.csv file_name,text train.png,A train pika.png,Pika pika_pika.png,Pika Pika! (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/") >>> dataset DatasetDict({ train: Dataset({ features: ['image', 'text'], num_rows: 1 }) }) >>> dataset["train"][0] {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=354x84 at 0x10B50FD90>, 'text': 'A train'} ### DELETING `train.png` sample ### (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % vim Documents/hf_datasets_issue/metadata.csv (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % rm Documents/hf_datasets_issue/train.png (hf_issue_env) sagipolaczek@Sagis-MacBook-Pro ~ % python Python 3.10.15 (main, Oct 3 2024, 02:33:33) [Clang 14.0.6 ] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="Documents/hf_datasets_issue/") Generating train split: 2 examples [00:00, 65.99 examples/s] >>> dataset DatasetDict({ train: Dataset({ features: ['image', 'text'], num_rows: 2 }) }) >>> dataset["train"] Dataset({ features: ['image', 'text'], num_rows: 2 }) >>> dataset["train"][0],dataset["train"][1] ({'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=2356x1054 at 0x10DD11E70>, 'text': 'Pika'}, {'image': <PIL.PngImagePlugin.PngImageFile image mode=RGBA size=343x154 at 0x10E258C70>, 'text': 'Pika Pika!'}) ``` ### Expected behavior My expected behavior would be to get a dataset with the sample `train.png` in it (along with the others data points). ### Environment info I've attached it in the example: Python 3.10.15 datasets 3.0.1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7201/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7200/comments
https://api.github.com/repos/huggingface/datasets/issues/7200/events
https://github.com/huggingface/datasets/pull/7200
2,567,921,694
PR_kwDODunzps59sgRd
7,200
Fix the environment variable for huggingface cache
{ "login": "torotoki", "id": 989899, "node_id": "MDQ6VXNlcjk4OTg5OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/989899?v=4", "gravatar_id": "", "url": "https://api.github.com/users/torotoki", "html_url": "https://github.com/torotoki", "followers_url": "https://api.github.com/users/torotoki/followers", "following_url": "https://api.github.com/users/torotoki/following{/other_user}", "gists_url": "https://api.github.com/users/torotoki/gists{/gist_id}", "starred_url": "https://api.github.com/users/torotoki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/torotoki/subscriptions", "organizations_url": "https://api.github.com/users/torotoki/orgs", "repos_url": "https://api.github.com/users/torotoki/repos", "events_url": "https://api.github.com/users/torotoki/events{/privacy}", "received_events_url": "https://api.github.com/users/torotoki/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! yes now `datasets` uses `huggingface_hub` to download and cache files from the HF Hub so you need to use `HF_HOME` (or manually `HF_HUB_CACHE` and `HF_DATASETS_CACHE` if you want to separate HF Hub cached files and cached datasets Arrow files)\r\n\r\nSo in your change I guess it needs to be `HF_HOME` instead of `HF_CACHE` ?", "Thank you for your comment. You are right. I am sorry for my mistake, I fixed it.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7200). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "I just had this issue, and needed to move the setting the env code in the python file to top, before the import of the lib \r\nie. \r\n```python\r\nimport os\r\nLOCAL_DISK_MOUNT = '/mnt/data'\r\n\r\nos.environ['HF_HOME'] = f'{LOCAL_DISK_MOUNT}/hf_cache/'\r\nos.environ['HF_DATASETS_CACHE'] = f'{LOCAL_DISK_MOUNT}/datasets/'\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets import load_dataset_builder\r\nfrom psutil._common import bytes2human\r\n\r\n\r\n```" ]
2024-10-05T11:54:35
2024-10-30T23:10:27
2024-10-08T15:45:18
CONTRIBUTOR
null
Resolve #6256. As far as I tested, `HF_DATASETS_CACHE` was ignored and I could not specify the cache directory at all except for the default one by this environment variable. `HF_HOME` has worked. Perhaps the recent change on file downloading by `huggingface_hub` could affect this bug. In my testing, I could not specify the cache directory even by `load_dataset("dataset_name" cache_dir="...")`. It might be another issue. I also welcome any advice to solve this issue.
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7200/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7200", "html_url": "https://github.com/huggingface/datasets/pull/7200", "diff_url": "https://github.com/huggingface/datasets/pull/7200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7200.patch", "merged_at": "2024-10-08T15:45:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/7199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7199/comments
https://api.github.com/repos/huggingface/datasets/issues/7199/events
https://github.com/huggingface/datasets/pull/7199
2,566,788,225
PR_kwDODunzps59pN_M
7,199
Add with_rank to Dataset.from_generator
{ "login": "muthissar", "id": 17828087, "node_id": "MDQ6VXNlcjE3ODI4MDg3", "avatar_url": "https://avatars.githubusercontent.com/u/17828087?v=4", "gravatar_id": "", "url": "https://api.github.com/users/muthissar", "html_url": "https://github.com/muthissar", "followers_url": "https://api.github.com/users/muthissar/followers", "following_url": "https://api.github.com/users/muthissar/following{/other_user}", "gists_url": "https://api.github.com/users/muthissar/gists{/gist_id}", "starred_url": "https://api.github.com/users/muthissar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/muthissar/subscriptions", "organizations_url": "https://api.github.com/users/muthissar/orgs", "repos_url": "https://api.github.com/users/muthissar/repos", "events_url": "https://api.github.com/users/muthissar/events{/privacy}", "received_events_url": "https://api.github.com/users/muthissar/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-04T16:51:53
2024-10-04T16:51:53
null
NONE
null
Adds `with_rank` to `Dataset.from_generator`. As for `Dataset.map` and `Dataset.filter`, this is useful when creating cache files using multi-GPU.
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7199/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7199", "html_url": "https://github.com/huggingface/datasets/pull/7199", "diff_url": "https://github.com/huggingface/datasets/pull/7199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7199.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7198/comments
https://api.github.com/repos/huggingface/datasets/issues/7198/events
https://github.com/huggingface/datasets/pull/7198
2,566,064,849
PR_kwDODunzps59mwgu
7,198
Add repeat method to datasets
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-04T10:45:16
2024-10-04T10:49:21
null
CONTRIBUTOR
null
Following up on discussion in #6623 and #7198 I thought this would be pretty useful for my case so had a go at implementing. My main motivation is to be able to call iterable_dataset.repeat(None).take(samples_per_epoch) to safely avoid timeout issues in a distributed training setting. This would provide a straightforward workaround for several open issues related to this situation: https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719. @lhoestq let me know if this looks on the right track!
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7198/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/7198", "html_url": "https://github.com/huggingface/datasets/pull/7198", "diff_url": "https://github.com/huggingface/datasets/pull/7198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/7198.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/7197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7197/comments
https://api.github.com/repos/huggingface/datasets/issues/7197/events
https://github.com/huggingface/datasets/issues/7197
2,565,924,788
I_kwDODunzps6Y8Oe0
7,197
ConnectionError: Couldn't reach 'allenai/c4' on the Hub (ConnectionError): the dataset won't download, what is going on?
{ "login": "Mrgengli", "id": 114299344, "node_id": "U_kgDOBtAR0A", "avatar_url": "https://avatars.githubusercontent.com/u/114299344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mrgengli", "html_url": "https://github.com/Mrgengli", "followers_url": "https://api.github.com/users/Mrgengli/followers", "following_url": "https://api.github.com/users/Mrgengli/following{/other_user}", "gists_url": "https://api.github.com/users/Mrgengli/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mrgengli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mrgengli/subscriptions", "organizations_url": "https://api.github.com/users/Mrgengli/orgs", "repos_url": "https://api.github.com/users/Mrgengli/repos", "events_url": "https://api.github.com/users/Mrgengli/events{/privacy}", "received_events_url": "https://api.github.com/users/Mrgengli/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[ "Also cant download \"allenai/c4\", but with different error reported:\r\n```\r\nTraceback (most recent call last): \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 2074, in load_dataset \r\n builder_instance = load_dataset_builder( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1795, in load_dataset_builder \r\n dataset_module = dataset_module_factory( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1659, in dataset_module_factory \r\n raise e1 from None \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1647, in dataset_module_factory \r\n ).get_module() \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 1069, in get_module \r\n module_name, default_builder_kwargs = infer_module_for_data_files( \r\n File \"/***/lib/python3.10/site-packages/datasets/load.py\", line 594, in infer_module_for_data_files \r\n raise DataFilesNotFoundError(\"No (supported) data files found\" + (f\" in {path}\" if path else \"\")) \r\ndatasets.exceptions.DataFilesNotFoundError: No (supported) data files found in allenai/c4 \r\n```\r\n\r\n## Code to reproduce\r\n```\r\ndataset = load_dataset(\"allenai/c4\", \"en\", split=\"train\", streaming=True,trust_remote_code=True,\r\n cache_dir=\"dataset/en\",\r\n download_mode=\"force_redownload\")\r\n```\r\n\r\n## Environment\r\ndatasets 3.0.1 \r\nhuggingface_hub 0.25.1" ]
2024-10-04T09:33:25
2024-10-04T09:56:22
null
NONE
null
### Describe the bug from datasets import load_dataset print("11") traindata = load_dataset('ptb_text_only', 'penn_treebank', split='train') print("22") valdata = load_dataset('ptb_text_only', 'penn_treebank', split='validation') ### Steps to reproduce the bug 1 ### Expected behavior 1 ### Environment info 1
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7197/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7196/comments
https://api.github.com/repos/huggingface/datasets/issues/7196/events
https://github.com/huggingface/datasets/issues/7196
2,564,218,566
I_kwDODunzps6Y1t7G
7,196
concatenate_datasets does not preserve shuffling state
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
open
false
null
[]
null
[]
2024-10-03T14:30:38
2024-10-03T15:47:48
null
CONTRIBUTOR
null
### Describe the bug After concatenate datasets on an iterable dataset, the shuffling state is destroyed, similar to #7156 This means concatenation cant be used for resolving uneven numbers of samples across devices when using iterable datasets in a distributed setting as discussed in #6623 I also noticed that the number of shards is the same after concatenation, which I found surprising, but I don't understand the internals well enough to know whether this is actually surprising or not ### Steps to reproduce the bug ```python import datasets import torch.utils.data def gen(shards): yield {"shards": shards} def main(): dataset1 = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": list(range(25))} # TODO: how to understand this? ) dataset2 = datasets.IterableDataset.from_generator( gen, gen_kwargs={"shards": list(range(25, 50))} # TODO: how to understand this? ) dataset1 = dataset1.shuffle(buffer_size=1) dataset2 = dataset2.shuffle(buffer_size=1) print(dataset1.n_shards) print(dataset2.n_shards) dataset = datasets.concatenate_datasets( [dataset1, dataset2] ) print(dataset.n_shards) # dataset = dataset1 dataloader = torch.utils.data.DataLoader( dataset, batch_size=8, num_workers=0, ) for i, batch in enumerate(dataloader): print(batch) print("\nNew epoch") dataset = dataset.set_epoch(1) for i, batch in enumerate(dataloader): print(batch) if __name__ == "__main__": main() ``` ### Expected behavior Shuffling state should be preserved ### Environment info Latest datasets
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7196/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7195/comments
https://api.github.com/repos/huggingface/datasets/issues/7195/events
https://github.com/huggingface/datasets/issues/7195
2,564,070,809
I_kwDODunzps6Y1J2Z
7,195
Add support for 3D datasets
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "user_view_type": "public", "site_admin": false } ]
null
[ "maybe related: https://github.com/huggingface/datasets/issues/6388", "Also look at https://github.com/huggingface/dataset-viewer/blob/f5fd117ceded990a7766e705bba1203fa907d6ad/services/worker/src/worker/job_runners/dataset/modalities.py#L241 which lists the 3D file formats that will assign the 3D modality to a dataset.", "~~we can brainstorm about the UX maybe (i don't expect we should load all models on the page at once – IMO there should be a manual action from user to load + maybe load first couple of row by default) cc @gary149 @cfahlgren1~~\r\n\r\nit's more for the viewer issue (https://github.com/huggingface/dataset-viewer/issues/1003)" ]
2024-10-03T13:27:44
2024-10-04T09:23:36
null
COLLABORATOR
null
See https://huggingface.co/datasets/allenai/objaverse for example
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7195/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7194/comments
https://api.github.com/repos/huggingface/datasets/issues/7194/events
https://github.com/huggingface/datasets/issues/7194
2,563,364,199
I_kwDODunzps6YydVn
7,194
datasets.exceptions.DatasetNotFoundError for private dataset
{ "login": "kdutia", "id": 20212179, "node_id": "MDQ6VXNlcjIwMjEyMTc5", "avatar_url": "https://avatars.githubusercontent.com/u/20212179?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kdutia", "html_url": "https://github.com/kdutia", "followers_url": "https://api.github.com/users/kdutia/followers", "following_url": "https://api.github.com/users/kdutia/following{/other_user}", "gists_url": "https://api.github.com/users/kdutia/gists{/gist_id}", "starred_url": "https://api.github.com/users/kdutia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kdutia/subscriptions", "organizations_url": "https://api.github.com/users/kdutia/orgs", "repos_url": "https://api.github.com/users/kdutia/repos", "events_url": "https://api.github.com/users/kdutia/events{/privacy}", "received_events_url": "https://api.github.com/users/kdutia/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[]
closed
false
null
[]
null
[ "Actually there is no such dataset available, that is why you are getting that error.", "Fixed with @kdutia in Slack chat. Generating a new token fixed this issue. " ]
2024-10-03T07:49:36
2024-10-03T10:09:28
2024-10-03T10:09:28
NONE
null
### Describe the bug The following Python code tries to download a private dataset and fails with the error `datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed.`. Downloading a public dataset doesn't work. ``` py from datasets import load_dataset _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly") ``` This seems to be just an issue with my machine config as the code above works with a colleague's machine. So far I have tried: - logging back out and in from the Huggingface CLI using `huggingface-cli logout` - manually removing the token cache at `/Users/kalyan/.cache/huggingface/token` (found using `huggingface-cli env`) - manually passing a token in `load_dataset` My output of `huggingface-cli whoami`: ``` kdutia orgs: ClimatePolicyRadar ``` ### Steps to reproduce the bug ``` python Python 3.12.2 (main, Feb 6 2024, 20:19:44) [Clang 15.0.0 (clang-1500.1.0.2.5)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> from datasets import load_dataset >>> _ = load_dataset("ClimatePolicyRadar/all-document-text-data-weekly") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 2074, in load_dataset builder_instance = load_dataset_builder( ^^^^^^^^^^^^^^^^^^^^^ File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1795, in load_dataset_builder dataset_module = dataset_module_factory( ^^^^^^^^^^^^^^^^^^^^^^^ File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1659, in dataset_module_factory raise e1 from None File "/Users/kalyan/Library/Caches/pypoetry/virtualenvs/open-data-cnKQNmjn-py3.12/lib/python3.12/site-packages/datasets/load.py", line 1597, in dataset_module_factory raise DatasetNotFoundError(f"Dataset '{path}' doesn't exist on the Hub or cannot be accessed.") from e datasets.exceptions.DatasetNotFoundError: Dataset 'ClimatePolicyRadar/all-document-text-data-weekly' doesn't exist on the Hub or cannot be accessed. >>> ``` ### Expected behavior The dataset downloads successfully. 
### Environment info From `huggingface-cli env`: ``` - huggingface_hub version: 0.25.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.12.2 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Running in Google Colab Enterprise ?: No - Token path ?: /Users/kalyan/.cache/huggingface/token - Has saved token ?: True - Who am I ?: kdutia - Configured git credential helpers: osxkeychain - FastAI: N/A - Tensorflow: N/A - Torch: N/A - Jinja2: 3.1.4 - Graphviz: N/A - keras: N/A - Pydot: N/A - Pillow: N/A - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 2.1.1 - pydantic: N/A - aiohttp: 3.10.8 - ENDPOINT: https://huggingface.co - HF_HUB_CACHE: /Users/kalyan/.cache/huggingface/hub - HF_ASSETS_CACHE: /Users/kalyan/.cache/huggingface/assets - HF_TOKEN_PATH: /Users/kalyan/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False - HF_HUB_ETAG_TIMEOUT: 10 - HF_HUB_DOWNLOAD_TIMEOUT: 10 ``` from `datasets-cli env`: ``` - `datasets` version: 3.0.1 - Platform: macOS-14.2.1-arm64-arm-64bit - Python version: 3.12.2 - `huggingface_hub` version: 0.25.1 - PyArrow version: 17.0.0 - Pandas version: 2.2.3 - `fsspec` version: 2024.6.1 ```
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7194/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7193/comments
https://api.github.com/repos/huggingface/datasets/issues/7193/events
https://github.com/huggingface/datasets/issues/7193
2,562,392,887
I_kwDODunzps6YuwM3
7,193
Support of num_workers (multiprocessing) in map for IterableDataset
{ "login": "getao", "id": 12735658, "node_id": "MDQ6VXNlcjEyNzM1NjU4", "avatar_url": "https://avatars.githubusercontent.com/u/12735658?v=4", "gravatar_id": "", "url": "https://api.github.com/users/getao", "html_url": "https://github.com/getao", "followers_url": "https://api.github.com/users/getao/followers", "following_url": "https://api.github.com/users/getao/following{/other_user}", "gists_url": "https://api.github.com/users/getao/gists{/gist_id}", "starred_url": "https://api.github.com/users/getao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/getao/subscriptions", "organizations_url": "https://api.github.com/users/getao/orgs", "repos_url": "https://api.github.com/users/getao/repos", "events_url": "https://api.github.com/users/getao/events{/privacy}", "received_events_url": "https://api.github.com/users/getao/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "I was curious about the same - since map is applied on the fly I was assuming that setting num_workers>1 in DataLoader would effectively do the map in parallel, have you tried that?" ]
2024-10-02T18:34:04
2024-10-03T09:54:15
null
NONE
null
### Feature request Currently, IterableDataset doesn't support setting num_workers in .map(), which results in slow processing. Could we add support for it? Since .map() can run in batched fashion (e.g., batch_size defaults to 1000 in datasets), it seems as doable for IterableDataset as it is for the regular Dataset. ### Motivation Improving data processing efficiency ### Your contribution Testing
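The idea in this request can be sketched outside the `datasets` API with a batched map fanned out across a worker pool. The helpers `batched` and `parallel_batched_map` below are hypothetical names, a thread pool stands in for true multiprocessing, and a plain iterable stands in for an IterableDataset:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import islice


def batched(iterable, batch_size):
    # Yield successive lists of up to batch_size items from the iterable.
    it = iter(iterable)
    while batch := list(islice(it, batch_size)):
        yield batch


def parallel_batched_map(iterable, fn, batch_size=1000, num_workers=4):
    # Apply fn to each batch across num_workers workers, yielding
    # the mapped items lazily and in their original order.
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        for result in pool.map(fn, batched(iterable, batch_size)):
            yield from result


# Example: double each element, processing batches of 3 on 2 workers.
doubled = parallel_batched_map(
    range(10), lambda batch: [x * 2 for x in batch], batch_size=3, num_workers=2
)
print(list(doubled))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

Because `ThreadPoolExecutor.map` preserves input order, the output stream matches what a sequential `.map()` would produce; a real implementation in `iterable_dataset.py` would additionally need to handle shuffling and state.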
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7193/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/7192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/7192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/7192/comments
https://api.github.com/repos/huggingface/datasets/issues/7192/events
https://github.com/huggingface/datasets/issues/7192
2,562,289,642
I_kwDODunzps6YuW_q
7,192
Add repeat() for iterable datasets
{ "login": "alex-hh", "id": 5719745, "node_id": "MDQ6VXNlcjU3MTk3NDU=", "avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alex-hh", "html_url": "https://github.com/alex-hh", "followers_url": "https://api.github.com/users/alex-hh/followers", "following_url": "https://api.github.com/users/alex-hh/following{/other_user}", "gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions", "organizations_url": "https://api.github.com/users/alex-hh/orgs", "repos_url": "https://api.github.com/users/alex-hh/repos", "events_url": "https://api.github.com/users/alex-hh/events{/privacy}", "received_events_url": "https://api.github.com/users/alex-hh/received_events", "type": "User", "user_view_type": "public", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "perhaps concatenate_datasets can already be used to achieve almost the same effect? ", "`concatenate_datasets` does the job when there is a finite number of repetitions, but in case of `.repeat()` forever we need a new logic in `iterable_dataset.py`" ]
2024-10-02T17:48:13
2024-10-03T12:53:34
null
CONTRIBUTOR
null
### Feature request It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, to give the user complete control over starting and ending iteration. An IterableDataset.repeat(n) function could do this automatically ### Motivation This feature was discussed in this issue https://github.com/huggingface/datasets/issues/7147, and would resolve the need for the hack of using `interleave_datasets` with probability 0 as a way to achieve this functionality. An additional benefit might be the simplification of the use of iterable datasets in a distributed setting: If the user can assume that datasets will repeat indefinitely, then issues around different numbers of samples appearing on different devices (e.g. https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719) can potentially be straightforwardly resolved by simply doing: ids.repeat(None).take(n_samples_per_epoch) ### Your contribution I'm not familiar enough with the codebase to assess how straightforward this would be to implement. If it might be very straightforward, I could possibly have a go.
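The requested semantics can be sketched with a plain generator. The `repeat` helper below is hypothetical, not part of the `datasets` API; it takes a factory callable so that each pass re-creates the underlying iterable, mirroring how an IterableDataset can be iterated from the start more than once:

```python
from itertools import islice


def repeat(iterable_factory, n=None):
    # Yield all items from a fresh iterable n times, or forever when n is None.
    count = 0
    while n is None or count < n:
        yield from iterable_factory()
        count += 1


# Finite repetition:
print(list(repeat(lambda: [1, 2, 3], n=2)))  # [1, 2, 3, 1, 2, 3]

# Infinite repetition, bounded by take()-style slicing as in the proposal:
print(list(islice(repeat(lambda: [1, 2, 3]), 7)))  # [1, 2, 3, 1, 2, 3, 1]
```

The second example corresponds to the proposed `ids.repeat(None).take(n_samples_per_epoch)` pattern: the infinite stream itself is never exhausted, and the consumer decides where an epoch ends.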
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/7192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/7192/timeline
null
null
null
null
false