Dataset schema (column name: type, as reported by the dataset viewer):

- url: string (length 60-61)
- repository_url: string (1 distinct value)
- labels_url: string (length 74-75)
- comments_url: string (length 69-70)
- events_url: string (length 67-68)
- html_url: string (length 49-51)
- id: int64 (620M-2.29B)
- node_id: string (length 18-32)
- number: int64 (153-6.9k)
- title: string (length 9-244)
- user: dict
- labels: list (length 0-3)
- state: string (1 distinct value)
- locked: bool (1 class)
- assignee: dict
- assignees: list (length 0-3)
- milestone: dict
- comments: sequence (length 0-30)
- created_at: timestamp[s]
- updated_at: timestamp[s]
- closed_at: null
- author_association: string (4 distinct values)
- active_lock_reason: null
- draft: bool (2 classes)
- pull_request: dict
- body: string (length 10-33.9k, nullable ⌀)
- reactions: dict
- timeline_url: string (length 69-70)
- performed_via_github_app: null
- state_reason: string (1 distinct value)
- is_pull_request: bool (2 classes)

url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6898/comments | https://api.github.com/repos/huggingface/datasets/issues/6898/events | https://github.com/huggingface/datasets/pull/6898 | 2,294,432,108 | PR_kwDODunzps5vWJ9v | 6,898 | Fix YAML error in README files appearing on GitHub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6898). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"After this PR, the README file looks like:\r\n\r\n![Screenshot from 2024-05-14 14-19-29](https://github.com/huggingface/datasets/assets/8515462/1f665a06-98be-4dd7-ba7e-7cc025489503)\r\n"
] | 2024-05-14T05:21:57 | 2024-05-14T12:21:02 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6898",
"html_url": "https://github.com/huggingface/datasets/pull/6898",
"diff_url": "https://github.com/huggingface/datasets/pull/6898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6898.patch",
"merged_at": null
} | Fix YAML error in README files appearing on GitHub.
See error message:
![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/7984cc4e-96ee-4e83-99a4-4c0c5791fa05)
Fix #6897. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6898/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6897/comments | https://api.github.com/repos/huggingface/datasets/issues/6897/events | https://github.com/huggingface/datasets/issues/6897 | 2,293,428,243 | I_kwDODunzps6IsvAT | 6,897 | datasets template guide :: issue in documentation YAML | {
"login": "bghira",
"id": 59658056,
"node_id": "MDQ6VXNlcjU5NjU4MDU2",
"avatar_url": "https://avatars.githubusercontent.com/u/59658056?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bghira",
"html_url": "https://github.com/bghira",
"followers_url": "https://api.github.com/users/bghira/followers",
"following_url": "https://api.github.com/users/bghira/following{/other_user}",
"gists_url": "https://api.github.com/users/bghira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bghira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bghira/subscriptions",
"organizations_url": "https://api.github.com/users/bghira/orgs",
"repos_url": "https://api.github.com/users/bghira/repos",
"events_url": "https://api.github.com/users/bghira/events{/privacy}",
"received_events_url": "https://api.github.com/users/bghira/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML error message at the top of the page: \r\n![Screenshot from 2024-05-14 06-58-02](https://github.com/huggingface/datasets/assets/8515462/28409eb4-99e7-4b24-8eaa-21a65a8f23b2)\r\n\r\nI am proposing a change to make the YAML error disappear.",
"thanks albert! i looked at it for a while to figure it out. i think the `raw` view option is the correct way to look at it?"
] | 2024-05-13T17:33:59 | 2024-05-14T12:08:50 | null | NONE | null | null | null | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there.
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the document remains functional
### Expected behavior
I think the YAML block should be displayed or ignored.
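A hedged illustration of why the renderer complains, assuming the template's front matter contains instruction text roughly like the string below (the real template content differs):

```python
# Minimal sketch: GitHub tries to parse the text between the leading `---`
# markers as YAML front matter. Instructional prose with extra colons is
# not valid YAML, so the parser rejects it. Requires PyYAML.
import yaml

front_matter = """\
YAML tags (full spec here: https://github.com/huggingface/hub-docs):
- copy-paste the tags obtained with the tagging app
"""

try:
    yaml.safe_load(front_matter)
except yaml.YAMLError as err:
    print(f"GitHub surfaces a parse error much like this one: {err}")
```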
### Environment info
N/A | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6897/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6896/comments | https://api.github.com/repos/huggingface/datasets/issues/6896/events | https://github.com/huggingface/datasets/issues/6896 | 2,293,176,061 | I_kwDODunzps6Irxb9 | 6,896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | {
"login": "finiteautomata",
"id": 167943,
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finiteautomata",
"html_url": "https://github.com/finiteautomata",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-13T15:41:57 | 2024-05-13T15:44:48 | null | NONE | null | null | null | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipython-input-1-d6a3c721d3b8>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("pysentimiento/spanish-tweets-small")
3 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150
2151 # Download and prepare data
-> 2152 builder_instance.download_and_prepare(
2153 download_config=download_config,
2154 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
946 if num_proc is not None:
947 prepare_split_kwargs["num_proc"] = num_proc
--> 948 self._download_and_prepare(
949 dl_manager=dl_manager,
950 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1059
1060 if verification_mode == VerificationMode.BASIC_CHECKS or verification_mode == VerificationMode.ALL_CHECKS:
-> 1061 verify_splits(self.info.splits, split_dict)
1062
1063 # Update the info object with the splits.
[/usr/local/lib/python3.10/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_splits(expected_splits, recorded_splits)
98 ]
99 if len(bad_splits) > 0:
--> 100 raise NonMatchingSplitsSizesError(str(bad_splits))
101 logger.info("All the splits matched successfully.")
102
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=82649695458, num_examples=597433111, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=3358310095, num_examples=24898932, shard_lengths=[3626991, 3716991, 4036990, 3506990, 3676990, 3716990, 2616990], dataset_name='spanish-tweets-small')}]
```
I think I had updated this dataset; it might be related to #6271.
It works fine as late as `2.10.0`, but not from `2.13.0` onwards.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("pysentimiento/spanish-tweets-small")
```
You can run it in [this notebook](https://colab.research.google.com/drive/1FdhqLiVimHIlkn7B54DbhizeQ4U3vGVl#scrollTo=YgA50cBSibUg)
### Expected behavior
Load the dataset without any error
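For anyone hitting this before a fix lands, a hedged workaround sketch; it skips verification rather than repairing the stale cached split metadata, so use it with care:

```python
from datasets import load_dataset

# Skip the split-size checks that compare against the stale cached metadata
ds = load_dataset("pysentimiento/spanish-tweets-small", verification_mode="no_checks")

# Or rebuild the cache so the recorded split sizes are regenerated
ds = load_dataset("pysentimiento/spanish-tweets-small", download_mode="force_redownload")
```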
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.20.3
- PyArrow version: 14.0.2
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6896/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6895/comments | https://api.github.com/repos/huggingface/datasets/issues/6895/events | https://github.com/huggingface/datasets/pull/6895 | 2,292,993,156 | PR_kwDODunzps5vRK8P | 6,895 | Document that to_json defaults to JSON Lines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6895). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-05-13T14:22:34 | 2024-05-13T14:25:08 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6895",
"html_url": "https://github.com/huggingface/datasets/pull/6895",
"diff_url": "https://github.com/huggingface/datasets/pull/6895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6895.patch",
"merged_at": null
Document that `Dataset.to_json` defaults to JSON Lines, by adding an explanation to the corresponding docstring.
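A hedged sketch of the documented default, and of how to get a single JSON array instead (the extra kwargs are forwarded to the underlying JSON writer):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})
ds.to_json("data.jsonl")                                # default: JSON Lines
ds.to_json("data.json", lines=False, orient="records")  # single JSON array
```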
Fix #6894. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6895/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6894/comments | https://api.github.com/repos/huggingface/datasets/issues/6894/events | https://github.com/huggingface/datasets/issues/6894 | 2,292,840,226 | I_kwDODunzps6Iqfci | 6,894 | Better document defaults of to_json | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-05-13T13:30:54 | 2024-05-13T13:30:55 | null | MEMBER | null | null | null | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6894/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6892/comments | https://api.github.com/repos/huggingface/datasets/issues/6892/events | https://github.com/huggingface/datasets/pull/6892 | 2,291,201,347 | PR_kwDODunzps5vLIlp | 6,892 | Add support for categorical/dictionary types | {
"login": "EthanSteinberg",
"id": 342233,
"node_id": "MDQ6VXNlcjM0MjIzMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/342233?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanSteinberg",
"html_url": "https://github.com/EthanSteinberg",
"followers_url": "https://api.github.com/users/EthanSteinberg/followers",
"following_url": "https://api.github.com/users/EthanSteinberg/following{/other_user}",
"gists_url": "https://api.github.com/users/EthanSteinberg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EthanSteinberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EthanSteinberg/subscriptions",
"organizations_url": "https://api.github.com/users/EthanSteinberg/orgs",
"repos_url": "https://api.github.com/users/EthanSteinberg/repos",
"events_url": "https://api.github.com/users/EthanSteinberg/events{/privacy}",
"received_events_url": "https://api.github.com/users/EthanSteinberg/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-12T07:15:08 | 2024-05-12T07:15:37 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6892",
"html_url": "https://github.com/huggingface/datasets/pull/6892",
"diff_url": "https://github.com/huggingface/datasets/pull/6892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6892.patch",
"merged_at": null
} | Arrow has a very useful dictionary/categorical type (https://arrow.apache.org/docs/python/generated/pyarrow.dictionary.html). This data type has significant speed, memory and disk benefits over pa.string() when there are only a few unique text strings in a column.
Unfortunately, Hugging Face Datasets currently does not support this type, so it cannot natively read many Parquet files that use this data type. This PR adds support for Hugging Face Datasets to read categorical/dictionary data.
Note: This PR functions by simply converting those dictionary/categorical types to strings. This means that huggingface datasets cannot take advantage of the compute benefits of categoricals, but it significantly simplifies logic. At this time, I do not think it makes sense to optimize categorical support within huggingface datasets and that we should only try to optimize later, if necessary.
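For illustration, a hedged sketch of the conversion described above at the pyarrow level (not the PR's exact code):

```python
import pyarrow as pa

# Dictionary-encode a string column, then cast it back to plain strings
arr = pa.array(["a", "b", "a", "a"]).dictionary_encode()
print(arr.type)               # dictionary<values=string, indices=int32, ...>
print(arr.cast(pa.string()))  # decoded back to a plain string array
```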
Closes #5706 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6892/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6892/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6890/comments | https://api.github.com/repos/huggingface/datasets/issues/6890/events | https://github.com/huggingface/datasets/issues/6890 | 2,288,699,041 | I_kwDODunzps6Iasah | 6,890 | add `with_transform` and/or `set_transform` to IterableDataset | {
"login": "not-lain",
"id": 70411813,
"node_id": "MDQ6VXNlcjcwNDExODEz",
"avatar_url": "https://avatars.githubusercontent.com/u/70411813?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/not-lain",
"html_url": "https://github.com/not-lain",
"followers_url": "https://api.github.com/users/not-lain/followers",
"following_url": "https://api.github.com/users/not-lain/following{/other_user}",
"gists_url": "https://api.github.com/users/not-lain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/not-lain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/not-lain/subscriptions",
"organizations_url": "https://api.github.com/users/not-lain/orgs",
"repos_url": "https://api.github.com/users/not-lain/repos",
"events_url": "https://api.github.com/users/not-lain/events{/privacy}",
"received_events_url": "https://api.github.com/users/not-lain/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-05-10T01:00:12 | 2024-05-10T01:00:46 | null | NONE | null | null | null | ### Feature request
When working with a really large dataset, it would save a lot of time (and compute resources) to have either `with_transform` or `set_transform` from the `Dataset` class available on `IterableDataset`, instead of waiting for the entire dataset to be mapped.
### Motivation
I don't want to wait for a really long dataset to finish mapping; this would give `IterableDataset` an extra advantage over the `Dataset` class,
reducing time and resource usage (see the sketch below for the closest existing behavior).
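As a point of comparison, a minimal sketch of the closest existing behavior: `IterableDataset.map` is already applied lazily during iteration, so no upfront mapping pass is needed (the dataset name here is just an example):

```python
from datasets import load_dataset

ids = load_dataset("imdb", split="train", streaming=True)  # IterableDataset
ids = ids.map(lambda ex: {"text_len": len(ex["text"])})    # lazy, per example
print(next(iter(ids))["text_len"])                         # applied on the fly
```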
### Your contribution
I am a little busy with my job search lately, but I will post about this feature on my social media.
Apologies again (dad is going to kick me out soon); if I ever have some free time I will contribute to making this a reality, but that's going to be hard
/ (┬┬﹏┬┬)\ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6890/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6887/comments | https://api.github.com/repos/huggingface/datasets/issues/6887/events | https://github.com/huggingface/datasets/issues/6887 | 2,286,786,396 | I_kwDODunzps6ITZdc | 6,887 | FAISS load to None | {
"login": "brainer3220",
"id": 40418544,
"node_id": "MDQ6VXNlcjQwNDE4NTQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/40418544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brainer3220",
"html_url": "https://github.com/brainer3220",
"followers_url": "https://api.github.com/users/brainer3220/followers",
"following_url": "https://api.github.com/users/brainer3220/following{/other_user}",
"gists_url": "https://api.github.com/users/brainer3220/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brainer3220/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brainer3220/subscriptions",
"organizations_url": "https://api.github.com/users/brainer3220/orgs",
"repos_url": "https://api.github.com/users/brainer3220/repos",
"events_url": "https://api.github.com/users/brainer3220/events{/privacy}",
"received_events_url": "https://api.github.com/users/brainer3220/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-09T02:43:50 | 2024-05-09T02:43:50 | null | NONE | null | null | null | ### Describe the bug
I've used FAISS with Datasets and saved the FAISS index to disk.
Then I load the saved FAISS index without error, but `ds` ends up as None:
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transforms(example['image']).unsqueeze(0)).squeeze()}, batch_size=64)
ds_with_embeddings.add_faiss_index(column='embeddings')
ds_with_embeddings.save_faiss_index('embeddings', 'index.faiss')
```
# 2.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
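A hedged sketch of the intended usage, assuming the dataset and the `index.faiss` file from step 1: `load_faiss_index` attaches the index to the dataset in place and returns None, so its return value should not be assigned back to `ds`:

```python
import numpy as np

# In place: adds the "embeddings" index to `ds`; the call itself returns None
ds.load_faiss_index('embeddings', 'index.faiss')

query = np.random.rand(768).astype(np.float32)  # hypothetical size; must match the indexed vectors
scores, examples = ds.get_nearest_examples('embeddings', query, k=5)
```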
### Expected behavior
The loaded index should be added to the dataset.
### Environment info
Google Colab, SageMaker Notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6887/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6886/comments | https://api.github.com/repos/huggingface/datasets/issues/6886/events | https://github.com/huggingface/datasets/issues/6886 | 2,286,328,984 | I_kwDODunzps6IRpyY | 6,886 | load_dataset with data_dir and cache_dir set fail with not supported | {
"login": "fah",
"id": 322496,
"node_id": "MDQ6VXNlcjMyMjQ5Ng==",
"avatar_url": "https://avatars.githubusercontent.com/u/322496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fah",
"html_url": "https://github.com/fah",
"followers_url": "https://api.github.com/users/fah/followers",
"following_url": "https://api.github.com/users/fah/following{/other_user}",
"gists_url": "https://api.github.com/users/fah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fah/subscriptions",
"organizations_url": "https://api.github.com/users/fah/orgs",
"repos_url": "https://api.github.com/users/fah/repos",
"events_url": "https://api.github.com/users/fah/events{/privacy}",
"received_events_url": "https://api.github.com/users/fah/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-08T19:52:35 | 2024-05-08T19:58:11 | null | NONE | null | null | null | ### Describe the bug
With Python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache")
```
This fails in the last line with
```log
Found cached dataset librispeech_asr (file:///Users/as/Documents/Project/git/audio2vec/cache/librispeech_asr/clean-data_dir=data/2.1.0/cff5df6e7955c80a67f80e27e7e655de71c689e2d2364bece785b972acb37fe7)
Traceback (most recent call last):
File "/Users/as/Documents/Project/git/audio2vec/src/music2vec-v1.py", line 7, in <module>
dataset_clean = load_dataset("librispeech_asr", "clean", split="validation", data_dir="data", cache_dir="cache")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/load.py", line 1810, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/as/anaconda3/lib/python3.11/site-packages/datasets/builder.py", line 1113, in as_dataset
raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
```
### Steps to reproduce the bug
I setup an venv with requirements.txt
```txt
transformers==4.40.2
torch==2.2.2
datasets==2.16.0
fsspec==2023.9.2
```
pip freeze is:
```
aiohttp==3.9.5
aiosignal==1.3.1
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
datasets==2.16.0
dill==0.3.7
filelock==3.14.0
frozenlist==1.4.1
fsspec==2023.9.2
huggingface-hub==0.23.0
idna==3.7
Jinja2==3.1.4
MarkupSafe==2.1.5
mpmath==1.3.0
multidict==6.0.5
multiprocess==0.70.15
networkx==3.3
numpy==1.26.4
packaging==24.0
pandas==2.2.2
pyarrow==16.0.0
pyarrow-hotfix==0.6
python-dateutil==2.9.0.post0
pytz==2024.1
PyYAML==6.0.1
regex==2024.4.28
requests==2.31.0
safetensors==0.4.3
six==1.16.0
sympy==1.12
tokenizers==0.19.1
torch==2.2.2
tqdm==4.66.4
transformers==4.40.2
typing_extensions==4.11.0
tzdata==2024.1
urllib3==2.2.1
xxhash==3.4.1
yarl==1.9.4
```
I execute this on a M1 Mac.
### Expected behavior
I don't understand the error message. Why is "local" caching not supported? Would it be possible to add a hint to the error message about how to solve this issue?
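A hedged debugging sketch, on the assumption that the `anaconda3` paths in the traceback mean an older `datasets` copy is being imported instead of the pinned one (this error is a known symptom of a `datasets`/`fsspec` version mismatch):

```python
import datasets
import fsspec

print(datasets.__version__, fsspec.__version__)  # compare with the requirements.txt pins
print(datasets.__file__)  # shows which installation is actually being imported
```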
### Environment info
source ....
python -u example.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6886/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6883/comments | https://api.github.com/repos/huggingface/datasets/issues/6883/events | https://github.com/huggingface/datasets/pull/6883 | 2,284,808,399 | PR_kwDODunzps5u1sL1 | 6,883 | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6883). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-05-08T06:43:29 | 2024-05-08T09:38:27 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6883",
"html_url": "https://github.com/huggingface/datasets/pull/6883",
"diff_url": "https://github.com/huggingface/datasets/pull/6883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6883.patch",
"merged_at": null
} | Require Pillow >= 9.4.0 to avoid AttributeError when loading image dataset.
The `PIL.Image.ExifTags` module that we use in our code was only implemented in Pillow 9.4.0: https://github.com/python-pillow/Pillow/commit/24a5405a9f7ea22f28f9c98b3e407292ea5ee1d3
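For reference, a hedged sketch of a version-agnostic alternative (not what this PR does; the PR pins Pillow instead): EXIF tag ID 274 is the standard Orientation tag, available without the `ExifTags` namespace:

```python
import PIL.Image
import PIL.ImageOps

_ORIENTATION = 274  # equals PIL.Image.ExifTags.Base.Orientation on Pillow >= 9.4.0

def exif_transpose_if_needed(image: PIL.Image.Image) -> PIL.Image.Image:
    # getexif() predates Pillow 9.4.0, so this also works on older versions
    if image.getexif().get(_ORIENTATION) is not None:
        image = PIL.ImageOps.exif_transpose(image)
    return image
```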
The bug #6881 was introduced in datasets-2.19.0 by this PR:
- #6739
Fix #6881. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6883/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6882/comments | https://api.github.com/repos/huggingface/datasets/issues/6882/events | https://github.com/huggingface/datasets/issues/6882 | 2,284,803,158 | I_kwDODunzps6IL1RW | 6,882 | Connection Error When Using By-pass Proxies | {
"login": "MRNOBODY-ZST",
"id": 78351684,
"node_id": "MDQ6VXNlcjc4MzUxNjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/78351684?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MRNOBODY-ZST",
"html_url": "https://github.com/MRNOBODY-ZST",
"followers_url": "https://api.github.com/users/MRNOBODY-ZST/followers",
"following_url": "https://api.github.com/users/MRNOBODY-ZST/following{/other_user}",
"gists_url": "https://api.github.com/users/MRNOBODY-ZST/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MRNOBODY-ZST/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MRNOBODY-ZST/subscriptions",
"organizations_url": "https://api.github.com/users/MRNOBODY-ZST/orgs",
"repos_url": "https://api.github.com/users/MRNOBODY-ZST/repos",
"events_url": "https://api.github.com/users/MRNOBODY-ZST/events{/privacy}",
"received_events_url": "https://api.github.com/users/MRNOBODY-ZST/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-08T06:40:14 | 2024-05-08T06:40:14 | null | NONE | null | null | null | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel. After exporting HTTP_PROXY and HTTPS_PROXY to the port that Clash provides 🤔, loading a dataset runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f969d391870>: Failed to establish a new connection: [Errno 111] Connection refused'))")))"
I have already read the documentation provided on the Hugging Face site, but I didn't see detailed instructions on how to set up proxies for this library.
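For what it's worth, a hedged sketch of passing proxies explicitly instead of relying on environment variables; `DownloadConfig` accepts a `proxies` dict, and the port below is an assumption matching Clash's common default:

```python
from datasets import DownloadConfig, load_dataset

proxies = {"http": "http://127.0.0.1:7890", "https": "http://127.0.0.1:7890"}
ds = load_dataset("imdb", download_config=DownloadConfig(proxies=proxies))
```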
### Steps to reproduce the bug
1. Turn on any proxy software like Clash / ShadowsocksR, etc.
2. Export the system variables to the port provided by your proxy software in WSL (it's OK for other applications to use the proxy, except the datasets library)
3. Load any dataset from Hugging Face online
### Expected behavior
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
Cell In[33], [line 3](vscode-notebook-cell:?execution_count=33&line=3)
[1](vscode-notebook-cell:?execution_count=33&line=1) from datasets import load_metric
----> [3](vscode-notebook-cell:?execution_count=33&line=3) metric = load_metric("seqeval")
File ~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46, in deprecated.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
[44](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:44) warnings.warn(warning_msg, category=FutureWarning, stacklevel=2)
[45](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:45) _emitted_deprecation_warnings.add(func_hash)
---> [46](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/deprecation_utils.py:46) return deprecated_function(*args, **kwargs)
File ~/.local/lib/python3.10/site-packages/datasets/load.py:2104, in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, trust_remote_code, **metric_init_kwargs)
[2101](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2101) warnings.filterwarnings("ignore", message=".*https://huggingface.co/docs/evaluate$", category=FutureWarning)
[2103](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2103) download_mode = DownloadMode(download_mode or DownloadMode.REUSE_DATASET_IF_EXISTS)
-> [2104](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2104) metric_module = metric_module_factory(
[2105](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2105) path,
[2106](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2106) revision=revision,
[2107](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2107) download_config=download_config,
[2108](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2108) download_mode=download_mode,
[2109](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2109) trust_remote_code=trust_remote_code,
[2110](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2110) ).module_path
[2111](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2111) metric_cls = import_main_class(metric_module, dataset=False)
[2112](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2112) metric = metric_cls(
[2113](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2113) config_name=config_name,
[2114](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/load.py:2114) process_id=process_id,
...
--> [633](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:633) raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
[634](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:634) elif response is not None:
[635](https://vscode-remote+wsl-002bubuntu-002d22-002e04.vscode-resource.vscode-cdn.net/home/noodle/Transformers-Tutorials/LayoutLMv3/~/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py:635) raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (Caused by SSLError(SSLEOFError(8, '[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation of protocol (_ssl.c:1007)')))")))
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.0
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6882/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6881/comments | https://api.github.com/repos/huggingface/datasets/issues/6881/events | https://github.com/huggingface/datasets/issues/6881 | 2,284,794,009 | I_kwDODunzps6ILzCZ | 6,881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-05-08T06:33:57 | 2024-05-08T06:33:58 | null | MEMBER | null | null | null | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1391 # `IterableDataset` automatically fills missing columns with None.
1392 # This is done with `_apply_feature_types_on_example`.
-> 1393 example = _apply_feature_types_on_example(
1394 example, self.features, token_per_repo_id=self._token_per_repo_id
1395 )
~/huggingface/datasets/src/datasets/iterable_dataset.py in _apply_feature_types_on_example(example, features, token_per_repo_id)
1080 encoded_example = features.encode_example(example)
1081 # Decode example for Audio feature, e.g.
-> 1082 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
1083 return decoded_example
1084
~/huggingface/datasets/src/datasets/features/features.py in decode_example(self, example, token_per_repo_id)
1974
-> 1975 return {
1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
~/huggingface/datasets/src/datasets/features/features.py in <dictcomp>(.0)
1974
1975 return {
-> 1976 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
1977 if self._column_requires_decoding[column_name]
1978 else value
~/huggingface/datasets/src/datasets/features/features.py in decode_nested_example(schema, obj, token_per_repo_id)
1339 # we pass the token to read and decode files from private repositories in streaming mode
1340 if obj is not None and schema.decode:
-> 1341 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
1342 return obj
1343
~/huggingface/datasets/src/datasets/features/image.py in decode_example(self, value, token_per_repo_id)
187 image = PIL.Image.open(BytesIO(bytes_))
188 image.load() # to avoid "Too many open files" errors
--> 189 if image.getexif().get(PIL.Image.ExifTags.Base.Orientation) is not None:
190 image = PIL.ImageOps.exif_transpose(image)
191 if self.mode and self.mode != image.mode:
~/huggingface/datasets/venv/lib/python3.9/site-packages/PIL/Image.py in __getattr__(name)
75 )
76 return categories[name]
---> 77 raise AttributeError(f"module '{__name__}' has no attribute '{name}'")
78
79
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
### Environment info
Since datasets 2.19.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6881/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6880/comments | https://api.github.com/repos/huggingface/datasets/issues/6880/events | https://github.com/huggingface/datasets/issues/6880 | 2,283,278,337 | I_kwDODunzps6IGBAB | 6,880 | Webdataset: KeyError: 'png' on some datasets when streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b` as the grouping `__key__`, and `png` as the additional key to be added to the example\r\n\r\nTo get the expected behavior, the basenames of the files within the TARs should be fixed so that they only contain a single dot, the one separating the file extension.",
"I reopen it because I think we should try to give a clearer error message with a specific error code.\r\n\r\nFor now, it's hard for the user to understand where the error comes from (not everybody knows the subtleties of the webdataset filename structure).\r\n\r\n(we can transfer it to https://github.com/huggingface/dataset-viewer if it fits better there)",
"same with .jpg -> https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions\r\n\r\n```\r\nError code: DatasetGenerationError\r\nException: DatasetGenerationError\r\nMessage: An error occurred while generating the dataset\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1748, in _prepare_split_single\r\n for key, record in generator:\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 818, in wrapped\r\n for item in generator(*args, **kwargs):\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py\", line 109, in _generate_examples\r\n example[field_name] = {\"path\": example[\"__key__\"] + \".\" + field_name, \"bytes\": example[field_name]}\r\n KeyError: 'jpg'\r\n \r\n The above exception was the direct cause of the following exception:\r\n \r\n Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 1316, in compute_config_parquet_and_info_response\r\n parquet_operations, partial = stream_convert_to_parquet(\r\n File \"/src/services/worker/src/worker/job_runners/config/parquet_and_info.py\", line 909, in stream_convert_to_parquet\r\n builder._prepare_split(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1627, in _prepare_split\r\n for job_id, done, content in self._prepare_split_single(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1784, in _prepare_split_single\r\n raise DatasetGenerationError(\"An error occurred while generating the dataset\") from e\r\n datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset\r\n```\r\n",
"More details in the spec (https://docs.google.com/document/d/18OdLjruFNX74ILmgrdiCI9J1fQZuhzzRBCHV9URWto0/edit#heading=h.hkptaq2kct2s)\r\n\r\n> The prefix of a file is all directory components of the file plus the file name component up to the first “.” in the file name.\r\n> The last extension (i.e., the portion after the last “.”) in a file name determines the file type.\r\n\r\n> Example:\r\n\timages17/image194.left.jpg\r\n\timages17/image194.right.jpg\r\n\timages17/image194.json\r\n\timages17/image12.left.jpg\r\n\timages17/image12.json\r\n\timages17/image12.right.jpg\r\n\timages3/image1459.left.jpg\r\n> \t…\r\n> When reading this with a WebDataset library, you would get the following two dictionaries back in sequence:\r\n\r\n { “__key__”: “images17/image194”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n { “__key__”: “images17/image12”, “left.jpg”: b”...”, “right.jpg”: b”...”, “json”: b”...”}\r\n",
"OK, the issue is different in the latter case: some files are suffixed as `.jpeg`, and others as `.jpg` :)\r\n\r\nIs it a limitation of the webdataset format, or of the datasets library @lhoestq? And could we be able to give a clearer error?"
] | 2024-05-07T13:09:02 | 2024-05-14T20:34:05 | null | MEMBER | null | null | null | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
619M/619M [00:11<00:00, 57.4MB/s]
Generating train split:
970/0 [00:02<00:00, 534.94 examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1747 _time = time.time()
-> 1748 for key, record in generator:
1749 if max_shard_size is not None and writer._num_bytes > max_shard_size:
7 frames
[/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/webdataset/webdataset.py](https://localhost:8080/#) in _generate_examples(self, tar_paths, tar_iterators)
108 for field_name in image_field_names + audio_field_names:
--> 109 example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
110 yield f"{tar_idx}_{example_idx}", example
KeyError: 'png'
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[<ipython-input-2-8e0fbb7badc9>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 ds = load_dataset("tbone5563/tar_images")
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)
2607
2608 # Download and prepare data
-> 2609 builder_instance.download_and_prepare(
2610 download_config=download_config,
2611 download_mode=download_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
1025 if num_proc is not None:
1026 prepare_split_kwargs["num_proc"] = num_proc
-> 1027 self._download_and_prepare(
1028 dl_manager=dl_manager,
1029 verification_mode=verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs)
1787
1788 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs):
-> 1789 super()._download_and_prepare(
1790 dl_manager,
1791 verification_mode,
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1120 try:
1121 # Prepare split will record examples associated to the split
-> 1122 self._prepare_split(split_generator, **prepare_split_kwargs)
1123 except OSError as e:
1124 raise OSError(
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
1625 job_id = 0
1626 with pbar:
-> 1627 for job_id, done, content in self._prepare_split_single(
1628 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1629 ):
[/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id)
1782 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1783 e = e.__context__
-> 1784 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1785
1786 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
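The comments above trace this error to the first-dot grouping rule; here is a standalone illustration (assuming the loader splits on the first dot, as those comments describe):

```python
# Plain-Python illustration of the grouping rule; not the loader's actual code.
filename = "15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png"
key, _, field_name = filename.partition(".")
print(key)         # 15_Cohen_1-s2 -> used as the grouping __key__
print(field_name)  # 0-S0929664620300449-gr3_lrg-b.png -> so no 'png' field exists
``` | {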
"url": "https://api.github.com/repos/huggingface/datasets/issues/6880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6880/timeline | null | reopened | false |
https://api.github.com/repos/huggingface/datasets/issues/6879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6879/comments | https://api.github.com/repos/huggingface/datasets/issues/6879/events | https://github.com/huggingface/datasets/issues/6879 | 2,282,968,259 | I_kwDODunzps6IE1TD | 6,879 | Batched mapping does not raise an error if values for an existing column are empty | {
"login": "felix-schneider",
"id": 208336,
"node_id": "MDQ6VXNlcjIwODMzNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/208336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felix-schneider",
"html_url": "https://github.com/felix-schneider",
"followers_url": "https://api.github.com/users/felix-schneider/followers",
"following_url": "https://api.github.com/users/felix-schneider/following{/other_user}",
"gists_url": "https://api.github.com/users/felix-schneider/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felix-schneider/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felix-schneider/subscriptions",
"organizations_url": "https://api.github.com/users/felix-schneider/orgs",
"repos_url": "https://api.github.com/users/felix-schneider/repos",
"events_url": "https://api.github.com/users/felix-schneider/events{/privacy}",
"received_events_url": "https://api.github.com/users/felix-schneider/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-07T11:02:40 | 2024-05-07T11:02:40 | null | NONE | null | null | null | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the function returns an empty list for an existing column in the dataset. In that case, the dataset is silently resized to 0 rows.
### Steps to reproduce the bug
MWE:
```
import datasets
data = datasets.Dataset.from_dict({"test": [1]})
def mapping_fn(examples):
return {"test": [], "y": [1]}
data = data.map(mapping_fn, batched=True)
print(len(data))
```
Note that when returning `"x": []` the error is raised correctly, as it also is when returning `"test": [1,2]`.
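A defensive pattern that makes the failure explicit today is to validate column lengths before returning the batch. This is a minimal plain-Python sketch, reusing the MWE's `mapping_fn` (the `checked` wrapper is hypothetical, not part of the `datasets` API):

```python
# Hypothetical guard: raise when the returned columns disagree in length,
# instead of letting `map` silently resize the dataset to 0 rows.
def checked(fn):
    def wrapper(examples):
        out = fn(examples)
        lengths = {name: len(column) for name, column in out.items()}
        if len(set(lengths.values())) > 1:
            raise ValueError(f"Inconsistent column lengths: {lengths}")
        return out
    return wrapper

data = data.map(checked(mapping_fn), batched=True)  # now raises ValueError
```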
### Expected behavior
Expected an exception: `pyarrow.lib.ArrowInvalid: Column 1 named test expected length 1 but got length 0` or `pyarrow.lib.ArrowInvalid: Column 2 named y expected length 0 but got length 1`.
Any exception would be acceptable.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6879/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6878/comments | https://api.github.com/repos/huggingface/datasets/issues/6878/events | https://github.com/huggingface/datasets/pull/6878 | 2,282,879,491 | PR_kwDODunzps5uviBh | 6,878 | Create function to convert to parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6878). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-05-07T10:27:07 | 2024-05-07T10:30:01 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6878",
"html_url": "https://github.com/huggingface/datasets/pull/6878",
"diff_url": "https://github.com/huggingface/datasets/pull/6878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6878.patch",
"merged_at": null
} | Analogously to `delete_from_hub`, this PR:
- creates the Python function `convert_to_parquet` (usage sketched below)
- makes the corresponding CLI command use that function.
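A minimal usage sketch from Python; the import path and call shape are assumptions based on this PR's description rather than a confirmed API:

```python
# Hypothetical usage of the function introduced by this PR;
# the repo id is a placeholder.
from datasets.hub import convert_to_parquet

convert_to_parquet("username/dataset_name")
```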
This way, the functionality can be used both from a terminal and from a Python console. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6878/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6876/comments | https://api.github.com/repos/huggingface/datasets/issues/6876/events | https://github.com/huggingface/datasets/pull/6876 | 2,281,450,743 | PR_kwDODunzps5uqs46 | 6,876 | Unpin hfh | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6876). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"transformers 4.40.2 was release yesterday but not sure if it contains the fix",
"@lhoestq yes I knew transformers 4.40.2 was released yesterday, but I had checked that it does not contain the fix: only 2 bug fixes. That is why our CI continues failing in this PR. We will have to wait until the next minor version.",
"> If we urgently need some dev feature for dataset-viewer, I would suggest pushing the feature (cherry-picked) to a dedicated branch with 2.19.1 as its starting point (without opening a PR), and install datasets from that branch.\r\n\r\nI have done so:\r\n- Created a branch from 2.19.1: https://github.com/huggingface/datasets/tree/datasets-2.19.1-hotfix\r\n- Cherry-picked the commit in this PR: https://github.com/huggingface/datasets/commit/3638183e2f7e0dce8924e46e7cc21bf6d5d7adfb\r\n- Opened a PR in dataset-viewer to update datasets to this revision: https://github.com/huggingface/dataset-viewer/pull/2783"
] | 2024-05-06T18:10:49 | 2024-05-07T13:24:08 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6876",
"html_url": "https://github.com/huggingface/datasets/pull/6876",
"diff_url": "https://github.com/huggingface/datasets/pull/6876.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6876.patch",
"merged_at": null
} | Needed to use those in dataset-viewer:
- dev version of hfh https://github.com/huggingface/dataset-viewer/pull/2781: don't spam the Hub with /paths-info requests
- dev version of datasets at https://github.com/huggingface/datasets/pull/6875: don't write overly large logs in the viewer
close https://github.com/huggingface/datasets/issues/6863 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6876/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6874/comments | https://api.github.com/repos/huggingface/datasets/issues/6874/events | https://github.com/huggingface/datasets/pull/6874 | 2,280,717,233 | PR_kwDODunzps5uoOk- | 6,874 | Use pandas ujson in JSON loader to improve performance | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6874). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Before pandas-2.2.0, the function `ujson_loads` was named `loads`: https://github.com/pandas-dev/pandas/blob/v2.1.0/pandas/io/json/__init__.py#L5\r\n```python\r\nimport ujson_loads as loads\r\n```"
] | 2024-05-06T12:01:27 | 2024-05-06T13:05:17 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6874",
"html_url": "https://github.com/huggingface/datasets/pull/6874",
"diff_url": "https://github.com/huggingface/datasets/pull/6874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6874.patch",
"merged_at": null
} | Use pandas ujson in JSON loader to improve performance.
Note that `datasets` has `pandas` as a required dependency, and `pandas` exposes `ujson` via `pd.io.json.ujson_loads`.
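For intuition, a rough micro-benchmark sketch comparing the two decoders (assumes pandas >= 2.2, which exposes `ujson_loads`; the sample record and iteration count are arbitrary):

```python
import json
import timeit

from pandas.io.json import ujson_loads

record = json.dumps({"id": 1, "scores": [0.5] * 64, "text": "x" * 512})

t_std = timeit.timeit(lambda: json.loads(record), number=100_000)
t_ujson = timeit.timeit(lambda: ujson_loads(record), number=100_000)
print(f"stdlib json: {t_std:.2f}s, pandas ujson: {t_ujson:.2f}s")
```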
Fix #6867.
CC: @natolambert | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6874/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6867/comments | https://api.github.com/repos/huggingface/datasets/issues/6867/events | https://github.com/huggingface/datasets/issues/6867 | 2,279,059,787 | I_kwDODunzps6H17FL | 6,867 | Improve performance of JSON loader | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.",
"Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/eval-set-scores/Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback.json) is not in \"records\" orient; instead it has the following structure:\r\n```json\r\n{\r\n \"chat_template\": \"tulu\",\r\n \"id\": [30, 34, 35,...],\r\n \"model\": \"Ray2333/reward-model-Mistral-7B-instruct-Unified-Feedback\",\r\n \"model_type\": \"Seq. Classifier\",\r\n \"results\": [1, 1, 1, ...],\r\n \"scores_chosen\": [4.421875, 1.8916015625, 3.8515625,...],\r\n \"scores_rejected\": [-2.416015625, -1.47265625, -0.9912109375,...],\r\n \"subset\": [\"alpacaeval-easy\", \"alpacaeval-easy\", \"alpacaeval-easy\",...]\r\n \"text_chosen\": [\"<s>[INST] How do I detail a...\",...],\r\n \"text_rejected\": [\"<s>[INST] How do I detail a...\",...]\r\n}\r\n```\r\n\r\nNote that \"records\" orient should be a list (not a dict) with each row as one item of the list:\r\n```json\r\n[\r\n {\"chat_template\": \"tulu\", \"id\": 30,... },\r\n {\"chat_template\": \"tulu\", \"id\": 34,... },\r\n ...\r\n]\r\n```",
"We use a mix (which is a mess), here's an example with the records orient\r\nhttps://huggingface.co/datasets/allenai/reward-bench-results/blob/main/best-of-n/alpaca_eval/tulu-13b/OpenAssistant/oasst-rm-2.1-pythia-1.4b-epoch-2.5.json\r\n\r\nThere are more in that folder, ~40mb maybe?",
"@albertvillanova here's a snippet so you don't need to click\r\n```\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 0\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.076171875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 1\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.87890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 2\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.287109375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 3\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 1.6337890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 4\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 5.27734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 5\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.0625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 6\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.29296875\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 7\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 6.77734375\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 8\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 3.853515625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 9\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.86328125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 10\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 2.890625\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 11\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.70703125\r\n}\r\n{\r\n \"config\": \"top_p=0.9;temp=1.0\",\r\n \"dataset_details\": \"helpful_base\",\r\n \"id\": [\r\n 0,\r\n 12\r\n ],\r\n \"model\": \"allenai/tulu-2-dpo-13b\",\r\n \"scores\": 4.45703125\r\n}\r\n```",
"Thanks again for your feedback, @natolambert.\r\n\r\nHowever, strictly speaking, the last file is not in JSON format but in kind of JSON-Lines like format (although not properly either because there are multiple newline characters within each object). Not even pandas can read that file format.\r\n\r\nAnyway, for JSON-Lines, I would expect that `datasets` and `pandas` have the same performance for JSON Lines files, as both use `pyarrow` under the hood...\r\n\r\nA proper JSON file in records orient should be a list (a JSON array): the first character should be `[`.\r\n\r\nAnyway, I am generating a JSON file from your JSON-Lines file to test performance."
] | 2024-05-04T15:04:16 | 2024-05-14T07:34:58 | null | MEMBER | null | null | null | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that compare different JSON packages, with the Standard Library one among the worst performant:
> - https://github.com/ultrajson/ultrajson#benchmarks
> - https://github.com/ijl/orjson#performance
I remember having a discussion about this and it was decided that it was better not to include an additional dependency on a 3rd-party library.
However:
- We already depend on `pandas` and `pandas` depends on `ujson`: so we have an indirect dependency on `ujson`
- Even if the above were not the case, we could always include `ujson` as an optional extra dependency and check at runtime whether it is installed to decide which library to use, either `json` or `ujson` (sketched below)
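A minimal sketch of that optional-dependency fallback, in plain Python with no `datasets` internals assumed:

```python
# Prefer ujson when it is installed; otherwise fall back to the standard
# library. Both expose a compatible loads(str) interface for this use case.
try:
    import ujson as json_impl
except ImportError:
    import json as json_impl

def json_loads(text: str):
    return json_impl.loads(text)
``` | {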
"url": "https://api.github.com/repos/huggingface/datasets/issues/6867/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6867/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6865/comments | https://api.github.com/repos/huggingface/datasets/issues/6865/events | https://github.com/huggingface/datasets/issues/6865 | 2,277,304,832 | I_kwDODunzps6HvOoA | 6,865 | Example on Semantic segmentation contains bug | {
"login": "ducha-aiki",
"id": 4803565,
"node_id": "MDQ6VXNlcjQ4MDM1NjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4803565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ducha-aiki",
"html_url": "https://github.com/ducha-aiki",
"followers_url": "https://api.github.com/users/ducha-aiki/followers",
"following_url": "https://api.github.com/users/ducha-aiki/following{/other_user}",
"gists_url": "https://api.github.com/users/ducha-aiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ducha-aiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ducha-aiki/subscriptions",
"organizations_url": "https://api.github.com/users/ducha-aiki/orgs",
"repos_url": "https://api.github.com/users/ducha-aiki/repos",
"events_url": "https://api.github.com/users/ducha-aiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/ducha-aiki/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-03T09:40:12 | 2024-05-03T09:40:12 | null | NONE | null | null | null | ### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows an incorrect example with torchvision transforms.
Specifically, as one can see in the screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59aa0e2c-2e3e-415b-9d42-2314044c5aee">
The original example with `albumentations` is correct:
<img width="705" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/27dbd725-cea5-4e48-ba59-7050c3ce17b3">
That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong when applied to segmentation labels - you just cannot mix the two. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations.
The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to [`torchvision.tv_tensors.Mask`](https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask) objects.
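A minimal sketch of that approach (assuming torchvision >= 0.16, where the `v2` transforms and `tv_tensors` are stable; shapes and the class count are arbitrary):

```python
import torch
from torchvision import tv_tensors
from torchvision.transforms import v2

transforms = v2.Compose([
    v2.Resize((256, 256)),  # v2 dispatches to nearest-neighbor for Mask inputs
    v2.RandomHorizontalFlip(p=0.5),
])

image = torch.randint(0, 256, (3, 512, 512), dtype=torch.uint8)
mask = tv_tensors.Mask(torch.randint(0, 21, (512, 512), dtype=torch.uint8))
image, mask = transforms(image, mask)  # geometric ops stay in sync
```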
### Steps to reproduce the bug
Go to the website.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/ea1276d0-d69a-48cf-b9c2-cd61217815ef">
https://huggingface.co/docs/datasets/en/semantic_segmentation
### Expected behavior
Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead.
### Environment info
Irrelevant | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6865/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6865/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6863/comments | https://api.github.com/repos/huggingface/datasets/issues/6863/events | https://github.com/huggingface/datasets/issues/6863 | 2,276,977,534 | I_kwDODunzps6Ht-t- | 6,863 | Revert temporary pin huggingface-hub < 0.23.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 2024-05-03T05:53:55 | 2024-05-03T05:53:55 | null | MEMBER | null | null | null | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6863/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6862/comments | https://api.github.com/repos/huggingface/datasets/issues/6862/events | https://github.com/huggingface/datasets/pull/6862 | 2,276,763,745 | PR_kwDODunzps5ubOoL | 6,862 | Issue 6598: load_dataset broken for data_files on s3 | {
"login": "matstrand",
"id": 544843,
"node_id": "MDQ6VXNlcjU0NDg0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/544843?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matstrand",
"html_url": "https://github.com/matstrand",
"followers_url": "https://api.github.com/users/matstrand/followers",
"following_url": "https://api.github.com/users/matstrand/following{/other_user}",
"gists_url": "https://api.github.com/users/matstrand/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matstrand/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matstrand/subscriptions",
"organizations_url": "https://api.github.com/users/matstrand/orgs",
"repos_url": "https://api.github.com/users/matstrand/repos",
"events_url": "https://api.github.com/users/matstrand/events{/privacy}",
"received_events_url": "https://api.github.com/users/matstrand/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-03T01:43:47 | 2024-05-03T09:04:55 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6862",
"html_url": "https://github.com/huggingface/datasets/pull/6862",
"diff_url": "https://github.com/huggingface/datasets/pull/6862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6862.patch",
"merged_at": null
} | Fixes huggingface/datasets/issues/6598
I've added a new test case and a solution. Before applying the solution, the test case was failing with the same error described in the linked issue. I encountered this issue while following the Hugging Face documentation, trying to perform GPT-2 fine-tuning using `run_clm.py` on SageMaker with a data file stored on S3.
MRE:
```
pip install "datasets[s3]"
python -c "from datasets import load_dataset; load_dataset('csv', data_files={'train': 's3://noaa-gsod-pds/2024/A5125600451.csv'})"
```
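For reference, `load_dataset` also accepts fsspec-style `storage_options`, which is the usual way to pass credentials for non-public buckets (the bucket path and credentials below are placeholders):

```python
# Illustrative only: hypothetical private bucket and dummy credentials.
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={"train": "s3://my-bucket/train.csv"},
    storage_options={"key": "<aws-access-key>", "secret": "<aws-secret-key>"},
)
``` | {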
"url": "https://api.github.com/repos/huggingface/datasets/issues/6862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6862/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6859/comments | https://api.github.com/repos/huggingface/datasets/issues/6859/events | https://github.com/huggingface/datasets/pull/6859 | 2,274,996,774 | PR_kwDODunzps5uVIoZ | 6,859 | Support folder-based datasets with large metadata.jsonl | {
"login": "gbenson",
"id": 580564,
"node_id": "MDQ6VXNlcjU4MDU2NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/580564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gbenson",
"html_url": "https://github.com/gbenson",
"followers_url": "https://api.github.com/users/gbenson/followers",
"following_url": "https://api.github.com/users/gbenson/following{/other_user}",
"gists_url": "https://api.github.com/users/gbenson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gbenson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gbenson/subscriptions",
"organizations_url": "https://api.github.com/users/gbenson/orgs",
"repos_url": "https://api.github.com/users/gbenson/repos",
"events_url": "https://api.github.com/users/gbenson/events{/privacy}",
"received_events_url": "https://api.github.com/users/gbenson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-05-02T09:07:26 | 2024-05-02T09:07:26 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6859",
"html_url": "https://github.com/huggingface/datasets/pull/6859",
"diff_url": "https://github.com/huggingface/datasets/pull/6859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6859.patch",
"merged_at": null
} | I tried creating an `imagefolder` dataset with a 714MB `metadata.jsonl` but got the error below. This pull request fixes the problem by increasing the block size like the message suggests.
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("imagefolder", data_dir="data-for-upload")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/path/to/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
...
File "/path/to/datasets/packaged_modules/folder_based_builder/folder_based_builder.py", line 245, in _read_metadata
return paj.read_json(f)
File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
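The direction of the fix can be illustrated directly with pyarrow: raise the JSON reader's block size so a single long metadata row no longer straddles two blocks (the 64 MiB value and the file name are arbitrary):

```python
# Standalone illustration of pyarrow's block_size knob, not this PR's diff.
import pyarrow.json as paj

read_options = paj.ReadOptions(block_size=64 << 20)  # 64 MiB blocks
with open("metadata.jsonl", "rb") as f:
    table = paj.read_json(f, read_options=read_options)
``` | {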
"url": "https://api.github.com/repos/huggingface/datasets/issues/6859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6859/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6853/comments | https://api.github.com/repos/huggingface/datasets/issues/6853/events | https://github.com/huggingface/datasets/issues/6853 | 2,272,570,000 | I_kwDODunzps6HdKqQ | 6,853 | Support soft links for load_datasets imagefolder | {
"login": "billytcl",
"id": 10386511,
"node_id": "MDQ6VXNlcjEwMzg2NTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10386511?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/billytcl",
"html_url": "https://github.com/billytcl",
"followers_url": "https://api.github.com/users/billytcl/followers",
"following_url": "https://api.github.com/users/billytcl/following{/other_user}",
"gists_url": "https://api.github.com/users/billytcl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/billytcl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billytcl/subscriptions",
"organizations_url": "https://api.github.com/users/billytcl/orgs",
"repos_url": "https://api.github.com/users/billytcl/repos",
"events_url": "https://api.github.com/users/billytcl/events{/privacy}",
"received_events_url": "https://api.github.com/users/billytcl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-30T22:14:29 | 2024-04-30T22:14:29 | null | NONE | null | null | null | ### Feature request
`load_dataset` from a folder of images doesn't seem to support soft links (symlinks). It would be nice if it did, especially during methods development, where image folders are being curated.
### Motivation
Images come from a complex variety of sources, and we'd like to be able to soft-link directly from the originating folders as opposed to copying. Keeping copies of the files invites image-versioning issues and doubles the required disk space.
### Your contribution
N/A | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6853/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6851/comments | https://api.github.com/repos/huggingface/datasets/issues/6851/events | https://github.com/huggingface/datasets/issues/6851 | 2,270,965,503 | I_kwDODunzps6HXC7_ | 6,851 | load_dataset('emotion') UnicodeDecodeError | {
"login": "L-Block-C",
"id": 32314558,
"node_id": "MDQ6VXNlcjMyMzE0NTU4",
"avatar_url": "https://avatars.githubusercontent.com/u/32314558?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/L-Block-C",
"html_url": "https://github.com/L-Block-C",
"followers_url": "https://api.github.com/users/L-Block-C/followers",
"following_url": "https://api.github.com/users/L-Block-C/following{/other_user}",
"gists_url": "https://api.github.com/users/L-Block-C/gists{/gist_id}",
"starred_url": "https://api.github.com/users/L-Block-C/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/L-Block-C/subscriptions",
"organizations_url": "https://api.github.com/users/L-Block-C/orgs",
"repos_url": "https://api.github.com/users/L-Block-C/repos",
"events_url": "https://api.github.com/users/L-Block-C/events{/privacy}",
"received_events_url": "https://api.github.com/users/L-Block-C/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-30T09:25:01 | 2024-04-30T09:25:01 | null | NONE | null | null | null | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
success
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.19.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6851/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6849/comments | https://api.github.com/repos/huggingface/datasets/issues/6849/events | https://github.com/huggingface/datasets/pull/6849 | 2,268,718,355 | PR_kwDODunzps5t_wnu | 6,849 | fix webdataset filename split | {
"login": "Bowser1704",
"id": 43539191,
"node_id": "MDQ6VXNlcjQzNTM5MTkx",
"avatar_url": "https://avatars.githubusercontent.com/u/43539191?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bowser1704",
"html_url": "https://github.com/Bowser1704",
"followers_url": "https://api.github.com/users/Bowser1704/followers",
"following_url": "https://api.github.com/users/Bowser1704/following{/other_user}",
"gists_url": "https://api.github.com/users/Bowser1704/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bowser1704/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bowser1704/subscriptions",
"organizations_url": "https://api.github.com/users/Bowser1704/orgs",
"repos_url": "https://api.github.com/users/Bowser1704/repos",
"events_url": "https://api.github.com/users/Bowser1704/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bowser1704/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-29T10:57:18 | 2024-04-29T11:14:41 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6849",
"html_url": "https://github.com/huggingface/datasets/pull/6849",
"diff_url": "https://github.com/huggingface/datasets/pull/6849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6849.patch",
"merged_at": null
} | Use `os.path.splitext` to parse `field_name`.
This fixes filenames that contain extra dots, like:
```
a.b.jpeg
a.b.txt
```
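A quick illustration of why `os.path.splitext` helps: it splits on the last dot, so extra dots in the basename stay in the key instead of truncating it at the first dot (standard library only):

```python
import os

for name in ["a.b.jpeg", "a.b.txt"]:
    key, ext = os.path.splitext(name)
    print(key, ext)  # -> "a.b .jpeg" then "a.b .txt"
``` | {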
"url": "https://api.github.com/repos/huggingface/datasets/issues/6849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6849/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6848/comments | https://api.github.com/repos/huggingface/datasets/issues/6848/events | https://github.com/huggingface/datasets/issues/6848 | 2,268,622,609 | I_kwDODunzps6HOG8R | 6,848 | Can't Download Common Voice 17.0 hy-AM | {
"login": "mheryerznkanyan",
"id": 31586104,
"node_id": "MDQ6VXNlcjMxNTg2MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/31586104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mheryerznkanyan",
"html_url": "https://github.com/mheryerznkanyan",
"followers_url": "https://api.github.com/users/mheryerznkanyan/followers",
"following_url": "https://api.github.com/users/mheryerznkanyan/following{/other_user}",
"gists_url": "https://api.github.com/users/mheryerznkanyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mheryerznkanyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mheryerznkanyan/subscriptions",
"organizations_url": "https://api.github.com/users/mheryerznkanyan/orgs",
"repos_url": "https://api.github.com/users/mheryerznkanyan/repos",
"events_url": "https://api.github.com/users/mheryerznkanyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/mheryerznkanyan/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Same issue here."
] | 2024-04-29T10:06:02 | 2024-05-13T06:09:30 | null | NONE | null | null | null | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/python3.10/dist-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
ret = run_job(
/usr/local/lib/python3.10/dist-packages/datasets/load.py:1429: FutureWarning: The repository for mozilla-foundation/common_voice_17_0 contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at https://hf.co/datasets/mozilla-foundation/common_voice_17_0
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
warnings.warn(
Reading metadata...: 6180it [00:00, 133224.37it/s]
Generating train split: 0 examples [00:00, ? examples/s]
HuggingFace datasets failed due to some reason (stack trace below).
For certain datasets (eg: MCV), it may be necessary to login to the huggingface-cli (via `huggingface-cli login`).
Once logged in, you need to set `use_auth_token=True` when calling this script.
Traceback error for reference :
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1743, in _prepare_split_single
example = self.info.features.encode_example(record) if self.info.features is not None else record
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1878, in encode_example
return encode_nested_example(self, example)
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in encode_nested_example
{
File "/usr/local/lib/python3.10/dist-packages/datasets/features/features.py", line 1243, in <dictcomp>
{
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in zip_dict
yield key, tuple(d[key] for d in dicts)
File "/usr/local/lib/python3.10/dist-packages/datasets/utils/py_utils.py", line 326, in <genexpr>
yield key, tuple(d[key] for d in dicts)
KeyError: 'sentence_id'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/workspace/nemo/scripts/speech_recognition/convert_hf_dataset_to_nemo.py", line 358, in main
dataset = load_dataset(
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.10/dist-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
from datasets import load_dataset
cv_17 = load_dataset("mozilla-foundation/common_voice_17_0", "hy-AM")
```
### Expected behavior
It works fine with common_voice_16_1
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1042-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6848/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6847/comments | https://api.github.com/repos/huggingface/datasets/issues/6847/events | https://github.com/huggingface/datasets/issues/6847 | 2,268,589,177 | I_kwDODunzps6HN-x5 | 6,847 | [Streaming] Only load requested splits without resolving files for the other splits | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832",
"I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/80919eca-eb6c-407d-8070-52642fdcee54\">\r\n<img width=\"914\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/5219c201-e22e-4536-acc3-a922677785ff\">\r\n\r\n\r\nIt seems to be downloading, loading, and generating splits using the entire dataset."
] | 2024-04-29T09:49:32 | 2024-05-07T04:43:59 | null | MEMBER | null | null | null | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
In `dataset-viewer` the splits are loaded in different jobs so it results in 300 jobs that resolve 300 splits -> 90k calls to `/paths-info` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6847/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6847/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6845/comments | https://api.github.com/repos/huggingface/datasets/issues/6845/events | https://github.com/huggingface/datasets/issues/6845 | 2,265,876,551 | I_kwDODunzps6HDohH | 6,845 | load_dataset doesn't support list column | {
"login": "arthasking123",
"id": 16257131,
"node_id": "MDQ6VXNlcjE2MjU3MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16257131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arthasking123",
"html_url": "https://github.com/arthasking123",
"followers_url": "https://api.github.com/users/arthasking123/followers",
"following_url": "https://api.github.com/users/arthasking123/following{/other_user}",
"gists_url": "https://api.github.com/users/arthasking123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arthasking123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arthasking123/subscriptions",
"organizations_url": "https://api.github.com/users/arthasking123/orgs",
"repos_url": "https://api.github.com/users/arthasking123/repos",
"events_url": "https://api.github.com/users/arthasking123/events{/privacy}",
"received_events_url": "https://api.github.com/users/arthasking123/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-26T14:11:44 | 2024-04-26T14:11:44 | null | NONE | null | null | null | ### Describe the bug
```
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
```
got exception:
```
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
writer.write_table(table)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 585, in write_table
pa_table = table_cast(pa_table, self._schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2295, in table_cast
return cast_table_to_schema(table, schema)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2254, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1802, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2018, in cast_array_to_feature
casted_array_values = _c(array.values, feature[0])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 1804, in wrapper
return func(array, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/table.py", line 2115, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<m.name: string, x.name: string, p.name: string, n.name: string, h.name: string, name: string, c: int64, collect(r.name): list<item: string>, q.name: string, rel.name: string, count(p): int64, 1: int64, p.location: string, max(n.name): null, mn.name: string, p.time: int64, min(q.name): string>
to
{'q.name': Value(dtype='string', id=None), 'mn.name': Value(dtype='string', id=None), 'x.name': Value(dtype='string', id=None), 'p.name': Value(dtype='string', id=None), 'n.name': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None), 'm.name': Value(dtype='string', id=None), 'h.name': Value(dtype='string', id=None), 'count(p)': Value(dtype='int64', id=None), 'rel.name': Value(dtype='string', id=None), 'c': Value(dtype='int64', id=None), 'collect(r.name)': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '1': Value(dtype='int64', id=None), 'p.location': Value(dtype='string', id=None), 'substring(h.name,0,5)': Value(dtype='string', id=None), 'p.time': Value(dtype='int64', id=None)}
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ubuntu/llm/train-2.py", line 150, in <module>
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/load.py", line 2609, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1122, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 1882, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2038, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
```
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
```
### Expected behavior
no exception
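In the meantime, a possible workaround sketch (the file path and column name below are hypothetical) is to bypass schema inference by serializing the uneven struct column to a JSON string before building the dataset:
```
import json

import pandas as pd
from datasets import Dataset

df = pd.read_json("data/train.json", lines=True)  # hypothetical local shard
df["answer"] = df["answer"].map(json.dumps)       # hypothetical column with uneven dict keys
ds = Dataset.from_pandas(df)
```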
### Environment info
python 3.11
datasets 2.19.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6845/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6843/comments | https://api.github.com/repos/huggingface/datasets/issues/6843/events | https://github.com/huggingface/datasets/issues/6843 | 2,265,432,897 | I_kwDODunzps6HB8NB | 6,843 | IterableDataset raises exception instead of retrying | {
"login": "bauwenst",
"id": 145220868,
"node_id": "U_kgDOCKflBA",
"avatar_url": "https://avatars.githubusercontent.com/u/145220868?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bauwenst",
"html_url": "https://github.com/bauwenst",
"followers_url": "https://api.github.com/users/bauwenst/followers",
"following_url": "https://api.github.com/users/bauwenst/following{/other_user}",
"gists_url": "https://api.github.com/users/bauwenst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bauwenst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bauwenst/subscriptions",
"organizations_url": "https://api.github.com/users/bauwenst/orgs",
"repos_url": "https://api.github.com/users/bauwenst/repos",
"events_url": "https://api.github.com/users/bauwenst/events{/privacy}",
"received_events_url": "https://api.github.com/users/bauwenst/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succeed immediately.\r\n- If the Hub has a small outage on the order of minutes, you don't want to retry on the order of hours. \r\n- If the Hub has a prologned outage of several hours, we don't want to keep retrying on the order of minutes.\r\n\r\nThere actually already exists an implementation for (clipped) exponential backoff in the HuggingFace suite ([here](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/utils/_http.py#L306)), but I don't think it is used here.\r\n\r\nThe requirements are basically that you have an initial minimum waiting time and a maximum waiting time, and with each retry, the waiting time is doubled. We don't want to overload your servers with needless retries, especially when they're down :sweat_smile:",
"Oh, I've just remembered that we added retries to the `HfFileSystem` in `huggingface_hub` 0.21.0 (see [this](https://github.com/huggingface/huggingface_hub/blob/61b156a4f2e5fe1a492ed8712b26803e2122bde0/src/huggingface_hub/hf_file_system.py#L703)), so I'll close the linked PR as we don't want to retry the retries :).\r\n\r\nI agree with the exponential backoff suggestion, so I'll open another PR.",
"@mariosasko The call you linked indeed points to the implementation I linked in my previous comment, yes, but it has no configurability. Arguably, you want to have this hidden backoff under the hood that catches small network disturbances on the time scale of seconds -- perhaps even with hardcoded limits as is the case currently -- but you also still want to have a separate backoff on top of that with the configurability as suggested by @lhoestq in [the comment I linked](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229).\r\n\r\nMy particular use-case is that I'm streaming a dataset while training on a university cluster with a very long scheduling queue. This means that when the backoff runs out of retries (which happens in under 30 seconds with the call you linked), I lose my spot on the cluster and have to queue for a whole day or more. Ideally, I should be able to specify that I want to retry for 2 to 3 hours but with more and more time between requests, so that I can smooth over hours-long outages without a setback of days.",
"I also have my runs crash a surprising amount due to the dataloader crashing because of the hub, some way to address this would be nice."
] | 2024-04-26T10:00:43 | 2024-04-30T13:14:13 | null | NONE | null | null | null | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Since a commit by @lhoestq [last week](https://github.com/huggingface/datasets/commit/a188022dc43a76a119d90c03832d51d6e4a94d91), that code lives here:
https://github.com/huggingface/datasets/blob/fe2bea6a4b09b180bd23b88fe96dfd1a11191a4f/src/datasets/utils/file_utils.py#L1097C1-L1111C19
If GitHub code snippets still aren't working, here's a copy:
```python
def read_with_retries(*args, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            out = read(*args, **kwargs)
            break
        except (ClientError, TimeoutError) as err:
            disconnect_err = err
            logger.warning(
                f"Got disconnected from remote data host. Retrying in {config.STREAMING_READ_RETRY_INTERVAL}sec [{retry}/{max_retries}]"
            )
            time.sleep(config.STREAMING_READ_RETRY_INTERVAL)
    else:
        raise ConnectionError("Server Disconnected") from disconnect_err
    return out
With the latest outage, the end of my stack trace looked like this:
```
...
File "/miniconda3/envs/draft/lib/python3.11/site-packages/datasets/download/streaming_download_manager.py", line 342, in read_with_retries
out = read(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 301, in read
return self._buffer.read(size)
^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 505, in read
buf = self._fp.read(io.DEFAULT_BUFFER_SIZE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/gzip.py", line 88, in read
return self.file.read(size)
^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/spec.py", line 1856, in read
out = self.cache._fetch(self.loc, self.loc + length)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/fsspec/caching.py", line 189, in _fetch
self.cache = self.fetcher(start, end) # new block replaces old
^^^^^^^^^^^^^^^^^^^^^^^^
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range
hf_raise_for_status(r)
File "/miniconda3/envs/draft/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/datasets/allenai/c4/resolve/1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00346-of-01024.json.gz
```
Indeed, the code for retries only catches `ClientError`s and `TimeoutError`s, and all other exceptions, *including HuggingFace's own custom HTTP error class*, **are not caught. Nothing is retried,** and instead the exception is propagated upwards immediately.
### Steps to reproduce the bug
Not sure how you reproduce this. Maybe unplug your Ethernet cable while streaming a dataset; the issue is pretty clear from the stack trace.
### Expected behavior
All HTTP errors while iterating a streamable dataset should cause retries.
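For illustration, here is a minimal sketch of a broader retry wrapper with clipped exponential backoff. This is not the library's actual patch; the retried exception set and the backoff parameters are assumptions:
```python
import time

from huggingface_hub.utils import HfHubHTTPError

def read_with_retries(read, *args, max_retries=8, base_wait=1.0, max_wait=60.0, **kwargs):
    disconnect_err = None
    for retry in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except (ConnectionError, TimeoutError, HfHubHTTPError) as err:
            disconnect_err = err
            # Clipped exponential backoff: 1s, 2s, 4s, ... capped at max_wait.
            time.sleep(min(max_wait, base_wait * 2 ** (retry - 1)))
    raise ConnectionError("Server Disconnected") from disconnect_err
```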
### Environment info
Output from `datasets-cli env`:
- `datasets` version: 2.18.0
- Platform: Linux-4.18.0-513.24.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6843/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6842/comments | https://api.github.com/repos/huggingface/datasets/issues/6842/events | https://github.com/huggingface/datasets/issues/6842 | 2,264,692,159 | I_kwDODunzps6G_HW_ | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | {
"login": "jacobjennings",
"id": 1038927,
"node_id": "MDQ6VXNlcjEwMzg5Mjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1038927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobjennings",
"html_url": "https://github.com/jacobjennings",
"followers_url": "https://api.github.com/users/jacobjennings/followers",
"following_url": "https://api.github.com/users/jacobjennings/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobjennings/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobjennings/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobjennings/subscriptions",
"organizations_url": "https://api.github.com/users/jacobjennings/orgs",
"repos_url": "https://api.github.com/users/jacobjennings/repos",
"events_url": "https://api.github.com/users/jacobjennings/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobjennings/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-26T00:14:16 | 2024-04-26T00:14:16 | null | NONE | null | null | null | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons (":") in filenames. These should be converted into alternative strings.
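As a sketch of the kind of substitution suggested above (purely illustrative, not the extractor's actual logic), a filename sanitizer could look like:
```python
import re

def sanitize_for_windows(name: str) -> str:
    # Replace characters Windows forbids in filename components
    # (':' among them) with an underscore.
    return re.sub(r'[<>:"/\\|?*]', "_", name)
```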
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCommons/peoples_speech
### Expected behavior
Does not crash during extraction
### Environment info
Windows 11, NTFS filesystem, Python 3.12
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6842/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6840/comments | https://api.github.com/repos/huggingface/datasets/issues/6840/events | https://github.com/huggingface/datasets/issues/6840 | 2,264,604,766 | I_kwDODunzps6G-yBe | 6,840 | Delete uploaded files from the UI | {
"login": "saicharan2804",
"id": 62512681,
"node_id": "MDQ6VXNlcjYyNTEyNjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/62512681?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saicharan2804",
"html_url": "https://github.com/saicharan2804",
"followers_url": "https://api.github.com/users/saicharan2804/followers",
"following_url": "https://api.github.com/users/saicharan2804/following{/other_user}",
"gists_url": "https://api.github.com/users/saicharan2804/gists{/gist_id}",
"starred_url": "https://api.github.com/users/saicharan2804/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/saicharan2804/subscriptions",
"organizations_url": "https://api.github.com/users/saicharan2804/orgs",
"repos_url": "https://api.github.com/users/saicharan2804/repos",
"events_url": "https://api.github.com/users/saicharan2804/events{/privacy}",
"received_events_url": "https://api.github.com/users/saicharan2804/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 2024-04-25T22:33:57 | 2024-04-25T22:33:57 | null | NONE | null | null | null | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6840/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6837/comments | https://api.github.com/repos/huggingface/datasets/issues/6837/events | https://github.com/huggingface/datasets/issues/6837 | 2,263,273,983 | I_kwDODunzps6G5tH_ | 6,837 | Cannot use cached dataset without Internet connection (or when servers are down) | {
"login": "DionisMuzenitov",
"id": 112088378,
"node_id": "U_kgDOBq5VOg",
"avatar_url": "https://avatars.githubusercontent.com/u/112088378?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DionisMuzenitov",
"html_url": "https://github.com/DionisMuzenitov",
"followers_url": "https://api.github.com/users/DionisMuzenitov/followers",
"following_url": "https://api.github.com/users/DionisMuzenitov/following{/other_user}",
"gists_url": "https://api.github.com/users/DionisMuzenitov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DionisMuzenitov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DionisMuzenitov/subscriptions",
"organizations_url": "https://api.github.com/users/DionisMuzenitov/orgs",
"repos_url": "https://api.github.com/users/DionisMuzenitov/repos",
"events_url": "https://api.github.com/users/DionisMuzenitov/events{/privacy}",
"received_events_url": "https://api.github.com/users/DionisMuzenitov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n \"hf://datasets/allenai/c4@1588ec454efa1a09f29cd18ddd04fe05fc8653a2/en/c4-train.00000-of-01024.json.gz\"\r\n ],\r\n [(\"allenai/c4\", \"1588ec454efa1a09f29cd18ddd04fe05fc8653a2\")],\r\n)\r\ndata_files = DataFilesDict({\"train\": data_files_list})\r\nc4_dataset = datasets.load_dataset(\r\n path=\"allenai/c4\",\r\n data_files=data_files,\r\n split=\"train\",\r\n cache_dir=\"/datesets/cache\",\r\n download_mode=\"reuse_cache_if_exists\",\r\n token=False,\r\n)\r\n```\r\nSecond solution also shows where to find the bug. I suggest that the hashing functions should always use only original parameter `data_files`, and not the one they get after connecting to the server and creating `DataFilesDict`",
"Hi! You need to set the `HF_DATASETS_OFFLINE` env variable to `1` to load cached datasets offline, as explained in the docs [here](https://huggingface.co/docs/datasets/v2.19.0/en/loading#offline).",
"Just tested. It doesn't work, because of the exact problem I described above: hash of dataset config is different.\r\nThe only error difference is the reason why it cannot connect to HuggingFace (now it's 'offline mode is enabled')\r\n![image](https://github.com/huggingface/datasets/assets/112088378/1a7e1720-d711-46e3-9c90-53d52c441e68)\r\n"
] | 2024-04-25T10:48:20 | 2024-04-26T14:27:15 | null | NONE | null | null | null | ### Describe the bug
I want to be able to use cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues).
The reason I can't use it:
The `data_files` argument of `datasets.load_dataset()` gets updated from the server before the hash used for caching is calculated. As a result, when I run the same code with and without Internet, I get a different dataset configuration directory name.
### Steps to reproduce the bug
```
import datasets
c4_dataset = datasets.load_dataset(
    path="allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    cache_dir="/datesets/cache",
    download_mode="reuse_cache_if_exists",
    token=False,
)
```
1. Run this code with the Internet.
2. Run the same code without the Internet.
### Expected behavior
When running without an Internet connection, the loader should be able to get the dataset from the cache.
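In the meantime, one way to sidestep the hash mismatch entirely, assuming you can go online once to materialize the data, is to persist the dataset with `save_to_disk` and reload it with `load_from_disk`, which needs no Hub access:
```
from datasets import load_dataset, load_from_disk

# While online, materialize the split once (local path is illustrative):
ds = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
ds.save_to_disk("/datasets/c4_local")

# Later, fully offline:
ds = load_from_disk("/datasets/c4_local")
```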
### Environment info
- `datasets` version: 2.19.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.13
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6837/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6836/comments | https://api.github.com/repos/huggingface/datasets/issues/6836/events | https://github.com/huggingface/datasets/issues/6836 | 2,262,249,919 | I_kwDODunzps6G1zG_ | 6,836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | {
"login": "ebsmothers",
"id": 24319399,
"node_id": "MDQ6VXNlcjI0MzE5Mzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/24319399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ebsmothers",
"html_url": "https://github.com/ebsmothers",
"followers_url": "https://api.github.com/users/ebsmothers/followers",
"following_url": "https://api.github.com/users/ebsmothers/following{/other_user}",
"gists_url": "https://api.github.com/users/ebsmothers/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ebsmothers/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ebsmothers/subscriptions",
"organizations_url": "https://api.github.com/users/ebsmothers/orgs",
"repos_url": "https://api.github.com/users/ebsmothers/repos",
"events_url": "https://api.github.com/users/ebsmothers/events{/privacy}",
"received_events_url": "https://api.github.com/users/ebsmothers/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Get same error on same datasets too.",
"+1",
"same error"
] | 2024-04-24T21:52:35 | 2024-05-14T04:08:19 | null | NONE | null | null | null | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to reproduce the bug
On 2.18.0, things work fine:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.18.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
On 2.19.0, they do not:
```
# First clear the locally cached dataset
rm -r ~/.cache/huggingface/datasets/lvwerra___stack-exchange-paired
pip install "datasets==2.19.0"
python3
>>> from datasets import load_dataset
>>> dataset = load_dataset('lvwerra/stack-exchange-paired', split='train', data_dir='data/rl')
```
The stack trace I see from the 2.19.0 version of load_dataset can be seen [here](https://gist.github.com/ebsmothers/f9b1f1949bee7030a8d7bb8a491550d2).
(Maybe unsurprisingly) if I do not delete the cache first, I am able to load the dataset successfully, so I suspect the cause is somewhere in the download logic.
### Expected behavior
Download the dataset successfully :)
### Environment info
- `datasets` version: 2.19.0
- Platform: Linux-5.12.0-0_fbk16_zion_7661_geb00762ce6d2-x86_64-with-glibc2.34
- Python version: 3.11.9
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.3.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6836/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6835/comments | https://api.github.com/repos/huggingface/datasets/issues/6835/events | https://github.com/huggingface/datasets/pull/6835 | 2,261,079,263 | PR_kwDODunzps5tl2fc | 6,835 | LargeListType support #6834 | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6835). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Fixed the conversion from `pyarrow` to `python` `Sequence` features. \r\n\r\nThere is still an issue that if `features` are passed the `Sequence` always forces conversion to `ListArray`.\r\nThis probably causes issues if the `LargeListArray` is actually needed.\r\n\r\nThere doesn't seem to be a great solution since this list is created solely on the `schema` for `Sequence`.\r\nOne solution would be to always use `LargeListArray` instead.\r\n"
] | 2024-04-24T11:34:24 | 2024-04-30T13:16:14 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6835",
"html_url": "https://github.com/huggingface/datasets/pull/6835",
"diff_url": "https://github.com/huggingface/datasets/pull/6835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6835.patch",
"merged_at": null
} | Fixes #6834 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6835/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6834/comments | https://api.github.com/repos/huggingface/datasets/issues/6834/events | https://github.com/huggingface/datasets/issues/6834 | 2,261,078,104 | I_kwDODunzps6GxVBY | 6,834 | largelisttype not supported (.from_polars()) | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"repos_url": "https://api.github.com/users/Modexus/repos",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-24T11:33:43 | 2024-04-24T12:06:37 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_polars(df)
```
### Expected behavior
Convert LargeListType to list.
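Until large list types are handled natively, one possible interim route (a hedged sketch that hops through pandas and may not preserve exact dtypes) is:
```python
import datasets
import polars as pl

df = pl.DataFrame({"list": [[1, 2]]})
# polars -> pandas -> datasets sidesteps the large_list arrow schema
ds = datasets.Dataset.from_pandas(df.to_pandas())
```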
### Environment info
- `datasets` version: 2.19.1.dev0
- Platform: Linux-6.8.7-200.fc39.x86_64-x86_64-with-glibc2.38
- Python version: 3.12.2
- `huggingface_hub` version: 0.22.2
- PyArrow version: 16.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.3.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6834/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6833/comments | https://api.github.com/repos/huggingface/datasets/issues/6833/events | https://github.com/huggingface/datasets/issues/6833 | 2,259,731,274 | I_kwDODunzps6GsMNK | 6,833 | Super slow iteration with trivial custom transform | {
"login": "xslittlegrass",
"id": 2780075,
"node_id": "MDQ6VXNlcjI3ODAwNzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2780075?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xslittlegrass",
"html_url": "https://github.com/xslittlegrass",
"followers_url": "https://api.github.com/users/xslittlegrass/followers",
"following_url": "https://api.github.com/users/xslittlegrass/following{/other_user}",
"gists_url": "https://api.github.com/users/xslittlegrass/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xslittlegrass/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xslittlegrass/subscriptions",
"organizations_url": "https://api.github.com/users/xslittlegrass/orgs",
"repos_url": "https://api.github.com/users/xslittlegrass/repos",
"events_url": "https://api.github.com/users/xslittlegrass/events{/privacy}",
"received_events_url": "https://api.github.com/users/xslittlegrass/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=True,num_proc =50,remove_columns=train_dataset.features.keys(),desc='tokenize',keep_in_memory=True)\r\n\r\n```\r\nAfter this train_dataset will be like\r\n```python\r\nDataset({\r\n features: ['input_ids', 'labels'],\r\n num_rows: 51760\r\n})\r\n```\r\nIn which input_ids and labels are both List[int]\r\nHowever, per iter on dataset cost 7.412479639053345s ……?\r\n```python\r\nfor j in tqdm(range(len(train_dataset)),desc='first stage'):\r\n input_id,label=train_dataset['input_ids'][j],train_dataset['labels'][j]\r\n\r\n``` ",
"The transform currently replaces the numpy formatting.\r\n\r\nSo you're back to copying data to long python lists which is super slow.\r\n\r\nIt would be cool for the transform to not remove the formatting in this case, but this requires a few changes in the lib"
] | 2024-04-23T20:40:59 | 2024-05-04T11:24:37 | null | NONE | null | null | null | ### Describe the bug
Iterating the dataset is roughly 10x slower when a trivial transform is applied:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"a": a}, features=features).with_format('numpy')
def transform(batch):
    return batch
ds2 = ds1.with_transform(transform)
%time sum(1 for _ in ds1)
%time sum(1 for _ in ds2)
```
```
CPU times: user 472 ms, sys: 319 ms, total: 791 ms
Wall time: 794 ms
CPU times: user 9.32 s, sys: 443 ms, total: 9.76 s
Wall time: 9.78 s
```
In my real code I'm using `set_transform` to apply some post-processing on the fly to the 2D array, but it significantly slows down the dataset even when the transform itself is trivial.
Related issue: https://github.com/huggingface/datasets/issues/5841
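One workaround sketch, assuming the post-processing can live outside the dataset (e.g. in a collate step), is to keep the fast numpy formatting and apply the transform at consumption time, reusing `ds1` from the snippet above:
```
ds_np = ds1.with_format("numpy")  # keeps the fast numpy formatting path

def postprocess(example):
    # do the real post-processing here; example["a"] is already a numpy array
    return example

for example in ds_np:
    example = postprocess(example)
```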
### Steps to reproduce the bug
Use code in the description to reproduce.
### Expected behavior
A trivial custom transform in the example should not slow down the dataset iteration.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-79-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.12.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6833/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6833/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6832/comments | https://api.github.com/repos/huggingface/datasets/issues/6832/events | https://github.com/huggingface/datasets/pull/6832 | 2,258,761,447 | PR_kwDODunzps5teFoJ | 6,832 | Support downloading specific splits in `load_dataset` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6832). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-04-23T12:32:27 | 2024-04-30T08:55:28 | null | COLLABORATOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6832",
"html_url": "https://github.com/huggingface/datasets/pull/6832",
"diff_url": "https://github.com/huggingface/datasets/pull/6832.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6832.patch",
"merged_at": null
} | This PR builds on https://github.com/huggingface/datasets/pull/6639 to support downloading only the specified splits in `load_dataset`. For this to work, a builder's `_split_generators` needs to be able to accept the requested splits (as a list) via a `splits` argument to avoid processing the non-requested ones. Also, the builder has to define a `_available_splits` method that lists all the possible `splits` values.
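As a hypothetical sketch of what a builder opting into this could look like (the method names follow the description above; exact signatures may differ in the final PR):
```python
import datasets

class MyBuilder(datasets.GeneratorBasedBuilder):
    def _available_splits(self):
        # All split names this builder can ever produce.
        return ["train", "validation", "test"]

    def _split_generators(self, dl_manager, splits):
        # Only download/process the splits that were actually requested.
        return [
            datasets.SplitGenerator(name=split, gen_kwargs={"split": split})
            for split in splits
        ]
```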
Close https://github.com/huggingface/datasets/issues/4101, close https://github.com/huggingface/datasets/issues/2538 (I'm probably missing some)
Should also make it possible to address https://github.com/huggingface/datasets/issues/6793 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6832/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6832/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6829/comments | https://api.github.com/repos/huggingface/datasets/issues/6829/events | https://github.com/huggingface/datasets/issues/6829 | 2,258,424,577 | I_kwDODunzps6GnNMB | 6,829 | Load and save from/to disk no longer accept pathlib.Path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 2024-04-23T09:44:45 | 2024-04-23T09:44:46 | null | MEMBER | null | null | null | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem
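A hedged sketch of the kind of fix implied here, with illustrative placeholder values, is to stringify the path before the fsspec call:
```python
from pathlib import Path
from fsspec.core import url_to_fs

dataset_path = Path("my_dataset")  # illustrative; may be str or pathlib.Path
storage_options = None

fs, _ = url_to_fs(str(dataset_path), **(storage_options or {}))
```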
This change was introduced in:
- #6704 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6829/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6828/comments | https://api.github.com/repos/huggingface/datasets/issues/6828/events | https://github.com/huggingface/datasets/pull/6828 | 2,258,420,421 | PR_kwDODunzps5tc55y | 6,828 | Support PathLike input in save_to_disk / load_from_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6828). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | 2024-04-23T09:42:38 | 2024-04-23T11:05:52 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6828",
"html_url": "https://github.com/huggingface/datasets/pull/6828",
"diff_url": "https://github.com/huggingface/datasets/pull/6828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6828.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6828/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6827/comments | https://api.github.com/repos/huggingface/datasets/issues/6827/events | https://github.com/huggingface/datasets/issues/6827 | 2,254,011,833 | I_kwDODunzps6GWX25 | 6,827 | Loading a remote dataset fails in the last release (v2.19.0) | {
"login": "zrthxn",
"id": 35369637,
"node_id": "MDQ6VXNlcjM1MzY5NjM3",
"avatar_url": "https://avatars.githubusercontent.com/u/35369637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zrthxn",
"html_url": "https://github.com/zrthxn",
"followers_url": "https://api.github.com/users/zrthxn/followers",
"following_url": "https://api.github.com/users/zrthxn/following{/other_user}",
"gists_url": "https://api.github.com/users/zrthxn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zrthxn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zrthxn/subscriptions",
"organizations_url": "https://api.github.com/users/zrthxn/orgs",
"repos_url": "https://api.github.com/users/zrthxn/repos",
"events_url": "https://api.github.com/users/zrthxn/events{/privacy}",
"received_events_url": "https://api.github.com/users/zrthxn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-19T21:11:58 | 2024-04-19T21:13:42 | null | NONE | null | null | null | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token=token)
```
I get the following error
![Screenshot 2024-04-19 at 11 03 07 PM](https://github.com/huggingface/datasets/assets/35369637/8dce757f-08ff-45dd-85b5-890fced7c5bc)
Now you can see that the URL that it is trying to reach has the JSON object of the dataset split appended to the base URL. I think this may be due to a newly introduced issue.
I did not have this issue with the previous version of datasets. Everything was fine for me yesterday; after the release 12 hours ago, this seems to have broken. Also, the dataset in question runs custom code, and I checked that there have been no commits to the dataset on Hugging Face in 6 months.
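Given that 2.18.0 worked, a plausible stopgap (an assumption on my part, not a confirmed fix) is to pin the previous release with `pip install "datasets==2.18.0"` until this is resolved.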
### Steps to reproduce the bug
Since this happened with one particular dataset for me, I am listing steps to use that dataset.
1. Open https://huggingface.co/datasets/speechcolab/gigaspeech and fill the form to get access.
2. Create a token on your huggingface account with read access.
3. Run the following line, substituing `<your_token_here>` with your token.
```
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test", token="<your_token_here>")
```
### Expected behavior
Be able to load the dataset in question.
### Environment info
datasets == 2.19.0
python == 3.10
kernel == Linux 6.1.58+ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6827/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6823/comments | https://api.github.com/repos/huggingface/datasets/issues/6823/events | https://github.com/huggingface/datasets/issues/6823 | 2,250,775,569 | I_kwDODunzps6GKBwR | 6,823 | Loading problems of Datasets with a single shard | {
"login": "andjoer",
"id": 60151338,
"node_id": "MDQ6VXNlcjYwMTUxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/60151338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andjoer",
"html_url": "https://github.com/andjoer",
"followers_url": "https://api.github.com/users/andjoer/followers",
"following_url": "https://api.github.com/users/andjoer/following{/other_user}",
"gists_url": "https://api.github.com/users/andjoer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andjoer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andjoer/subscriptions",
"organizations_url": "https://api.github.com/users/andjoer/orgs",
"repos_url": "https://api.github.com/users/andjoer/repos",
"events_url": "https://api.github.com/users/andjoer/events{/privacy}",
"received_events_url": "https://api.github.com/users/andjoer/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 2024-04-18T13:59:00 | 2024-04-18T17:51:08 | null | NONE | null | null | null | ### Describe the bug
When a dataset is saved to disk with a single shard, it is not loaded back the same way as one saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. Everything works well when the range of the loop is 10000, but it fails when it is 1000.
```
from PIL import Image
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset
def load_image():
    # Generate random noise image
    noise = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
    return Image.fromarray(noise)

def create_dataset():
    input_images = []
    output_images = []
    text_prompts = []
    for _ in range(10000):  # this is the problematic parameter
        input_images.append(load_image())
        output_images.append(load_image())
        text_prompts.append('test prompt')
    data = {'input_image': input_images, 'output_image': output_images, 'text_prompt': text_prompts}
    dataset = Dataset.from_dict(data)
    return DatasetDict({'train': dataset})
dataset = create_dataset()
print('dataset before saving')
print(dataset)
print(dataset['train'].column_names)
dataset.save_to_disk('test_ds')
print('dataset after loading')
dataset_loaded = load_dataset('test_ds')
print(dataset_loaded)
print(dataset_loaded['train'].column_names)
```
The output for 1000 iterations is:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 1000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (1/1 shards): 100%|█| 1000/1000 [00:00<00:00, 5156.00 example
dataset after loading
Generating train split: 1 examples [00:00, 230.52 examples/s]
DatasetDict({
train: Dataset({
features: ['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split'],
num_rows: 1
})
})
['_data_files', '_fingerprint', '_format_columns', '_format_kwargs', '_format_type', '_output_all_columns', '_split']
```
For 10000 iterations (8 shards) it is correct:
```
dataset before saving
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
Saving the dataset (8/8 shards): 100%|█| 10000/10000 [00:01<00:00, 6237.68 examp
dataset after loading
Generating train split: 10000 examples [00:00, 10773.16 examples/s]
DatasetDict({
train: Dataset({
features: ['input_image', 'output_image', 'text_prompt'],
num_rows: 10000
})
})
['input_image', 'output_image', 'text_prompt']
```
### Expected behavior
The procedure should work the same for a dataset with one shard as for one with multiple shards.
### Environment info
- `datasets` version: 2.18.0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
Edit: I looked in the source code of load.py in datasets. I should have used "load_from_disk", and it indeed works that way. But ideally `load_dataset` would have raised an error, the same way it does when I call it with a path:
```
if Path(path, config.DATASET_STATE_JSON_FILENAME).exists():
    raise ValueError(
        "You are trying to load a dataset that was saved using `save_to_disk`. "
        "Please use `load_from_disk` instead."
    )
```
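For reference, the round trip with the intended API, reusing `dataset` from the script above, works the same for one shard or many:
```
from datasets import load_from_disk

dataset.save_to_disk('test_ds')
dataset_loaded = load_from_disk('test_ds')
```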
Nevertheless, I find it interesting that it works just fine, without any warning, when there are multiple shards. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6823/timeline | null | null | false |