url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | closed_by | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7180/comments | https://api.github.com/repos/huggingface/datasets/issues/7180/events | https://github.com/huggingface/datasets/issues/7180 | 2,554,244,750 | I_kwDODunzps6YPq6O | 7,180 | Memory leak when wrapping datasets into PyTorch Dataset without explicit deletion | {
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"> I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.\r\n\r\nDatasets are memory mapped so they work like SWAP memory. In particular as long as you have RAM available the data will stay in RAM, and get paged out once your system needs RAM for something else (no OOM).\r\n\r\nrelated: https://github.com/huggingface/datasets/issues/4883"
] | "2024-09-28T14:00:47Z" | "2024-09-30T12:07:56Z" | "2024-09-30T12:07:56Z" | NONE | null | ### Describe the bug
I've encountered a memory leak when wrapping the HuggingFace dataset into a PyTorch Dataset. The RAM usage constantly increases during iteration if items are not explicitly deleted after use.
### Steps to reproduce the bug
Steps to reproduce:
Create a PyTorch Dataset wrapper for 'nebula/cc12m':
```
import io

from torch.utils.data import Dataset
from tqdm import tqdm
from datasets import load_dataset
from torchvision import transforms
from PIL import Image

Image.MAX_IMAGE_PIXELS = None

class CC12M(Dataset):
    def __init__(self, path_or_name='nebula/cc12m', split='train', transform=None, single_caption=True):
        self.raw_dataset = load_dataset(path_or_name)[split]
        if transform is None:
            self.transform = transforms.Compose([
                transforms.Resize((224, 224)),
                transforms.CenterCrop(224),
                transforms.ToTensor(),
                transforms.Normalize(
                    mean=[0.48145466, 0.4578275, 0.40821073],
                    std=[0.26862954, 0.26130258, 0.27577711]
                )
            ])
        else:
            self.transform = transforms.Compose(transform)
        self.single_caption = single_caption
        self.length = len(self.raw_dataset)

    def __len__(self):
        return self.length

    def __getitem__(self, index):
        item = self.raw_dataset[index]
        caption = item['txt']
        with io.BytesIO(item['webp']) as buffer:
            image = Image.open(buffer).convert('RGB')
        if self.transform:
            image = self.transform(image)
        # del item  # Uncomment this line to prevent the memory leak
        return image, caption
```
Iterate through the dataset without the `del item` line in `__getitem__`.
Observe RAM usage increasing constantly.
Add `del item` at the end of `__getitem__`:
```
def __getitem__(self, index):
    item = self.raw_dataset[index]
    caption = item['txt']
    with io.BytesIO(item['webp']) as buffer:
        image = Image.open(buffer).convert('RGB')
    if self.transform:
        image = self.transform(image)
    del item  # This line prevents the memory leak
    return image, caption
```
Iterate through the dataset again and observe that RAM usage remains stable.
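For reference, a minimal sketch (not part of the original report) for quantifying this, assuming `psutil` is installed and the `CC12M` class above is defined:
```
import os

import psutil

# Track the resident set size (RSS) of the current process while iterating.
dataset = CC12M()
process = psutil.Process(os.getpid())
for i in range(len(dataset)):
    image, caption = dataset[i]
    if i % 1000 == 0:
        print(f"step {i}: RSS = {process.memory_info().rss / 1e9:.2f} GB")
```
As the maintainer's comment notes, a growing RSS here may reflect memory-mapped pages cached in RAM while memory is available, rather than a true leak.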
### Expected behavior
Expected behavior:
RAM usage should remain stable during iteration without needing to explicitly delete items.
Actual behavior:
RAM usage constantly increases unless items are explicitly deleted after use.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-4.18.0-513.5.1.el8_9.x86_64-x86_64-with-glibc2.28
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
| {
"avatar_url": "https://avatars.githubusercontent.com/u/38123329?v=4",
"events_url": "https://api.github.com/users/iamwangyabin/events{/privacy}",
"followers_url": "https://api.github.com/users/iamwangyabin/followers",
"following_url": "https://api.github.com/users/iamwangyabin/following{/other_user}",
"gists_url": "https://api.github.com/users/iamwangyabin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iamwangyabin",
"id": 38123329,
"login": "iamwangyabin",
"node_id": "MDQ6VXNlcjM4MTIzMzI5",
"organizations_url": "https://api.github.com/users/iamwangyabin/orgs",
"received_events_url": "https://api.github.com/users/iamwangyabin/received_events",
"repos_url": "https://api.github.com/users/iamwangyabin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iamwangyabin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamwangyabin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iamwangyabin",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7180/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7179/comments | https://api.github.com/repos/huggingface/datasets/issues/7179/events | https://github.com/huggingface/datasets/pull/7179 | 2,552,387,980 | PR_kwDODunzps585Jcd | 7,179 | Support Python 3.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7179). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-27T08:55:44Z" | "2024-10-08T16:21:06Z" | "2024-10-08T16:21:03Z" | MEMBER | null | Support Python 3.11.
Fix #7178. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7179/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7179.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7179",
"merged_at": "2024-10-08T16:21:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7179.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7179"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7178/comments | https://api.github.com/repos/huggingface/datasets/issues/7178/events | https://github.com/huggingface/datasets/issues/7178 | 2,552,378,330 | I_kwDODunzps6YIjPa | 7,178 | Support Python 3.11 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-27T08:50:47Z" | "2024-10-08T16:21:04Z" | "2024-10-08T16:21:04Z" | MEMBER | null | Support Python 3.11: https://peps.python.org/pep-0664/ | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7178/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7177/comments | https://api.github.com/repos/huggingface/datasets/issues/7177/events | https://github.com/huggingface/datasets/pull/7177 | 2,552,371,082 | PR_kwDODunzps585Fx2 | 7,177 | Fix release instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7177). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-27T08:47:01Z" | "2024-09-27T08:57:35Z" | "2024-09-27T08:57:32Z" | MEMBER | null | Fix release instructions.
During the last release, I had to make this additional update. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7177/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7177",
"merged_at": "2024-09-27T08:57:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7177"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7176/comments | https://api.github.com/repos/huggingface/datasets/issues/7176/events | https://github.com/huggingface/datasets/pull/7176 | 2,551,025,564 | PR_kwDODunzps580hTn | 7,176 | fix grammar in fingerprint.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-09-26T16:13:42Z" | "2024-09-26T16:13:42Z" | null | CONTRIBUTOR | null | I see this error all the time and it was starting to get to me. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7176/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7176.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7176",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7176.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7176"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7175/comments | https://api.github.com/repos/huggingface/datasets/issues/7175/events | https://github.com/huggingface/datasets/issues/7175 | 2,550,957,337 | I_kwDODunzps6YDIUZ | 7,175 | [FSTimeoutError] load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4",
"events_url": "https://api.github.com/users/cosmo3769/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmo3769/followers",
"following_url": "https://api.github.com/users/cosmo3769/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cosmo3769",
"id": 53268607,
"login": "cosmo3769",
"node_id": "MDQ6VXNlcjUzMjY4NjA3",
"organizations_url": "https://api.github.com/users/cosmo3769/orgs",
"received_events_url": "https://api.github.com/users/cosmo3769/received_events",
"repos_url": "https://api.github.com/users/cosmo3769/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cosmo3769",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Is this `FSTimeoutError` due to download network issue from remote resource (from where it is being accessed)?",
"It seems to happen for all datasets, not just a specific one, and especially for versions after 3.0. (3.0.0, 3.0.1 have this problem)\r\n\r\nI had the same error on a different dataset, but after downgrading to datasets==2.21.0, the problem was solved.",
"Same as https://github.com/huggingface/datasets/issues/7164\r\n\r\nThis dataset is made of a python script that downloads data from elsewhere than HF, so availability depends on the original host. Ultimately it would be nice to host the files of this dataset on HF\r\n\r\nin `datasets` <3.0 there were lots of mechanisms that got removed after the decision to make datasets with python loading scripts legacy for security and maintenance reasons (we only do very basic support now)",
"@lhoestq Thank you for the clarification! Closing the issue.",
"I'm getting this too, and also at 5 minutes. But for `CSTR-Edinburgh/vctk`, so it's not just this dataset, it seems to be a timeout that was introduced and needs to be raised. The progress bar was moving along just fine before the timeout, and I get more or less of it depending on how fast the network is.",
"You can change the `aiohttp` timeout from 5min to 1h like this:\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```"
] | "2024-09-26T15:42:29Z" | "2024-10-26T16:12:57Z" | "2024-09-30T17:28:35Z" | NONE | null | ### Describe the bug
When using `load_dataset` to load [HuggingFaceM4/VQAv2](https://huggingface.co/datasets/HuggingFaceM4/VQAv2), I am getting `FSTimeoutError`.
### Error
```
TimeoutError:
The above exception was the direct cause of the following exception:
FSTimeoutError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/fsspec/asyn.py](https://klh9mr78js-496ff2e9c6d22116-0-colab.googleusercontent.com/outputframe.html?vrz=colab_20240924-060116_RC00_678132060#) in sync(loop, func, timeout, *args, **kwargs)
99 if isinstance(return_result, asyncio.TimeoutError):
100 # suppress asyncio.TimeoutError, raise FSTimeoutError
--> 101 raise FSTimeoutError from return_result
102 elif isinstance(return_result, BaseException):
103 raise return_result
FSTimeoutError:
```
It usually fails around 5-6 GB.
<img width="847" alt="Screenshot 2024-09-26 at 9 10 19 PM" src="https://github.com/user-attachments/assets/ff91995a-fb55-4de6-8214-94025d6c8470">
### Steps to reproduce the bug
To reproduce it, run this in a Colab notebook:
```
!pip install -q -U datasets
from datasets import load_dataset
ds = load_dataset('HuggingFaceM4/VQAv2', split="train[:10%]")
```
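A workaround suggested later in the thread is to raise the `aiohttp` client timeout (which defaults to 5 minutes) via `storage_options`; a sketch applied to this repro:
```
import aiohttp
from datasets import load_dataset

# Raise the aiohttp total timeout from the ~5 min default to 1 hour.
ds = load_dataset(
    "HuggingFaceM4/VQAv2",
    split="train[:10%]",
    storage_options={"client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)}},
)
```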
### Expected behavior
It should download properly.
### Environment info
Using Colab Notebook. | {
"avatar_url": "https://avatars.githubusercontent.com/u/53268607?v=4",
"events_url": "https://api.github.com/users/cosmo3769/events{/privacy}",
"followers_url": "https://api.github.com/users/cosmo3769/followers",
"following_url": "https://api.github.com/users/cosmo3769/following{/other_user}",
"gists_url": "https://api.github.com/users/cosmo3769/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cosmo3769",
"id": 53268607,
"login": "cosmo3769",
"node_id": "MDQ6VXNlcjUzMjY4NjA3",
"organizations_url": "https://api.github.com/users/cosmo3769/orgs",
"received_events_url": "https://api.github.com/users/cosmo3769/received_events",
"repos_url": "https://api.github.com/users/cosmo3769/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cosmo3769/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cosmo3769/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cosmo3769",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7175/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7174/comments | https://api.github.com/repos/huggingface/datasets/issues/7174/events | https://github.com/huggingface/datasets/pull/7174 | 2,549,892,315 | PR_kwDODunzps58wluR | 7,174 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7174). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-26T08:30:11Z" | "2024-09-26T08:32:39Z" | "2024-09-26T08:30:21Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7174/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7174",
"merged_at": "2024-09-26T08:30:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7174"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7173/comments | https://api.github.com/repos/huggingface/datasets/issues/7173/events | https://github.com/huggingface/datasets/pull/7173 | 2,549,882,529 | PR_kwDODunzps58wjjc | 7,173 | Release: 3.0.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7173). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-26T08:25:54Z" | "2024-09-26T08:28:29Z" | "2024-09-26T08:26:03Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7173/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7173/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7173.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7173",
"merged_at": "2024-09-26T08:26:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7173.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7173"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7172/comments | https://api.github.com/repos/huggingface/datasets/issues/7172/events | https://github.com/huggingface/datasets/pull/7172 | 2,549,781,691 | PR_kwDODunzps58wNQ7 | 7,172 | Add torchdata as a regular test dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7172). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-26T07:45:55Z" | "2024-09-26T08:12:12Z" | "2024-09-26T08:05:40Z" | MEMBER | null | Add `torchdata` as a regular test dependency.
Note that previously, `torchdata` was installed from its GitHub repo, and its current main branch (0.10.0.dev) requires Python >= 3.9.
Also note that a release was recently published: 0.8.0, on Jul 31, 2024.
Fix #7171. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7172/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7172.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7172",
"merged_at": "2024-09-26T08:05:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7172.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7172"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7171/comments | https://api.github.com/repos/huggingface/datasets/issues/7171/events | https://github.com/huggingface/datasets/issues/7171 | 2,549,738,919 | I_kwDODunzps6X-e2n | 7,171 | CI is broken: No solution found when resolving dependencies | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-26T07:24:58Z" | "2024-09-26T08:05:41Z" | "2024-09-26T08:05:41Z" | MEMBER | null | See: https://github.com/huggingface/datasets/actions/runs/11046967444/job/30687294297
```
Run uv pip install --system -r additional-tests-requirements.txt --no-deps
× No solution found when resolving dependencies:
╰─▶ Because the current Python version (3.8.18) does not satisfy Python>=3.9
and torchdata==0.10.0a0+1a98f21 depends on Python>=3.9, we can conclude
that torchdata==0.10.0a0+1a98f21 cannot be used.
And because only torchdata==0.10.0a0+1a98f21 is available and
you require torchdata, we can conclude that your requirements are
unsatisfiable.
Error: Process completed with exit code 1.
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7171/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7170/comments | https://api.github.com/repos/huggingface/datasets/issues/7170/events | https://github.com/huggingface/datasets/pull/7170 | 2,546,944,016 | PR_kwDODunzps58mfF5 | 7,170 | Support JSON lines with missing columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7170). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-25T05:08:15Z" | "2024-09-26T06:42:09Z" | "2024-09-26T06:42:07Z" | MEMBER | null | Support JSON lines with missing columns.
Fix #7169.
The implemented test raised:
```
datasets.table.CastError: Couldn't cast
age: int64
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
because column names don't match
```
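For illustration, a minimal sketch of the scenario this PR addresses (the file contents and feature names below are hypothetical, chosen to match the error above): a JSON Lines file in which every record is missing the `name` column, loaded with explicit features.
```
import json

from datasets import Features, Value, load_dataset

# Every record in the file lacks the "name" column declared in the features.
with open("data.jsonl", "w") as f:
    f.write(json.dumps({"age": 25}) + "\n")
    f.write(json.dumps({"age": 30}) + "\n")

features = Features({"age": Value("int32"), "name": Value("string")})
ds = load_dataset("json", data_files="data.jsonl", features=features)
print(ds["train"][0])  # after this fix: {'age': 25, 'name': None}
```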
Related to:
- #7160
- #7162 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7170/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7170",
"merged_at": "2024-09-26T06:42:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7170"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7169/comments | https://api.github.com/repos/huggingface/datasets/issues/7169/events | https://github.com/huggingface/datasets/issues/7169 | 2,546,894,076 | I_kwDODunzps6XzoT8 | 7,169 | JSON lines with missing columns raise CastError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-25T04:43:28Z" | "2024-09-26T06:42:08Z" | "2024-09-26T06:42:08Z" | MEMBER | null | JSON lines with missing columns raise CastError:
> CastError: Couldn't cast ... to ... because column names don't match
Related to:
- #7159
- #7161 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7169/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7168/comments | https://api.github.com/repos/huggingface/datasets/issues/7168/events | https://github.com/huggingface/datasets/issues/7168 | 2,546,710,631 | I_kwDODunzps6Xy7hn | 7,168 | sd1.5 diffusers controlnet training script gives new error | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"not sure why the issue is formatting oddly",
"I guess this is a dupe of\r\n\r\nhttps://github.com/huggingface/datasets/issues/7071",
"this turned out to be because of a bad image in dataset"
] | "2024-09-25T01:42:49Z" | "2024-09-30T05:24:03Z" | "2024-09-30T05:24:02Z" | NONE | null | ### Describe the bug
This will randomly pop up during training now
```
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1192, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet.py", line 1041, in main
for step, batch in enumerate(train_dataloader):
File "/usr/local/lib/python3.11/dist-packages/accelerate/data_loader.py", line 561, in __iter__
next_batch = next(dataloader_iter)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/dataloader.py", line 673, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/data/_utils/fetch.py", line 50, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2746, in __getitems__
batch = self.__getitem__(keys)
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2742, in __getitem__
return self._getitem(key)
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 2727, in _getitem
formatted_output = format_table(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 639, in format_table
return formatter(pa_table, query_type=query_type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 407, in __call__
return self.format_batch(pa_table)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 521, in format_batch
batch = self.python_features_decoder.decode_batch(batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/formatting/formatting.py", line 228, in decode_batch
return self.features.decode_batch(batch) if self.features else batch
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2084, in decode_batch
[
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 2085, in <listcomp>
decode_nested_example(self[column_name], value, token_per_repo_id=token_per_repo_id)
File "/usr/local/lib/python3.11/dist-packages/datasets/features/features.py", line 1403, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/features/image.py", line 188, in decode_example
image.load() # to avoid "Too many open files" errors
```
### Steps to reproduce the bug
Train with the diffusers sd1.5 ControlNet example script.
This pops up randomly; in the wandb chart below you can see where I manually resumed the run each time this error appeared.
![image](https://github.com/user-attachments/assets/87e9a6af-cb3c-4398-82e7-d6a90add8d31)
### Expected behavior
Training should continue without the above error.
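Per the resolution comment, the root cause was a single corrupt image in the dataset. Below is a sketch for locating such files before training; the `imagefolder` path and the `image` column name are placeholder assumptions, not taken from the report:

```python
import io

import PIL.Image
from datasets import Image, load_dataset

# Placeholder: point this at your own training data.
ds = load_dataset("imagefolder", data_dir="./my_images", split="train")
# Yield raw bytes/paths instead of decoded images so iteration itself cannot crash.
ds = ds.cast_column("image", Image(decode=False))

for i, example in enumerate(ds):
    img = example["image"]
    try:
        if img["bytes"] is not None:
            PIL.Image.open(io.BytesIO(img["bytes"])).load()
        else:
            PIL.Image.open(img["path"]).load()
    except Exception as e:
        print(f"corrupt image at index {i}: {e}")
```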
### Environment info
- datasets version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- huggingface_hub version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- fsspec version: 2024.6.1
Training on 4090 | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7168/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7167/comments | https://api.github.com/repos/huggingface/datasets/issues/7167/events | https://github.com/huggingface/datasets/issues/7167 | 2,546,708,014 | I_kwDODunzps6Xy64u | 7,167 | Error Mapping on sd3, sdxl and upcoming flux controlnet training scripts in diffusers | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"this is happening on large datasets, if anyone happens upon this i was able to fix by changing\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)\r\n```\r\n\r\nto\r\n\r\n```\r\ntrain_dataset = train_dataset.map(compute_embeddings_fn, batched=True, batch_size=16, new_fingerprint=new_fingerprint)\r\n```"
] | "2024-09-25T01:39:51Z" | "2024-09-30T05:28:15Z" | "2024-09-30T05:28:04Z" | NONE | null | ### Describe the bug
The `map` call that computes embeddings fails partway through with a pyarrow offset overflow:
```
Map: 6%|██████ | 8000/138120 [19:27<5:16:36, 6.85 examples/s]
Traceback (most recent call last):
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1416, in <module>
main(args)
File "/workspace/diffusers/examples/controlnet/train_controlnet_sd3.py", line 1132, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 560, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3035, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_dataset.py", line 3461, in _map_single
writer.write_batch(batch)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 567, in write_batch
self.write_table(pa_table, writer_batch_size)
File "/usr/local/lib/python3.11/dist-packages/datasets/arrow_writer.py", line 579, in write_table
pa_table = pa_table.combine_chunks()
^^^^^^^^^^^^^^^^^^^^^^^^^
File "pyarrow/table.pxi", line 4387, in pyarrow.lib.Table.combine_chunks
File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
^^^^^^
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1174, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 769, in simple_launcher
```
### Steps to reproduce the bug
The same dataset trains without problems with the sd1.5 ControlNet training script.
### Expected behavior
The script should run without randomly erroring as above.
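As the author's comment above notes, capping the `map` batch size worked around this. A sketch of the change, where `compute_embeddings_fn` and `new_fingerprint` come from the quoted training script (the 32-bit-offset explanation is an assumption based on the error message):

```python
# pyarrow string/list arrays use 32-bit offsets; writing smaller batches
# reportedly avoids the "offset overflow while concatenating arrays" error.
train_dataset = train_dataset.map(
    compute_embeddings_fn,
    batched=True,
    batch_size=16,  # default is 1000
    new_fingerprint=new_fingerprint,
)
```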
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-6.5.0-44-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.25.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.3
- `fsspec` version: 2024.6.1
Training on an A100 | {
"avatar_url": "https://avatars.githubusercontent.com/u/90132896?v=4",
"events_url": "https://api.github.com/users/Night1099/events{/privacy}",
"followers_url": "https://api.github.com/users/Night1099/followers",
"following_url": "https://api.github.com/users/Night1099/following{/other_user}",
"gists_url": "https://api.github.com/users/Night1099/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Night1099",
"id": 90132896,
"login": "Night1099",
"node_id": "MDQ6VXNlcjkwMTMyODk2",
"organizations_url": "https://api.github.com/users/Night1099/orgs",
"received_events_url": "https://api.github.com/users/Night1099/received_events",
"repos_url": "https://api.github.com/users/Night1099/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Night1099/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Night1099/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Night1099",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7167/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7166/comments | https://api.github.com/repos/huggingface/datasets/issues/7166/events | https://github.com/huggingface/datasets/pull/7166 | 2,545,608,736 | PR_kwDODunzps58h8pd | 7,166 | fix docstring code example for distributed shuffle | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7166). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-24T14:39:54Z" | "2024-09-24T14:42:41Z" | "2024-09-24T14:40:14Z" | MEMBER | null | close https://github.com/huggingface/datasets/issues/7163 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7166/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7166.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7166",
"merged_at": "2024-09-24T14:40:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7166.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7166"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7165/comments | https://api.github.com/repos/huggingface/datasets/issues/7165/events | https://github.com/huggingface/datasets/pull/7165 | 2,544,972,541 | PR_kwDODunzps58fva1 | 7,165 | fix increase_load_count | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7165). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"I tested a few load_dataset and they do show up in download stats now",
"Thanks for having noticed and fixed."
] | "2024-09-24T10:14:40Z" | "2024-09-24T17:31:07Z" | "2024-09-24T13:48:00Z" | MEMBER | null | it was failing since 3.0 and therefore not updating download counts on HF or in our dashboard | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7165/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7165.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7165",
"merged_at": "2024-09-24T13:48:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7165.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7165"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7164/comments | https://api.github.com/repos/huggingface/datasets/issues/7164/events | https://github.com/huggingface/datasets/issues/7164 | 2,544,757,297 | I_kwDODunzps6Xreox | 7,164 | fsspec.exceptions.FSTimeoutError when downloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/38216460?v=4",
"events_url": "https://api.github.com/users/timonmerk/events{/privacy}",
"followers_url": "https://api.github.com/users/timonmerk/followers",
"following_url": "https://api.github.com/users/timonmerk/following{/other_user}",
"gists_url": "https://api.github.com/users/timonmerk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timonmerk",
"id": 38216460,
"login": "timonmerk",
"node_id": "MDQ6VXNlcjM4MjE2NDYw",
"organizations_url": "https://api.github.com/users/timonmerk/orgs",
"received_events_url": "https://api.github.com/users/timonmerk/received_events",
"repos_url": "https://api.github.com/users/timonmerk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timonmerk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timonmerk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timonmerk",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! If you check the dataset loading script [here](https://huggingface.co/datasets/openslr/librispeech_asr/blob/main/librispeech_asr.py) you'll see that it downloads the data from OpenSLR, and apparently their storage has timeout issues. It would be great to ultimately host the dataset on Hugging Face instead.\r\n\r\nIn the meantime I can only recommend to try again later :/",
"Ok, still many thanks!",
"I'm also getting this same error but for `CSTR-Edinburgh/vctk`, so I don't think it's the remote host that's timing out, since I also time out at exactly 5 minutes. It seems there is a universal fsspec timeout that's getting hit starting in v3.",
"in v3 we cleaned the download parts of the library to make it more robust for HF downloads and to simplify support of script-based datasets. As a side effect it's not the same code that is used for other hosts, maybe time out handling changed. Anyway it should be possible to tweak fsspec to use retries\r\n\r\nFor example using [aiohttp_retry](https://github.com/inyutin/aiohttp_retry) maybe (haven't tried) ?\r\n\r\n```python\r\nimport fsspec\r\nfrom aiohttp_retry import RetryClient\r\n\r\nfsspec.filesystem(\"http\")._session = RetryClient()\r\n```\r\n\r\nrelated topic : https://github.com/huggingface/datasets/issues/7175",
"Adding a timeout argument to the `fs.get_file` call in `fsspec_get` in `datasets/utils/file_utils.py` might fix this ([source code](https://github.com/huggingface/datasets/blob/65f6eb54aa0e8bb44cea35deea28e0e8fecc25b9/src/datasets/utils/file_utils.py#L330)):\r\n\r\n```python\r\nfs.get_file(path, temp_file.name, callback=callback, timeout=3600)\r\n```\r\n\r\nSetting `timeout=1` fails after about one second, so setting it to 3600 should give us 1h. Havn't really tested this though. I'm also not sure what implications this has and if it causes errors for other `fs` implementations/configurations.\r\n\r\nThis is using `datasets==3.0.1` and Python 3.11.6.\r\n\r\n---\r\n\r\nEdit: This doesn't seem to change the timeout time, but add a second timeout counter (probably in `fsspec/asyn.py/sync`). So one can reduce the time for downloading like this, but not expand.\r\n\r\n---\r\n\r\nEdit 2: `fs` is of type `fsspec.implementations.http.HTTPFileSystem` which initializes a `aiohttp.ClientSession` using `client_kwargs`. We can pass these when calling `load_dataset`.\r\n\r\n**TLDR; This fixes it:**\r\n\r\n```python\r\nimport datasets, aiohttp\r\ndataset = datasets.load_dataset(\r\n dataset_name,\r\n storage_options={'client_kwargs': {'timeout': aiohttp.ClientTimeout(total=3600)}}\r\n)\r\n```"
] | "2024-09-24T08:45:05Z" | "2024-10-26T15:11:13Z" | null | NONE | null | ### Describe the bug
I am trying to download the `librispeech_asr` `clean` dataset, which results in an `FSTimeoutError` exception after downloading around 61% of the data.
### Steps to reproduce the bug
```
import datasets
datasets.load_dataset("librispeech_asr", "clean")
```
The output is as follows:
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:06, 13.2MB/s]Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 56, in _runner
> result[0] = await coro
> ^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/implementations/http.py", line 262, in _get_file
> chunk = await r.content.read(chunk_size)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 393, in read
> await self._wait("read")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/streams.py", line 311, in _wait
> with self._timer:
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/aiohttp/helpers.py", line 713, in __exit__
> raise asyncio.TimeoutError from None
> TimeoutError
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/load_dataset.py", line 3, in <module>
> datasets.load_dataset("librispeech_asr", "clean")
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/load.py", line 2096, in load_dataset
> builder_instance.download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 924, in download_and_prepare
> self._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 1647, in _download_and_prepare
> super()._download_and_prepare(
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/builder.py", line 977, in _download_and_prepare
> split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/2712a8f82f0d20807a56faadcd08734f9bdd24c850bb118ba21ff33ebff0432f/librispeech_asr.py", line 115, in _split_generators
> archive_path = dl_manager.download(_DL_URLS[self.config.name])
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 159, in download
> downloaded_path_or_paths = map_nested(
> ^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 512, in map_nested
> _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/py_utils.py", line 380, in _single_map_nested
> return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 216, in _download_batched
> self._download_single(url_or_filename, download_config=download_config)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/download/download_manager.py", line 225, in _download_single
> out = cached_path(url_or_filename, download_config=download_config)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 205, in cached_path
> output_path = get_from_cache(
> ^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 415, in get_from_cache
> fsspec_get(url, temp_file, storage_options=storage_options, desc=download_desc, disable_tqdm=disable_tqdm)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/datasets/utils/file_utils.py", line 334, in fsspec_get
> fs.get_file(path, temp_file.name, callback=callback)
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 118, in wrapper
> return sync(self.loop, func, *args, **kwargs)
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> File "/Users/Timon/Documents/iEEG_deeplearning/wav2vec_pretrain/.venv/lib/python3.12/site-packages/fsspec/asyn.py", line 101, in sync
> raise FSTimeoutError from return_result
> fsspec.exceptions.FSTimeoutError
> Downloading data: 61%|██████████████▋ | 3.92G/6.39G [05:00<03:09, 13.0MB/s]
### Expected behavior
The download should complete without timing out.
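As reported in the comments, a workaround is to raise the aiohttp client timeout through `storage_options`; the one-hour value below is an arbitrary choice:

```python
import aiohttp
import datasets

dataset = datasets.load_dataset(
    "librispeech_asr",
    "clean",
    storage_options={"client_kwargs": {"timeout": aiohttp.ClientTimeout(total=3600)}},
)
```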
### Environment info
Python version 3.12.6
Dependencies:
> dependencies = [
> "accelerate>=0.34.2",
> "datasets[audio]>=3.0.0",
> "ipython>=8.18.1",
> "librosa>=0.10.2.post1",
> "torch>=2.4.1",
> "torchaudio>=2.4.1",
> "transformers>=4.44.2",
> ]
MacOS 14.6.1 (23G93) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7164/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7163/comments | https://api.github.com/repos/huggingface/datasets/issues/7163/events | https://github.com/huggingface/datasets/issues/7163 | 2,542,361,234 | I_kwDODunzps6XiVqS | 7,163 | Set explicit seed in iterable dataset ddp shuffling example | {
"avatar_url": "https://avatars.githubusercontent.com/u/5719745?v=4",
"events_url": "https://api.github.com/users/alex-hh/events{/privacy}",
"followers_url": "https://api.github.com/users/alex-hh/followers",
"following_url": "https://api.github.com/users/alex-hh/following{/other_user}",
"gists_url": "https://api.github.com/users/alex-hh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alex-hh",
"id": 5719745,
"login": "alex-hh",
"node_id": "MDQ6VXNlcjU3MTk3NDU=",
"organizations_url": "https://api.github.com/users/alex-hh/orgs",
"received_events_url": "https://api.github.com/users/alex-hh/received_events",
"repos_url": "https://api.github.com/users/alex-hh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alex-hh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alex-hh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alex-hh",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"thanks for reporting !"
] | "2024-09-23T11:34:06Z" | "2024-09-24T14:40:15Z" | "2024-09-24T14:40:15Z" | CONTRIBUTOR | null | ### Describe the bug
In the examples section of the iterable dataset docs (https://huggingface.co/docs/datasets/en/package_reference/main_classes#datasets.IterableDataset), the DDP example shuffles without seeding:
```python
from datasets.distributed import split_dataset_by_node
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(buffer_size=10_000) # will shuffle the shards order and use a shuffle buffer when you start iterating
ids = split_dataset_by_node(ds, world_size=8, rank=0) # will keep only 512 / 8 = 64 shards from the shuffled lists of shards when you start iterating
dataloader = torch.utils.data.DataLoader(ids, num_workers=4) # will assign 64 / 4 = 16 shards from this node's list of shards to each worker when you start iterating
for example in ids:
pass
```
This code would, I think, raise an error due to the lack of an explicit seed:
https://github.com/huggingface/datasets/blob/2eb4edb97e1a6af2ea62738ec58afbd3812fc66e/src/datasets/iterable_dataset.py#L1707-L1711
### Steps to reproduce the bug
Run the example code above.
### Expected behavior
Add explicit seeding to the example code.
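For reference, a seeded version of the shuffle line (the seed value itself is arbitrary):

```python
ids = ds.to_iterable_dataset(num_shards=512)
ids = ids.shuffle(seed=42, buffer_size=10_000)  # explicit seed so every node shuffles the shard order identically
```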
### Environment info
latest datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7163/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7162/comments | https://api.github.com/repos/huggingface/datasets/issues/7162/events | https://github.com/huggingface/datasets/pull/7162 | 2,542,323,382 | PR_kwDODunzps58WlX5 | 7,162 | Support JSON lines with empty struct | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7162). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-23T11:16:12Z" | "2024-09-23T11:30:08Z" | "2024-09-23T11:30:06Z" | MEMBER | null | Support JSON lines with empty struct.
Fix #7161.
Related to:
- #7160 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7162/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7162.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7162",
"merged_at": "2024-09-23T11:30:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7162.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7162"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7161/comments | https://api.github.com/repos/huggingface/datasets/issues/7161/events | https://github.com/huggingface/datasets/issues/7161 | 2,541,971,931 | I_kwDODunzps6Xg2nb | 7,161 | JSON lines with empty struct raise ArrowTypeError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-23T08:48:56Z" | "2024-09-25T04:43:44Z" | "2024-09-23T11:30:07Z" | MEMBER | null | JSON lines with empty struct raise ArrowTypeError: struct fields don't match or are in the wrong order
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
> ArrowTypeError: struct fields don't match or are in the wrong order: Input fields: struct<> output fields: struct<pov_count: int64, update_count: int64, citation_needed_count: int64>
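A minimal reproduction sketch; the file name and field names are illustrative, and on affected versions the second line should trigger the error above:

```python
import datasets

with open("data.jsonl", "w") as f:
    f.write('{"counts": {"pov_count": 1, "update_count": 2, "citation_needed_count": 0}}\n')
    f.write('{"counts": {}}\n')  # empty struct: no fields to match against the inferred schema

ds = datasets.load_dataset("json", data_files="data.jsonl")
```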
Related to:
- #7159 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7161/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7160/comments | https://api.github.com/repos/huggingface/datasets/issues/7160/events | https://github.com/huggingface/datasets/pull/7160 | 2,541,877,813 | PR_kwDODunzps58VDhq | 7,160 | Support JSON lines with missing struct fields | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7160). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-23T08:04:09Z" | "2024-09-23T11:09:19Z" | "2024-09-23T11:09:17Z" | MEMBER | null | Support JSON lines with missing struct fields.
Fix #7159.
The implemented test raised:
```
TypeError: Couldn't cast array of type
struct<age: int64>
to
{'age': Value(dtype='int32', id=None), 'name': Value(dtype='string', id=None)}
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7160/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7160.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7160",
"merged_at": "2024-09-23T11:09:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7160.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7160"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7159/comments | https://api.github.com/repos/huggingface/datasets/issues/7159/events | https://github.com/huggingface/datasets/issues/7159 | 2,541,865,613 | I_kwDODunzps6XgcqN | 7,159 | JSON lines with missing struct fields raise TypeError: Couldn't cast array | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Hello,\r\n\r\nI have still the same issue when loading the dataset with the new version:\r\n[https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5](https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5)\r\n\r\nI have downloaded and unzipped the wikimedia/structured-wikipedia dataset locally but when loading I have the same issue.\r\n\r\n```\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\"/gpfsdsdir/dataset/HuggingFace/wikimedia/structured-wikipedia/20240916.fr\")\r\n```\r\n```\r\nTypeError: Couldn't cast array of type\r\nstruct<content_url: string, width: int64, height: int64, alternative_text: string>\r\nto\r\n{'content_url': Value(dtype='string', id=None), 'width': Value(dtype='int64', id=None), 'height': Value(dtype='int64', id=None)}\r\n\r\nThe above exception was the direct cause of the following exception:\r\n```\r\nMy version of datasets is 3.0.1"
] | "2024-09-23T07:57:58Z" | "2024-10-21T08:07:07Z" | "2024-09-23T11:09:18Z" | MEMBER | null | JSON lines with missing struct fields raise TypeError: Couldn't cast array of type.
See example: https://huggingface.co/datasets/wikimedia/structured-wikipedia/discussions/5
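A minimal reproduction sketch, assuming the schema is inferred from the first line (file name and field names are illustrative):

```python
import datasets

with open("data.jsonl", "w") as f:
    f.write('{"user": {"name": "a", "age": 1}}\n')
    f.write('{"user": {"age": 2}}\n')  # "name" missing: raises the cast error on affected versions

ds = datasets.load_dataset("json", data_files="data.jsonl")
```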
One would expect the missing struct fields to be added with null values. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7159/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7158/comments | https://api.github.com/repos/huggingface/datasets/issues/7158/events | https://github.com/huggingface/datasets/pull/7158 | 2,541,494,765 | PR_kwDODunzps58Tuw9 | 7,158 | google colab ex | {
"avatar_url": "https://avatars.githubusercontent.com/u/157789664?v=4",
"events_url": "https://api.github.com/users/docfhsp/events{/privacy}",
"followers_url": "https://api.github.com/users/docfhsp/followers",
"following_url": "https://api.github.com/users/docfhsp/following{/other_user}",
"gists_url": "https://api.github.com/users/docfhsp/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/docfhsp",
"id": 157789664,
"login": "docfhsp",
"node_id": "U_kgDOCWet4A",
"organizations_url": "https://api.github.com/users/docfhsp/orgs",
"received_events_url": "https://api.github.com/users/docfhsp/received_events",
"repos_url": "https://api.github.com/users/docfhsp/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/docfhsp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/docfhsp/subscriptions",
"type": "User",
"url": "https://api.github.com/users/docfhsp",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-09-23T03:29:50Z" | "2024-10-09T03:52:49Z" | null | NONE | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7158/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7158",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7158"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7157/comments | https://api.github.com/repos/huggingface/datasets/issues/7157/events | https://github.com/huggingface/datasets/pull/7157 | 2,540,354,890 | PR_kwDODunzps58P-1R | 7,157 | Fix zero proba interleave datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7157). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-21T15:19:14Z" | "2024-09-24T14:33:54Z" | "2024-09-24T14:33:54Z" | MEMBER | null | fix https://github.com/huggingface/datasets/issues/7147 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7157/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7157",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7157"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7156/comments | https://api.github.com/repos/huggingface/datasets/issues/7156/events | https://github.com/huggingface/datasets/issues/7156 | 2,539,360,617 | I_kwDODunzps6XW5Fp | 7,156 | interleave_datasets resets shuffle state | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-09-20T17:57:54Z" | "2024-09-20T17:57:54Z" | null | NONE | null | ### Describe the bug
```
import datasets
import torch.utils.data
def gen(shards):
yield {"shards": shards}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={'shards': list(range(25))}
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
if __name__ == "__main__":
main()
```
### Steps to reproduce the bug
Run the script; it will output:
```
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([ 1, 9, 17, 1, 9, 17, 1, 9])]}
{'shards': [tensor([ 2, 10, 18, 2, 10, 18, 2, 10])]}
{'shards': [tensor([ 3, 11, 19, 3, 11, 19, 3, 11])]}
{'shards': [tensor([ 4, 12, 20, 4, 12, 20, 4, 12])]}
{'shards': [tensor([ 5, 13, 21, 5, 13, 21, 5, 13])]}
{'shards': [tensor([ 6, 14, 22, 6, 14, 22, 6, 14])]}
{'shards': [tensor([ 7, 15, 23, 7, 15, 23, 7, 15])]}
{'shards': [tensor([ 0, 8, 16, 24, 0, 8, 16, 24])]}
{'shards': [tensor([17, 1, 9, 17, 1, 9, 17, 1])]}
{'shards': [tensor([18, 2, 10, 18, 2, 10, 18, 2])]}
```
### Expected behavior
The shards should be shuffled.
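A possible workaround, sketched below under the assumption that re-shuffling the interleaved stream is acceptable, is to call `shuffle()` again on the result of `interleave_datasets` (the names reuse the script above):
```python
dataset = datasets.interleave_datasets(
    [dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
# Re-apply shuffling after interleaving, since the shuffle state appears to be reset.
dataset = dataset.shuffle(buffer_size=1)
```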
### Environment info
- `datasets` version: 3.0.0
- Platform: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.25.0
- PyArrow version: 17.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.6.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7156/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7155/comments | https://api.github.com/repos/huggingface/datasets/issues/7155/events | https://github.com/huggingface/datasets/issues/7155 | 2,533,641,870 | I_kwDODunzps6XBE6O | 7,155 | Dataset viewer not working! Failure due to more than 32 splits. | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I have fixed it! But I would appreciate a new feature wheere I could iterate over and see what each file looks like. "
] | "2024-09-18T12:43:21Z" | "2024-09-18T13:20:03Z" | "2024-09-18T13:20:03Z" | NONE | null | Hello guys,
I have a dataset, and I didn't know that more than 32 splits are not supported. Now my dataset viewer is not working. I no longer have the dataset locally on my node, and recreating it would take a week, but I have to publish the dataset this coming Monday. I have read about the recommended practice for resolving this and avoiding it in the future, but at the moment I need a hard fix for two of my datasets.
I also don't want to mess with or change anything: everyone in public should still be able to see the datasets and interact with them. Can you please help me?
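In the meantime, streaming is one way to peek at what the files look like without the viewer. A minimal sketch for the two datasets linked below (the split name here is a placeholder, not necessarily one of the repos' actual splits):
```python
from datasets import load_dataset

# "train" is a placeholder split name used for illustration only
ds = load_dataset("laion/Wikipedia-X", split="train", streaming=True)
print(next(iter(ds)))
```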
https://huggingface.co/datasets/laion/Wikipedia-X
https://huggingface.co/datasets/laion/Wikipedia-X-Full | {
"avatar_url": "https://avatars.githubusercontent.com/u/81933585?v=4",
"events_url": "https://api.github.com/users/sleepingcat4/events{/privacy}",
"followers_url": "https://api.github.com/users/sleepingcat4/followers",
"following_url": "https://api.github.com/users/sleepingcat4/following{/other_user}",
"gists_url": "https://api.github.com/users/sleepingcat4/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sleepingcat4",
"id": 81933585,
"login": "sleepingcat4",
"node_id": "MDQ6VXNlcjgxOTMzNTg1",
"organizations_url": "https://api.github.com/users/sleepingcat4/orgs",
"received_events_url": "https://api.github.com/users/sleepingcat4/received_events",
"repos_url": "https://api.github.com/users/sleepingcat4/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sleepingcat4/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sleepingcat4/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sleepingcat4",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7155/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7154/comments | https://api.github.com/repos/huggingface/datasets/issues/7154/events | https://github.com/huggingface/datasets/pull/7154 | 2,532,812,323 | PR_kwDODunzps572hBd | 7,154 | Support ndjson data files | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7154). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Thanks for your review, @severo.\r\n\r\nYes, I was aware of this. From internal conversation:\r\n> Please note that although NDJSON was planned to be submitted as an RFC standard spec, it is no longer maintained:\r\n> - See note from the author: https://github.com/ndjson/ndjson-spec/issues/35#issuecomment-1285673417\r\n> - See that their official website domain has expired: https://ndjson.org/ \r\n\r\nThe purpose of this PR is just supporting datasets with ndjson data files (e.g. Wikimedia Enterprise data files), but it should not imply any recommendation or endorsement of this format from our part."
] | "2024-09-18T06:10:10Z" | "2024-09-19T11:25:17Z" | "2024-09-19T11:25:14Z" | MEMBER | null | Support `ndjson` (Newline Delimited JSON) data files.
Fix #7153. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7154/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7154",
"merged_at": "2024-09-19T11:25:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7154"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7153/comments | https://api.github.com/repos/huggingface/datasets/issues/7153/events | https://github.com/huggingface/datasets/issues/7153 | 2,532,788,555 | I_kwDODunzps6W90lL | 7,153 | Support data files with .ndjson extension | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-18T05:54:45Z" | "2024-09-19T11:25:15Z" | "2024-09-19T11:25:15Z" | MEMBER | null | ### Feature request
Support data files with `.ndjson` extension.
### Motivation
We already support data files with `.jsonl` extension.
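Since `.ndjson` files are newline-delimited JSON, just like `.jsonl`, they would be handled by the same JSON builder. A minimal sketch of the intended usage (the file name is a placeholder):
```python
from datasets import load_dataset

# "data.ndjson" is a placeholder path; each line holds one JSON object
ds = load_dataset("json", data_files={"train": "data.ndjson"})
```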
### Your contribution
I am opening a PR. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7153/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7151/comments | https://api.github.com/repos/huggingface/datasets/issues/7151/events | https://github.com/huggingface/datasets/pull/7151 | 2,527,577,048 | PR_kwDODunzps57kyY4 | 7,151 | Align filename prefix splitting with WebDataset library | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | "2024-09-16T06:07:39Z" | "2024-09-16T15:26:36Z" | "2024-09-16T15:26:34Z" | MEMBER | null | Align filename prefix splitting with WebDataset library.
This PR uses the same `base_plus_ext` function as the one used by the `webdataset` library.
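For reference, a sketch of that splitting logic, mirroring the regex used by the `webdataset` library (an illustration, not necessarily the exact source):
```python
import re


def base_plus_ext(path):
    # Split at the first dot of the file *name* component, so dots in
    # directory names are ignored and multi-part extensions survive.
    match = re.match(r"^((?:.*/|)[^.]+)[.]([^/]*)$", path)
    if not match:
        return None, None
    return match.group(1), match.group(2)


base_plus_ext("/some/path/22.0/1.1.png")  # ('/some/path/22.0/1', '1.png')
```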
Fix #7150.
Related to #7144. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7151/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7151",
"merged_at": "2024-09-16T15:26:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7151"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7150/comments | https://api.github.com/repos/huggingface/datasets/issues/7150/events | https://github.com/huggingface/datasets/issues/7150 | 2,527,571,175 | I_kwDODunzps6Wp6zn | 7,150 | WebDataset loader splits keys differently than WebDataset library | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-09-16T06:02:47Z" | "2024-09-16T15:26:35Z" | "2024-09-16T15:26:35Z" | MEMBER | null | As reported by @ragavsachdeva (see discussion here: https://github.com/huggingface/datasets/pull/7144#issuecomment-2348307792), our webdataset loader is not aligned with the `webdataset` library when splitting keys from filenames.
For example, we get a different key splitting for filename `/some/path/22.0/1.1.png`:
- datasets library: `/some/path/22` and `0/1.1.png`
- webdataset library: `/some/path/22.0/1`, `1.png`
```python
import webdataset as wds
wds.tariterators.base_plus_ext("/some/path/22.0/1.1.png")
# ('/some/path/22.0/1', '1.png')
```
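For comparison, the `datasets` result above corresponds to splitting the whole path at its first dot; a quick sketch (illustration only):
```python
filename = "/some/path/22.0/1.1.png"
filename.split(".", 1)  # ['/some/path/22', '0/1.1.png'] -> the datasets behavior above
```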
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7150/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7149/comments | https://api.github.com/repos/huggingface/datasets/issues/7149/events | https://github.com/huggingface/datasets/issues/7149 | 2,524,497,448 | I_kwDODunzps6WeMYo | 7,149 | Datasets Unknown Keyword Argument Error - task_templates | {
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varungupta31",
"id": 51288316,
"login": "varungupta31",
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varungupta31",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks, for reporting.\r\n\r\nWe have been fixing most Hub datasets to remove the deprecated (and now non-supported) task templates, but we missed the \"facebook/winoground\".\r\n\r\nIt is fixed now: https://huggingface.co/datasets/facebook/winoground/discussions/8\r\n\r\n"
] | "2024-09-13T10:30:57Z" | "2024-09-13T14:10:48Z" | "2024-09-13T14:10:48Z" | NONE | null | ### Describe the bug
The following code:
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
gives the error:
```
TypeError: DatasetInfo.__init__() got an unexpected keyword argument 'task_templates'
```
A simple downgrade to `datasets` v2.21.0 solves it.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```python
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
```
### Expected behavior
The dataset should load correctly.
### Environment info
- Datasets version `3.0.0`
- `transformers` version: 4.45.0.dev0
- Platform: Linux-6.8.0-40-generic-x86_64-with-glibc2.35
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Accelerate version: 0.35.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7149/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7148/comments | https://api.github.com/repos/huggingface/datasets/issues/7148/events | https://github.com/huggingface/datasets/issues/7148 | 2,523,833,413 | I_kwDODunzps6WbqRF | 7,148 | Bug: Error when downloading mteb/mtop_domain | {
"avatar_url": "https://avatars.githubusercontent.com/u/77958037?v=4",
"events_url": "https://api.github.com/users/ZiyiXia/events{/privacy}",
"followers_url": "https://api.github.com/users/ZiyiXia/followers",
"following_url": "https://api.github.com/users/ZiyiXia/following{/other_user}",
"gists_url": "https://api.github.com/users/ZiyiXia/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZiyiXia",
"id": 77958037,
"login": "ZiyiXia",
"node_id": "MDQ6VXNlcjc3OTU4MDM3",
"organizations_url": "https://api.github.com/users/ZiyiXia/orgs",
"received_events_url": "https://api.github.com/users/ZiyiXia/received_events",
"repos_url": "https://api.github.com/users/ZiyiXia/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZiyiXia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZiyiXia/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZiyiXia",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Could you please try with `force_redownload` instead?\r\nEDIT:\r\n```python\r\ndata = load_dataset(\"mteb/mtop_domain\", \"en\", download_mode=\"force_redownload\")\r\n```",
"Seems the error is still there",
"I am not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: data = load_dataset(\"mteb/mtop_domain\", \"en\")\r\n\r\nIn [3]: data\r\nOut[3]: DatasetDict({\r\n train: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 15667\r\n })\r\n validation: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 2235\r\n })\r\n test: Dataset({\r\n features: ['id', 'text', 'label', 'label_text'],\r\n num_rows: 4386\r\n })\r\n})\r\n```",
"Just solved this by reinstall Huggingface Hub and datasets. Thanks for your help!"
] | "2024-09-13T04:09:39Z" | "2024-09-14T15:11:35Z" | "2024-09-14T15:11:35Z" | NONE | null | ### Describe the bug
When downloading the dataset "mteb/mtop_domain", I ran into the following error:
```
Traceback (most recent call last):
File "/share/project/xzy/test/test_download.py", line 3, in <module>
data = load_dataset("mteb/mtop_domain", "en", trust_remote_code=True)
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2606, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2277, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1923, in dataset_module_factory
raise e1 from None
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1896, in dataset_module_factory
).get_module()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1507, in get_module
local_path = self.download_loading_script()
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 1467, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 211, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 689, in get_from_cache
fsspec_get(
File "/opt/conda/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 395, in fsspec_get
fs.get_file(path, temp_file.name, callback=callback)
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 648, in get_file
http_get(
File "/opt/conda/lib/python3.10/site-packages/huggingface_hub/file_download.py", line 578, in http_get
raise EnvironmentError(
OSError: Consistency check failed: file should be of size 2191 but has size 2190 ((…)ets/mteb/mtop_domain@main/mtop_domain.py).
We are sorry for the inconvenience. Please retry with `force_download=True`.
If the issue persists, please let us know by opening an issue on https://github.com/huggingface/huggingface_hub.
```
Trying to download through HF `datasets` directly gives the same error as above:
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en")
```
### Steps to reproduce the bug
```python
from datasets import load_dataset
data = load_dataset("mteb/mtop_domain", "en", force_download=True)
```
Both with and without `force_download=True`, I ran into the same error.
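One suggestion from the discussion above is to force a fresh download of everything, including the cached loading script, through `download_mode` (sketch based on that comment):
```python
from datasets import load_dataset

# Suggested in the discussion: re-download all files instead of reusing the cache
data = load_dataset("mteb/mtop_domain", "en", download_mode="force_redownload")
```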
### Expected behavior
The dataset should download successfully.
### Environment info
- datasets version: 2.21.0
- huggingface-hub version: 0.24.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7148/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7148/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7147/comments | https://api.github.com/repos/huggingface/datasets/issues/7147/events | https://github.com/huggingface/datasets/issues/7147 | 2,523,129,465 | I_kwDODunzps6WY-Z5 | 7,147 | IterableDataset strange deadlock | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Yes `interleave_datasets` seems to have an issue with shuffling, could you open a new issue on this ?\r\n\r\nThen regarding the deadlock, it has to do with interleave_dataset with probabilities=[1, 0] with workers that may contain an empty dataset in first position (it can be empty since you distribute 1024 shard to 8 workers, so some workers may not have an example that satisfies your condition `if shard < 25`). It creates an infinite loop, trying to get samples from empty datasets with probability 1.",
"Opened https://github.com/huggingface/datasets/issues/7156\r\n\r\nCan the deadlock be fixed somehow? The point of IterableDataset is so we don't need to preload the entire dataset, which loses some meaning if we need to see how many examples are in the dataset in order to set shards correctly.",
"~~And it is kinda strange that `Commenting out the final shuffle avoids the issue` since if the infinite loop is inside interleave_datasets you'd expect that to happen regardless of the additional shuffle call?~~\r\n\r\nEdit: oh I guess without the shuffle it's guaranteed every worker gets something, but the shuffle makes it so some workers could have nothing\r\n\r\n~~Edit2: maybe the shuffle can be changed so initially it gives one example to each worker, and only starts the random shuffle after that~~ wait it's not about the workers not getting any shards, it's about a worker getting shards but all of the shards it gets are empty shards\r\n\r\nEdit3: If it's trying to get samples from empty datasets, it should be getting back a StopIteration -- and \"all_exhausted\" should mean it eventually discovers all its datasets are empty, and then it should just raise a StopIteration itself. So it seems like there is a reasonable behavior result for this?",
"well the second dataset passed to interleave_datasets is never exhausted, since it's never sampled. But we could also state that the stream of examples from the second dataset is empty if it has probability 0, so I opened https://github.com/huggingface/datasets/pull/7157 to fix the infinite loop issue by ignoring datasets with probability 0, let me know what you think !",
"Thanks for taking a look!\r\n\r\nI think you're right that this is ultimately an issue that the user opts into by specifying a dataset with probability 0, because the user is basically saying \"I want to force this `interleave_datasets` call to run forever\" and yet one of the workers can end up having only empty shards to mix...\r\n\r\nThat said it's probably not a good idea to randomly change the behavior of `interleave_datasets` with probability 0, I can't be the only one that uses it to repeat many different datasets (since there is no `datasets.repeat()` function). https://xkcd.com/1172/\r\n\r\nI think just the knowledge that filtering out probability 0 datasets fixes the deadlock is good enough for me. I can filter it out on my side and add a restart loop around the dataloader instead.\r\n\r\nThanks again for investigating.",
"Ok I see ! We can also add .repeat() as well"
] | "2024-09-12T18:59:33Z" | "2024-09-23T09:32:27Z" | "2024-09-21T17:37:34Z" | NONE | null | ### Describe the bug
```
import datasets
import torch.utils.data
num_shards = 1024
def gen(shards):
for shard in shards:
if shard < 25:
yield {"shard": shard}
def main():
dataset = datasets.IterableDataset.from_generator(
gen,
gen_kwargs={"shards": list(range(num_shards))},
)
dataset = dataset.shuffle(buffer_size=1)
dataset = datasets.interleave_datasets(
[dataset, dataset], probabilities=[1, 0], stopping_strategy="all_exhausted"
)
dataset = dataset.shuffle(buffer_size=1)
dataloader = torch.utils.data.DataLoader(
dataset,
batch_size=8,
num_workers=8,
)
for i, batch in enumerate(dataloader):
print(batch)
if i >= 10:
break
print()
if __name__ == "__main__":
for _ in range(100):
main()
```
### Steps to reproduce the bug
If you run the script above, at some point it will freeze.
- Changing `num_shards` from 1024 to 25 avoids the issue
- Commenting out the final shuffle avoids the issue
- Commenting out the interleave_datasets call avoids the issue
As an aside, if you comment out just the final shuffle, the output from interleave_datasets is not shuffled at all even though there's the shuffle before it. So something about that shuffle config is not being propagated to interleave_datasets.
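A workaround that follows from the discussion above is to drop any dataset whose sampling probability is 0 before interleaving; a minimal sketch (assumption: the probability-0 entry can simply be removed):
```python
# Workaround sketch: filter out zero-probability datasets before interleaving.
sources = [dataset, dataset]
probabilities = [1, 0]
kept = [(d, p) for d, p in zip(sources, probabilities) if p > 0]
dataset = datasets.interleave_datasets(
    [d for d, _ in kept],
    probabilities=[p for _, p in kept],
    stopping_strategy="all_exhausted",
)
```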
### Expected behavior
The script should not freeze.
### Environment info
- `datasets` version: 3.0.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.5
- `huggingface_hub` version: 0.24.7
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1
I observed this with 2.21.0 initially, then tried upgrading to 3.0.0 and could still repro. | {
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonathanasdf",
"id": 511073,
"login": "jonathanasdf",
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonathanasdf",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7147/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7146/comments | https://api.github.com/repos/huggingface/datasets/issues/7146/events | https://github.com/huggingface/datasets/pull/7146 | 2,519,820,162 | PR_kwDODunzps57KqRV | 7,146 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7146). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-11T13:53:27Z" | "2024-09-12T04:34:08Z" | "2024-09-12T04:34:06Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7146/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7146.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7146",
"merged_at": "2024-09-12T04:34:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7146.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7146"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7145/comments | https://api.github.com/repos/huggingface/datasets/issues/7145/events | https://github.com/huggingface/datasets/pull/7145 | 2,519,789,724 | PR_kwDODunzps57Kjjc | 7,145 | Release: 3.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7145). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-09-11T13:41:47Z" | "2024-09-11T13:48:42Z" | "2024-09-11T13:48:41Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7145/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7145",
"merged_at": "2024-09-11T13:48:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7145"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7144/comments | https://api.github.com/repos/huggingface/datasets/issues/7144/events | https://github.com/huggingface/datasets/pull/7144 | 2,519,393,560 | PR_kwDODunzps57JLmb | 7,144 | Fix key error in webdataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26804893?v=4",
"events_url": "https://api.github.com/users/ragavsachdeva/events{/privacy}",
"followers_url": "https://api.github.com/users/ragavsachdeva/followers",
"following_url": "https://api.github.com/users/ragavsachdeva/following{/other_user}",
"gists_url": "https://api.github.com/users/ragavsachdeva/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ragavsachdeva",
"id": 26804893,
"login": "ragavsachdeva",
"node_id": "MDQ6VXNlcjI2ODA0ODkz",
"organizations_url": "https://api.github.com/users/ragavsachdeva/orgs",
"received_events_url": "https://api.github.com/users/ragavsachdeva/received_events",
"repos_url": "https://api.github.com/users/ragavsachdeva/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ragavsachdeva/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ragavsachdeva/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ragavsachdeva",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"hi ! What version of `datasets` are you using ? Is this issue also happening with `datasets==3.0.0` ?\r\nAsking because we made sure to replicate the official webdataset logic, which is to use the latest dot as separator between the sample base name and the key",
"Hi, yes this is still a problem on `datasets==3.0.0`.\r\n\r\nI was using `datasets=2.20.0` and in that version you get the key error.\r\n\r\nI just upgraded to `datasets==3.0.0` and in this version, you do not get a key error because it sets all keys to none by default in `_generate_examples` function:\r\n\r\n```python\r\nif field_name not in example:\r\n example[field_name] = None\r\n```\r\n\r\nHowever, the behaviour is still incorrect. This `if` condition is triggered because the filename is not split properly and it returns the data as `None` when it shouldn't.\r\n\r\n> we made sure to replicate the official webdataset logic, which is to use the latest dot as separator\r\n \r\nAh, but that's not what `split(, 1)` does though. This is exactly why I'm suggesting to use `rsplit` instead. In general, using `rsplit` should not be a breaking change I believe.",
"Hi @ragavsachdeva,\r\n\r\nWe already had this discussion in the issue you have linked:\r\n- #6880\r\n- I even opened a PR with your proposed fix:\r\n - #6888\r\n\r\nHowever, we decided not to implement this feature because it is NOT aligned with the behavior of the `webdataset` library:\r\n> The prefix of a file is all directory components of the file plus the file name component up to the *first* “.” in the file name.\r\n```python\r\nIn [1]: import webdataset as wds\r\n\r\nIn [2]: wds.tariterators.base_plus_ext(\"22.05.png\")\r\nOut[2]: ('22', '05.png')\r\n```\r\n\r\n",
"Ah, my apologies I missed https://github.com/huggingface/datasets/pull/6888 (clearly didn't do my due diligence). It's such a weird convention to have though. My keys are `/some/path/22.0/1.1.png` and it splits them at `/some/path/22` and `.0/1.1.png`(!) I'm okay with this PR not being merged though. Thanks for your time.",
"Actually `datasets` is not behaving correctly in this case and should not split as `.0/1.1.png` - even webdataset handles this correctly via their regex `^((?:.*/|)[^.]+)[.]([^/]*)$` in `wds.tariterators.base_plus_ext` here:\r\n\r\nhttps://github.com/webdataset/webdataset/blob/87bd5aa41602d57f070f65a670893ee625702f2f/webdataset/tariterators.py#L36",
"Oh.. the intention with that regex is to capture \"multi-part\" extensions e.g. `.tar.gz`. Makes sense. So `rsplit` isn't the solution then and neither is `split`. This expression makes so much more sense. Nice find! I'm assuming you'll add a patch?",
"Issue addressed by:\r\n- #7151"
] | "2024-09-11T10:50:17Z" | "2024-09-16T06:23:37Z" | "2024-09-13T04:31:37Z" | NONE | null | I was running into
```
example[field_name] = {"path": example["__key__"] + "." + field_name, "bytes": example[field_name]}
KeyError: 'png'
```
The issue is that a filename may contain multiple "." characters, e.g. `22.05.png`. Changing `split` to `rsplit` fixes it.
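For illustration, here is how the two splitting strategies differ on such a filename, next to the regex that `webdataset` itself uses (this snippet is my own illustration, not part of the diff):
```python
import re

key = "22.05.png"
print(key.split(".", 1))   # ['22', '05.png']  -> current behavior, splits at the first dot
print(key.rsplit(".", 1))  # ['22.05', 'png']  -> proposed behavior, splits at the last dot

# webdataset's own separator logic (from wds.tariterators.base_plus_ext):
pattern = re.compile(r"^((?:.*/|)[^.]+)[.]([^/]*)$")
print(pattern.match(key).groups())  # ('22', '05.png')
```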
Related https://github.com/huggingface/datasets/issues/6880 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7144/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7144.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7144",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7144.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7144"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7143/comments | https://api.github.com/repos/huggingface/datasets/issues/7143/events | https://github.com/huggingface/datasets/pull/7143 | 2,512,327,211 | PR_kwDODunzps56xCm6 | 7,143 | Modify add_column() to optionally accept a FeatureType as param | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Requesting review @lhoestq \r\nI will also update the docs if this looks good.",
"Cool ! maybe you can rename the argument `feature` and with type `FeatureType` ? This way it would work the same way as `.cast_column()` ?",
"@lhoestq Since there is no way to get a `pyarrow.Schema` from a `FeatureType`, I had to go via `Features`. How does this look?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7143). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"@lhoestq done!",
"@lhoestq anything pending on this?"
] | "2024-09-08T10:56:57Z" | "2024-09-17T06:01:23Z" | "2024-09-16T15:11:01Z" | CONTRIBUTOR | null | Fix #7142.
**Before (Add + Cast)**:
```
from datasets import load_dataset, Value
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", lst)
# Assigns int64 to new_col by default
print(ds.features)
ds = ds.cast_column("new_col", Value(dtype="uint16", id=None))
print(ds.features)
```
**Before (Numpy Workaround)**:
```
from datasets import load_dataset
import numpy as np
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
ds = ds.add_column("new_col", np.array(lst, dtype=np.uint16))
print(ds.features)
```
**After**:
```
from datasets import load_dataset, Value
ds = load_dataset("rotten_tomatoes", split="test")
lst = [i for i in range(len(ds))]
val = Value(dtype="uint16", id=None)
ds = ds.add_column("new_col", lst, feature=val)
print(ds.features)
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7143/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7143.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7143",
"merged_at": "2024-09-16T15:11:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7143.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7143"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7142/comments | https://api.github.com/repos/huggingface/datasets/issues/7142/events | https://github.com/huggingface/datasets/issues/7142 | 2,512,244,938 | I_kwDODunzps6VvdDK | 7,142 | Specifying datatype when adding a column to a dataset. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"#self-assign"
] | "2024-09-08T07:34:24Z" | "2024-09-17T03:46:32Z" | "2024-09-17T03:46:32Z" | CONTRIBUTOR | null | ### Feature request
There should be a way to specify the datatype of a column in `datasets.add_column()`.
### Motivation
To specify a custom datatype, we have to use `datasets.add_column()` followed by `datasets.cast_column()` which is slow for large datasets. Another workaround is to pass a `numpy.array()` of desired type to the `datasets.add_column()` function.
IMO this functionality should be natively supported.
https://discuss.huggingface.co/t/add-column-with-a-particular-type-in-datasets/95674
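For reference, a minimal sketch of the two current workarounds (column names are illustrative):
```python
import numpy as np
from datasets import Dataset, Value

ds = Dataset.from_dict({"text": ["a", "b"]})

# Workaround 1: add the column, then cast it (slow for large datasets)
ds1 = ds.add_column("new_col", [0, 1]).cast_column("new_col", Value("uint16"))

# Workaround 2: pass a numpy array that already has the desired dtype
ds2 = ds.add_column("new_col", np.array([0, 1], dtype=np.uint16))
```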
### Your contribution
I can submit a PR for this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/20443618?v=4",
"events_url": "https://api.github.com/users/varadhbhatnagar/events{/privacy}",
"followers_url": "https://api.github.com/users/varadhbhatnagar/followers",
"following_url": "https://api.github.com/users/varadhbhatnagar/following{/other_user}",
"gists_url": "https://api.github.com/users/varadhbhatnagar/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/varadhbhatnagar",
"id": 20443618,
"login": "varadhbhatnagar",
"node_id": "MDQ6VXNlcjIwNDQzNjE4",
"organizations_url": "https://api.github.com/users/varadhbhatnagar/orgs",
"received_events_url": "https://api.github.com/users/varadhbhatnagar/received_events",
"repos_url": "https://api.github.com/users/varadhbhatnagar/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/varadhbhatnagar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varadhbhatnagar/subscriptions",
"type": "User",
"url": "https://api.github.com/users/varadhbhatnagar",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7142/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7141/comments | https://api.github.com/repos/huggingface/datasets/issues/7141/events | https://github.com/huggingface/datasets/issues/7141 | 2,510,797,653 | I_kwDODunzps6Vp7tV | 7,141 | Older datasets throwing safety errors with 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1050316?v=4",
"events_url": "https://api.github.com/users/alvations/events{/privacy}",
"followers_url": "https://api.github.com/users/alvations/followers",
"following_url": "https://api.github.com/users/alvations/following{/other_user}",
"gists_url": "https://api.github.com/users/alvations/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvations",
"id": 1050316,
"login": "alvations",
"node_id": "MDQ6VXNlcjEwNTAzMTY=",
"organizations_url": "https://api.github.com/users/alvations/orgs",
"received_events_url": "https://api.github.com/users/alvations/received_events",
"repos_url": "https://api.github.com/users/alvations/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvations/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvations/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvations",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I am also getting this error with this dataset: https://huggingface.co/datasets/google/IFEval",
"Me too, didn't have this issue few hours ago.",
"same observation. I even downgraded `datasets==2.20.0` and `huggingface_hub==0.23.5` leading me to believe it's an issue on the server.\r\n\r\nany known workarounds?\r\n",
"Not a good idea, but commenting out the whole security block at `/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py` is a temporary workaround:\r\n\r\n```\r\n #security = kwargs.pop(\"security\", None)\r\n #if security is not None:\r\n # security = BlobSecurityInfo(\r\n # safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\n # )\r\n #self.security = security\r\n```\r\n",
"Uploading a dataset to Huggingface also results in the following error in the Dataset Preview:\r\n```\r\nThe full dataset viewer is not available (click to read why). Only showing a preview of the rows.\r\n'safe'\r\nError code: UnexpectedError\r\nNeed help to make the dataset viewer work? Make sure to review [how to configure the dataset viewer](link1), and [open a discussion](link2) for direct support.\r\n```\r\nI used jsonl format for the dataset in this case. Same exact dataset worked previously.",
"Same issue here. Even reverting to older version of `datasets` (e.g., `2.19.0`) results in same error:\r\n\r\n```python\r\n>>> datasets.load_dataset('allenai/ai2_arc', 'ARC-Easy')\r\n\r\nFile \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 3048, in <listcomp>\r\n RepoFile(**path_info) if path_info[\"type\"] == \"file\" else RepoFolder(**path_info)\r\n File \"/Users/lucas/miniforge3/envs/oe-eval-internal/lib/python3.10/site-packages/huggingface_hub/hf_api.py\", line 534, in __init__\r\n safe=security[\"safe\"], av_scan=security[\"avScan\"], pickle_import_scan=security[\"pickleImportScan\"]\r\nKeyError: 'safe'\r\n```",
"i just had this issue a few minutes ago, crawled the internet and found nothing. came here to open an issue and found this. it is really frustrating. anyone found a fix?",
"hi, me and my team have the same problem",
"Yeah, this just suddenly appeared without client-side code changes, within the last hours.\r\n\r\nHere's a patch to fix the issue temporarily:\r\n```python\r\nimport huggingface_hub\r\ndef patched_repofolder_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.tree_id = kwargs.pop(\"oid\")\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n\r\n\r\ndef patched_repo_file_init(self, **kwargs):\r\n self.path = kwargs.pop(\"path\")\r\n self.size = kwargs.pop(\"size\")\r\n self.blob_id = kwargs.pop(\"oid\")\r\n lfs = kwargs.pop(\"lfs\", None)\r\n if lfs is not None:\r\n lfs = huggingface_hub.hf_api.BlobLfsInfo(size=lfs[\"size\"], sha256=lfs[\"oid\"], pointer_size=lfs[\"pointerSize\"])\r\n self.lfs = lfs\r\n last_commit = kwargs.pop(\"lastCommit\", None) or kwargs.pop(\"last_commit\", None)\r\n if last_commit is not None:\r\n last_commit = huggingface_hub.hf_api.LastCommitInfo(\r\n oid=last_commit[\"id\"],\r\n title=last_commit[\"title\"],\r\n date=huggingface_hub.utils.parse_datetime(last_commit[\"date\"]),\r\n )\r\n self.last_commit = last_commit\r\n self.security = None\r\n\r\n # backwards compatibility\r\n self.rfilename = self.path\r\n self.lastCommit = self.last_commit\r\n\r\n\r\nhuggingface_hub.hf_api.RepoFile.__init__ = patched_repo_file_init\r\nhuggingface_hub.hf_api.RepoFolder.__init__ = patched_repofolder_init\r\n```\r\n",
"Also discussed here:\r\nhttps://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/1",
"i'm thinking this should be a server issue, i mean no client code was changed on my end. so weird!",
"As far as I can tell, this seems to be happening with **all** datasets that use RepoFolder (probably represents most datasets on huggingface, right?)",
"> Here is a temporary fix for the problem: https://discuss.huggingface.co/t/i-keep-getting-keyerror-safe-when-loading-my-datasets/105669/12?u=mlscientist\r\n\r\nthis doesn't seem to work!",
"In case you are using Colab or similar, remember to restart your session after modyfing the hf_api.py file",
"No need to modify the file directly, just monkey-patch.\r\n\r\nI'm now more sure that the error appears because the backend expects the api code to look like it does on `main`. If `RepoFile` and `RepoFolder` look about like they look on main, they work again.\r\n\r\nIf not fixed like above, a secondary error that will appear is \r\n```\r\n return self.info(path, expand_info=False)[\"type\"] == \"directory\"\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n \"tree_id\": path_info.tree_id,\r\n ^^^^^^^^^^^^^^^^^\r\nAttributeError: 'RepoFolder' object has no attribute 'tree_id'\r\n```\r\n",
"We've reverted the deployment, please let us know if the issue still persists!",
"thanks @muellerzr!"
] | "2024-09-06T16:26:30Z" | "2024-09-06T21:14:14Z" | "2024-09-06T19:09:29Z" | NONE | null | ### Describe the bug
The dataset loading was throwing some safety errors for this popular dataset `wmt14`.
[in]:
```
import datasets
# train_data = datasets.load_dataset("wmt14", "de-en", split="train")
train_data = datasets.load_dataset("wmt14", "de-en", split="train")
val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
```
[out]:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-445f0ecc4817>](https://localhost:8080/#) in <cell line: 4>()
2
3 # train_data = datasets.load_dataset("wmt14", "de-en", split="train")
----> 4 train_data = datasets.load_dataset("wmt14", "de-en", split="train")
5 val_data = datasets.load_dataset("wmt14", "de-en", split="validation[:10%]")
12 frames
[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in __init__(self, **kwargs)
636 if security is not None:
637 security = BlobSecurityInfo(
--> 638 safe=security["safe"], av_scan=security["avScan"], pickle_import_scan=security["pickleImportScan"]
639 )
640 self.security = security
KeyError: 'safe'
```
### Steps to reproduce the bug
See above.
### Expected behavior
Dataset properly loaded.
### Environment info
version: 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/muellerzr",
"id": 7831895,
"login": "muellerzr",
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/muellerzr",
"user_view_type": "public"
} | {
"+1": 26,
"-1": 0,
"confused": 2,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 28,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7141/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7139/comments | https://api.github.com/repos/huggingface/datasets/issues/7139/events | https://github.com/huggingface/datasets/issues/7139 | 2,508,078,858 | I_kwDODunzps6Vfj8K | 7,139 | Use load_dataset to load imagenet-1K But find a empty dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/105094708?v=4",
"events_url": "https://api.github.com/users/fscdc/events{/privacy}",
"followers_url": "https://api.github.com/users/fscdc/followers",
"following_url": "https://api.github.com/users/fscdc/following{/other_user}",
"gists_url": "https://api.github.com/users/fscdc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fscdc",
"id": 105094708,
"login": "fscdc",
"node_id": "U_kgDOBkOeNA",
"organizations_url": "https://api.github.com/users/fscdc/orgs",
"received_events_url": "https://api.github.com/users/fscdc/received_events",
"repos_url": "https://api.github.com/users/fscdc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fscdc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fscdc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fscdc",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Imagenet-1k is a gated dataset which means you’ll have to agree to share your contact info to access it. Have you tried this yet? Once you have, you can sign in with your user token (you can find this in your Hugging Face account settings) when prompted by running.\r\n\r\n```\r\nhuggingface-cli login\r\ntrain_set = load_dataset('imagenet-1k', split='train', use_auth_token=True)\r\n``` ",
"Thanks a lot! It helps me"
] | "2024-09-05T15:12:22Z" | "2024-10-09T04:02:41Z" | null | NONE | null | ### Describe the bug
```python
import os

from datasets import load_dataset
from torchvision.transforms import (CenterCrop, Compose, RandomHorizontalFlip,
                                    RandomResizedCrop, Resize, ToTensor)

def get_dataset(data_path, train_folder="train", val_folder="val"):
traindir = os.path.join(data_path, train_folder)
valdir = os.path.join(data_path, val_folder)
def transform_val_examples(examples):
transform = Compose([
Resize(256),
CenterCrop(224),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
def transform_train_examples(examples):
transform = Compose([
RandomResizedCrop(224),
RandomHorizontalFlip(),
ToTensor(),
])
examples["image"] = [transform(image.convert("RGB")) for image in examples["image"]]
return examples
# @fengsicheng: This way is very slow for big dataset like ImageNet-1K (but can pass the network problem using local dataset)
# train_set = load_dataset("imagefolder", data_dir=traindir, num_proc=4)
# test_set = load_dataset("imagefolder", data_dir=valdir, num_proc=4)
train_set = load_dataset("imagenet-1K", split="train", trust_remote_code=True)
test_set = load_dataset("imagenet-1K", split="test", trust_remote_code=True)
print(train_set["label"])
train_set.set_transform(transform_train_examples)
test_set.set_transform(transform_val_examples)
return train_set, test_set
```
Above is the code, but the output of the print is a list of `None`:
<img width="952" alt="image" src="https://github.com/user-attachments/assets/c4e2fdd8-3b8f-481e-8f86-9bbeb49d79fb">
### Steps to reproduce the bug
1. just ran the code
2. see the print
### Expected behavior
I do not know how to fix this; can anyone provide help? This is urgent for me.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.4.0-190-generic-x86_64-with-glibc2.31
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.6
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7139/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7138/comments | https://api.github.com/repos/huggingface/datasets/issues/7138/events | https://github.com/huggingface/datasets/issues/7138 | 2,507,738,308 | I_kwDODunzps6VeQzE | 7,138 | Cache only changed columns? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"events_url": "https://api.github.com/users/Modexus/events{/privacy}",
"followers_url": "https://api.github.com/users/Modexus/followers",
"following_url": "https://api.github.com/users/Modexus/following{/other_user}",
"gists_url": "https://api.github.com/users/Modexus/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Modexus",
"id": 37351874,
"login": "Modexus",
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"organizations_url": "https://api.github.com/users/Modexus/orgs",
"received_events_url": "https://api.github.com/users/Modexus/received_events",
"repos_url": "https://api.github.com/users/Modexus/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Modexus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Modexus/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Modexus",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"so I guess a workaround to this is to simply remove all columns except the ones to cache and then add them back with `concatenate_datasets(..., axis=1)`.",
"yes this is the right workaround. We're keeping the cache like this to make it easier for people to delete intermediate cache files"
] | "2024-09-05T12:56:47Z" | "2024-09-20T13:27:20Z" | null | CONTRIBUTOR | null | ### Feature request
Cache only the actual changes to the dataset i.e. changed columns.
### Motivation
I realized that caching actually saves the complete dataset again.
This is especially problematic for image datasets if one wants to only change another column e.g. some metadata and then has to save 5 TB again.
### Your contribution
Is this even viable in the current architecture of the package?
I quickly looked into it and it seems it would require significant changes.
I would spend some time looking into this but maybe somebody could help with the feasibility and some plan to implement before spending too much time on it? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7138/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7137/comments | https://api.github.com/repos/huggingface/datasets/issues/7137/events | https://github.com/huggingface/datasets/issues/7137 | 2,506,851,048 | I_kwDODunzps6Va4Lo | 7,137 | [BUG] dataset_info sequence unexpected behavior in README.md YAML | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "https://api.github.com/users/ain-soph/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ain-soph",
"id": 13214530,
"login": "ain-soph",
"node_id": "MDQ6VXNlcjEzMjE0NTMw",
"organizations_url": "https://api.github.com/users/ain-soph/orgs",
"received_events_url": "https://api.github.com/users/ain-soph/received_events",
"repos_url": "https://api.github.com/users/ain-soph/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ain-soph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ain-soph/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ain-soph",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The non-sequence case works well (`dict[str, str]` instead of `list[dict[str, str]]`), which makes me believe it shall be a bug for `sequence` and my proposed behavior shall be expected.\r\n```\r\ndataset_info:\r\n- config_name: default\r\n features:\r\n - name: answers\r\n dtype:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: string\r\n\r\n\r\n# data\r\n{\"answers\": {\"text\": \"ADDRESS\", \"label\": \"abc\"}}\r\n```"
] | "2024-09-05T06:06:06Z" | "2024-09-09T15:55:50Z" | null | NONE | null | ### Describe the bug
When working on the `dataset_info` YAML, I find that my data column with format `list[dict[str, str]]` cannot be encoded correctly.
My data looks like
```
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
```
My `dataset_info` in README.md is:
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
**Error log**:
```
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from list<item: struct<text: string, label: string>> to struct using function cast_struct
```
## Potential Reason
After some analysis, it turns out that my YAML config actually requires `dict[str, list[str]]` instead of `list[dict[str, str]]`. It would work if I changed my data to
```
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
The following two different `dataset_info` specs are actually equivalent.
```
dataset_info:
- config_name: default
features:
- name: answers
dtype:
- name: text
sequence: string
- name: label
sequence: string

dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
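If I understand the features spec correctly, the shape I actually want may be expressible with `list` instead of `sequence` — I have not verified this, so treat it as an assumption:
```
dataset_info:
- config_name: default
  features:
  - name: answers
    list:
    - name: text
      dtype: string
    - name: label
      dtype: string
```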
### Steps to reproduce the bug
```
# README.md
---
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
configs:
- config_name: default
default: true
data_files:
- split: train
path:
- "test.jsonl"
---
# test.jsonl
# expected but not working
{"answers":[{"text": "ADDRESS", "label": "abc"}]}
# unexpected but working
{"answers":{"text": ["ADDRESS"], "label": ["abc", "def"]}}
```
### Expected behavior
```
dataset_info:
- config_name: default
features:
- name: answers
sequence:
- name: text
dtype: string
- name: label
dtype: string
```
It should work on the following data format:
```
{"answers":[{"text":"ADDRESS", "label": "abc"}]}
```
### Environment info
- `datasets` version: 2.21.0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.6.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7137/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7136/comments | https://api.github.com/repos/huggingface/datasets/issues/7136/events | https://github.com/huggingface/datasets/pull/7136 | 2,506,115,857 | PR_kwDODunzps56b9R- | 7,136 | Do not consume unnecessary memory during sharding | {
"avatar_url": "https://avatars.githubusercontent.com/u/12694897?v=4",
"events_url": "https://api.github.com/users/janEbert/events{/privacy}",
"followers_url": "https://api.github.com/users/janEbert/followers",
"following_url": "https://api.github.com/users/janEbert/following{/other_user}",
"gists_url": "https://api.github.com/users/janEbert/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/janEbert",
"id": 12694897,
"login": "janEbert",
"node_id": "MDQ6VXNlcjEyNjk0ODk3",
"organizations_url": "https://api.github.com/users/janEbert/orgs",
"received_events_url": "https://api.github.com/users/janEbert/received_events",
"repos_url": "https://api.github.com/users/janEbert/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/janEbert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janEbert/subscriptions",
"type": "User",
"url": "https://api.github.com/users/janEbert",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-09-04T19:26:06Z" | "2024-09-04T19:28:23Z" | null | NONE | null | When sharding `IterableDataset`s, a temporary list is created and then indexed. With standard `islice` functionality there is no need to build a temporary list of a potentially very large step/world size, so we avoid it.
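A minimal sketch of the idea (simplified; names here are illustrative, not the exact code in the diff):
```python
from itertools import islice

def iter_shard(iterable, rank, world_size):
    # Lazily yield every `world_size`-th element starting at `rank`,
    # without materializing a temporary list first.
    yield from islice(iterable, rank, None, world_size)
```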
```shell
pytest tests/test_distributed.py -k iterable
```
Runs successfully. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7136/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7136",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7136"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7135/comments | https://api.github.com/repos/huggingface/datasets/issues/7135/events | https://github.com/huggingface/datasets/issues/7135 | 2,503,318,328 | I_kwDODunzps6VNZs4 | 7,135 | Bug: Type Mismatch in Dataset Mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/45327989?v=4",
"events_url": "https://api.github.com/users/marko1616/events{/privacy}",
"followers_url": "https://api.github.com/users/marko1616/followers",
"following_url": "https://api.github.com/users/marko1616/following{/other_user}",
"gists_url": "https://api.github.com/users/marko1616/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/marko1616",
"id": 45327989,
"login": "marko1616",
"node_id": "MDQ6VXNlcjQ1MzI3OTg5",
"organizations_url": "https://api.github.com/users/marko1616/orgs",
"received_events_url": "https://api.github.com/users/marko1616/received_events",
"repos_url": "https://api.github.com/users/marko1616/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/marko1616/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marko1616/subscriptions",
"type": "User",
"url": "https://api.github.com/users/marko1616",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"By the way, following code is working. This show the inconsistentcy.\r\n```python\r\nfrom datasets import Dataset\r\n\r\n# Original data\r\ndata = {\r\n 'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],\r\n 'label': [0, 1, 0, 1, 1, 0]\r\n}\r\n\r\n# Creating a Dataset object\r\ndataset = Dataset.from_dict(data)\r\n\r\n# Mapping function to convert label to string\r\ndef add_one(example):\r\n example['label'] += 1\r\n return example\r\n\r\n# Applying the mapping function\r\ndataset = dataset.map(add_one)\r\n\r\n# Iterating over the dataset to show results\r\nfor item in dataset:\r\n print(item)\r\n print(type(item['label']))\r\n```",
"Hello, thanks for submitting an issue.\r\n\r\nFWIU, the issue is that `datasets` tries to limit casting [ref](https://github.com/huggingface/datasets/blob/ca58154bba185c1916ca5eea4e33b27258642044/src/datasets/arrow_writer.py#L526) and as such will try to convert your strings back to int to preserve the `Features`. \r\n\r\nA quick solution would be to use `dataset.cast` or to supply `features` when calling `dataset.map`.\r\n\r\n\r\n```python\r\n# using Dataset.cast\r\ndataset = dataset.cast_column('label', Value('string'))\r\n\r\n# Alternative, supply features\r\ndataset = dataset.map(add_one, features=Features({**dataset.features, 'label': Value('string')}))\r\n```",
"LGTM! Thanks for the review.\r\n\r\nJust to clarify, is this intended behavior, or is it something that might be addressed in a future update?\r\nI'll leave this issue open until it's fixed if this is not the intended behavior."
] | "2024-09-03T16:37:01Z" | "2024-09-05T14:09:05Z" | null | NONE | null | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string.
## Reproduction Code
Below is a Python script that demonstrates the problem:
```python
from datasets import Dataset
# Original data
data = {
'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],
'label': [0, 1, 0, 1, 1, 0]
}
# Creating a Dataset object
dataset = Dataset.from_dict(data)
# Mapping function to convert label to string
def add_one(example):
example['label'] = str(example['label'])
return example
# Applying the mapping function
dataset = dataset.map(add_one)
# Iterating over the dataset to show results
for item in dataset:
print(item)
print(type(item['label']))
```
## Expected Output
After applying the mapping function, the expected output should have the `label` field as strings:
```plaintext
{'text': 'Hello', 'label': '0'}
<class 'str'>
{'text': 'world', 'label': '1'}
<class 'str'>
{'text': 'this', 'label': '0'}
<class 'str'>
{'text': 'is', 'label': '1'}
<class 'str'>
{'text': 'a', 'label': '1'}
<class 'str'>
{'text': 'test', 'label': '0'}
<class 'str'>
```
## Actual Output
The actual output still shows the `label` field values as integers:
```plaintext
{'text': 'Hello', 'label': 0}
<class 'int'>
{'text': 'world', 'label': 1}
<class 'int'>
{'text': 'this', 'label': 0}
<class 'int'>
{'text': 'is', 'label': 1}
<class 'int'>
{'text': 'a', 'label': 1}
<class 'int'>
{'text': 'test', 'label': 0}
<class 'int'>
```
## Why necessary
In the case of image processing, we often need to convert a PIL image to a tensor under the same column name.
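For the image case, a sketch of how the `features` override suggested in the comments would look (the shape and the `pil_to_tensor` function are placeholders):
```python
from datasets import Array3D, Features

# `pil_to_tensor` is a hypothetical mapping function returning float32 CHW arrays
features = Features({**dataset.features, "image": Array3D(shape=(3, 224, 224), dtype="float32")})
dataset = dataset.map(pil_to_tensor, features=features)
```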
Thanks to every dev who reviews this issue. 🤗 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7135/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7134/comments | https://api.github.com/repos/huggingface/datasets/issues/7134/events | https://github.com/huggingface/datasets/issues/7134 | 2,499,484,041 | I_kwDODunzps6U-xmJ | 7,134 | Attempting to return a rank 3 grayscale image from dataset.map results in extreme slowdown | {
"avatar_url": "https://avatars.githubusercontent.com/u/46371349?v=4",
"events_url": "https://api.github.com/users/navidmafi/events{/privacy}",
"followers_url": "https://api.github.com/users/navidmafi/followers",
"following_url": "https://api.github.com/users/navidmafi/following{/other_user}",
"gists_url": "https://api.github.com/users/navidmafi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/navidmafi",
"id": 46371349,
"login": "navidmafi",
"node_id": "MDQ6VXNlcjQ2MzcxMzQ5",
"organizations_url": "https://api.github.com/users/navidmafi/orgs",
"received_events_url": "https://api.github.com/users/navidmafi/received_events",
"repos_url": "https://api.github.com/users/navidmafi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/navidmafi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/navidmafi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/navidmafi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-09-01T13:55:41Z" | "2024-09-02T10:34:53Z" | null | NONE | null | ### Describe the bug
Background: digital images are often represented as a (Height, Width, Channel) tensor. The same holds for Hugging Face datasets that contain images. These images are loaded as Pillow objects, which offer, for example, the `.convert` method.
I can convert an image from a (H,W,3) shape to a grayscale (H,W) image and I have no problems with this. But when attempting to return a (H,W,1) shaped matrix from a map function, it never completes and sometimes even results in an OOM from the OS.
I've used various methods to expand a (H,W) shaped array to a (H,W,1) array. But they all resulted in extremely long map operations consuming a lot of CPU and RAM.
### Steps to reproduce the bug
Below is a minimal example using two methods to get the desired output; neither works.
```py
import tensorflow as tf
import datasets
import numpy as np
ds = datasets.load_dataset("project-sloth/captcha-images")
to_gray_pillow = lambda sample: {'image': np.expand_dims(sample['image'].convert("L"), axis=-1)}
ds_gray = ds.map(to_gray_pillow)
# Alternatively
ds = datasets.load_dataset("project-sloth/captcha-images").with_format("tensorflow")
to_gray_tf = lambda sample: {'image': tf.expand_dims(tf.image.rgb_to_grayscale(sample['image']), axis=-1)}
ds_gray = ds.map(to_gray_tf)
```
### Expected behavior
I expect the map operation to complete and return a new dataset containing grayscale images in a (H,W,1) shape.
### Environment info
datasets 2.21.0
python tested with both 3.11 and 3.12
host os : linux | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7134/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7133/comments | https://api.github.com/repos/huggingface/datasets/issues/7133/events | https://github.com/huggingface/datasets/pull/7133 | 2,496,474,495 | PR_kwDODunzps557zng | 7,133 | remove filecheck to enable symlinks | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7133). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"The CI is failing, looks like it breaks imagefolder loading.\r\n\r\nI just checked fsspec internals and maybe instead we can detect symlink by checking `islink` and `size` to make sure it's a file\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info[\"size\"])\r\n```\r\n",
"hmm actually `size` doesn't seem to filter symlinked directories, we need another way",
"Does fsspec perhaps allow resolving symlinks? Something like https://docs.python.org/3/library/pathlib.html#pathlib.Path.resolve",
"there is `info[\"destination\"]` in case of a symlink, so maybe\r\n\r\n\r\n```python\r\nif info[\"type\"] == \"file\" or (info.get(\"islink\") and info.get(\"destination\") and os.path.isfile(info[\"destination\"]))\r\n```"
] | "2024-08-30T07:36:56Z" | "2024-09-04T12:46:56Z" | null | CONTRIBUTOR | null | Enables streaming from local symlinks #7083
@lhoestq | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7133/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7133.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7133",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7133.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7133"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7132/comments | https://api.github.com/repos/huggingface/datasets/issues/7132/events | https://github.com/huggingface/datasets/pull/7132 | 2,494,510,464 | PR_kwDODunzps551k1C | 7,132 | Fix data file module inference | {
"avatar_url": "https://avatars.githubusercontent.com/u/1714412?v=4",
"events_url": "https://api.github.com/users/HennerM/events{/privacy}",
"followers_url": "https://api.github.com/users/HennerM/followers",
"following_url": "https://api.github.com/users/HennerM/following{/other_user}",
"gists_url": "https://api.github.com/users/HennerM/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HennerM",
"id": 1714412,
"login": "HennerM",
"node_id": "MDQ6VXNlcjE3MTQ0MTI=",
"organizations_url": "https://api.github.com/users/HennerM/orgs",
"received_events_url": "https://api.github.com/users/HennerM/received_events",
"repos_url": "https://api.github.com/users/HennerM/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HennerM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HennerM/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HennerM",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! datasets saved using `save_to_disk` should be loaded with `load_from_disk` ;)",
"It is convienient to just pass in a path to a local dataset or one from the hub and use the same function to load it. Is it not possible to get this fix merged in to allow this? ",
"We can modify `save_to_disk` to write the dataset in a structure supported by the Hub in this case, it's kind of a legacy function anyway"
] | "2024-08-29T13:48:16Z" | "2024-09-02T19:52:13Z" | null | NONE | null | I saved a dataset with two splits to disk with `DatasetDict.save_to_disk`. The train is bigger and ended up in 10 shards, whereas the test split only resulted in 1 split.
Now when trying to load the dataset, an error is raised that not all splits have the same data format:
> ValueError: Couldn't infer the same data file format for all splits. Got {NamedSplit('train'): ('arrow', {}), NamedSplit('test'): ('json', {})}
This is not expected because both splits are saved as arrow files.
I did some debugging and found that this is the case because the list of data_files includes a `state.json` file.
Now this means that for the train split I get 10 ".arrow" files and 1 ".json" file. Since datasets picks based on the most common extension, this is correctly inferred as "arrow". In the test split, there is 1 .arrow and 1 .json file. Given the function description:
> It picks the module based on the most common file extension.
> In case of a draw ".parquet" is the favorite, and then alphabetical order.
This is not quite true though, because in a tie the extensions are actually based on reverse-alphabetical order:
```
for (ext, _), _ in sorted(extensions_counter.items(), key=sort_key, reverse=True):
```
This leads to the module being wrongly inferred as "json", whereas it should be "arrow", matching the train split.
I first thought about adding "state.json" to the list of files excluded from the inference: https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L513. However, from digging into the code, it looks like the right thing to do is to exclude it from the list of `data_files` to start with, because it is more of a metadata file than a data file. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7132/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7132.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7132",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7132.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7132"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7129/comments | https://api.github.com/repos/huggingface/datasets/issues/7129/events | https://github.com/huggingface/datasets/issues/7129 | 2,491,942,650 | I_kwDODunzps6UiAb6 | 7,129 | Inconsistent output in documentation example: `num_classes` not displayed in `ClassLabel` output | {
"avatar_url": "https://avatars.githubusercontent.com/u/17179696?v=4",
"events_url": "https://api.github.com/users/sergiopaniego/events{/privacy}",
"followers_url": "https://api.github.com/users/sergiopaniego/followers",
"following_url": "https://api.github.com/users/sergiopaniego/following{/other_user}",
"gists_url": "https://api.github.com/users/sergiopaniego/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sergiopaniego",
"id": 17179696,
"login": "sergiopaniego",
"node_id": "MDQ6VXNlcjE3MTc5Njk2",
"organizations_url": "https://api.github.com/users/sergiopaniego/orgs",
"received_events_url": "https://api.github.com/users/sergiopaniego/received_events",
"repos_url": "https://api.github.com/users/sergiopaniego/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sergiopaniego/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sergiopaniego/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sergiopaniego",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-28T12:27:48Z" | "2024-08-28T12:27:48Z" | null | NONE | null | In the documentation for [ClassLabel](https://huggingface.co/docs/datasets/v2.21.0/en/package_reference/main_classes#datasets.ClassLabel), there is an example of usage with the following code:
````
from datasets import Features
features = Features({'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'])})
features
````
which is expected to output (as stated in the documentation):
````
{'label': ClassLabel(num_classes=3, names=['bad', 'ok', 'good'], id=None)}
````
but it generates the following
````
{'label': ClassLabel(names=['bad', 'ok', 'good'], id=None)}
````
If my understanding is correct, this happens because although num_classes is used during the init of the object, it is afterward ignored:
https://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/features/features.py#L975
I would like to work on this issue if this is something that's needed 😄
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7129/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7128/comments | https://api.github.com/repos/huggingface/datasets/issues/7128/events | https://github.com/huggingface/datasets/issues/7128 | 2,490,274,775 | I_kwDODunzps6UbpPX | 7,128 | Filter Large Dataset Entry by Entry | {
"avatar_url": "https://avatars.githubusercontent.com/u/36057290?v=4",
"events_url": "https://api.github.com/users/QiyaoWei/events{/privacy}",
"followers_url": "https://api.github.com/users/QiyaoWei/followers",
"following_url": "https://api.github.com/users/QiyaoWei/following{/other_user}",
"gists_url": "https://api.github.com/users/QiyaoWei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/QiyaoWei",
"id": 36057290,
"login": "QiyaoWei",
"node_id": "MDQ6VXNlcjM2MDU3Mjkw",
"organizations_url": "https://api.github.com/users/QiyaoWei/orgs",
"received_events_url": "https://api.github.com/users/QiyaoWei/received_events",
"repos_url": "https://api.github.com/users/QiyaoWei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/QiyaoWei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/QiyaoWei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/QiyaoWei",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! you can do\r\n\r\n```python\r\nfiltered_dataset = dataset.filter(filter_function)\r\n```\r\n\r\non a subset:\r\n\r\n```python\r\nfiltered_subset = dataset.select(range(10_000)).filter(filter_function)\r\n```\r\n",
"Jumping on this as it seems relevant - when I use the `filter` method, it often results in an OOM (or at least unacceptably high memory usage).\r\n\r\nFor example in the [this notebook](https://colab.research.google.com/drive/1N_rWko6jzGji3j_ayDR7ngT5lf4P8at_), we load an object detection dataset from HF and imagine I want to filter such that I only have images which contain a single annotation class. Each row has a JSON field that contains MS-COCO annotations for the image, so we could load that field and filter on it.\r\n\r\nThe test dataset is only about 440 images, probably less than 1GB, but running the following filter crashes the VM (over 12 GB RAM):\r\n\r\n```python\r\nimport json\r\ndef filter_single_class(example, target_class_id):\r\n \"\"\"Filters examples based on whether they contain annotations from a single class.\r\n\r\n Args:\r\n example: A dictionary representing a single example from the dataset.\r\n target_class_id: The target class ID to filter for.\r\n\r\n Returns:\r\n True if the example contains only annotations from the target class, False otherwise.\r\n \"\"\"\r\n if not example['coco_annotations']:\r\n return False\r\n\r\n annotation_category_ids = set([annotation['category_id'] for annotation in json.loads(example['coco_annotations'])])\r\n\r\n return len(annotation_category_ids) == 1 and target_class_id in annotation_category_ids\r\n\r\ntarget_class_id = 1 \r\nfiltered_dataset = dataset['test'].filter(lambda example: filter_single_class(example, target_class_id))\r\n```\r\n\r\n<img width=\"255\" alt=\"image\" src=\"https://github.com/user-attachments/assets/be475f15-5b6b-4df2-b5b5-a1f60ae2b05c\">\r\n\r\nIterating over the dataset works fine:\r\n\r\n```python\r\nfiltered_dataset = []\r\nfor example in dataset['test']:\r\n if filter_single_class(example, target_class_id):\r\n filtered_dataset.append(example)\r\n```\r\n\r\n<img width=\"129\" alt=\"image\" src=\"https://github.com/user-attachments/assets/34fa5612-0394-4c46-9f34-e94650f05d65\">\r\n\r\nIt would be great if there was guidance in the documentation on how to use filters efficiently, or if this is some performance bug that could be addressed. At the very least I would expect a filter operation to use at most 2x the footprint of the database plus some overhead for the lambda (i.e. worst case would be a duplicate copy with all entries retained). Even if the operation is parallelised, each thread/worker should only take a subset of the dataset - so I'm not sure where this ballooning in memory usage comes from.\r\n\r\nFrom some other comments there seems to be a workaround with `writer_batch_size` or caching to file, but in the [docs](https://huggingface.co/docs/datasets/v3.0.0/en/package_reference/main_classes#datasets.Dataset.filter) at least, `keep_in_memory` defaults to `False`.",
"You can try passing input_columns=[\"coco_annotations\"] to only load this column instead of all the columns. In that case your function should take coco_annotations as input instead of example",
"If your filter_function is large and computationally intensive, consider using multi-processing or multi-threading with concurrent.futures to filter the dataset. This approach allows you to process multiple tables concurrently, reducing overall processing time, especially for CPU-bound tasks. Use ThreadPoolExecutor for I/O-bound operations and ProcessPoolExecutor for CPU-bound operations.\r\n"
] | "2024-08-27T20:31:09Z" | "2024-10-07T23:37:44Z" | null | NONE | null | ### Feature request
I am not sure if this is a new feature, but I wanted to post this problem here, and hear if others have ways of optimizing and speeding up this process.
Let's say I have a really large dataset that I cannot load into memory. At this point, I am only aware of `streaming=True` to load the dataset. Now, the dataset consists of many tables. Ideally, I would want to have some simple filtering criterion, such that I only see the "good" tables. Here is an example of what the code might look like:
```
from itertools import islice

from datasets import load_dataset

dataset = load_dataset(
    "really-large-dataset",
    streaming=True
)

# And let's say we process the dataset bit by bit because we want intermediate results
dataset = islice(dataset, 10000)

# Define a function to filter the data
def filter_function(table):
    if some_condition:  # placeholder for the actual quality criterion
        return True
    else:
        return False

# Use the filter function on your dataset
filtered_dataset = (ex for ex in dataset if filter_function(ex))
```
And then I work on the processed dataset, which would be orders of magnitude faster than working on the original. I would love to hear whether the problem setup + solution makes sense to people, and whether anyone has suggestions!
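For reference, `IterableDataset` also exposes lazy `filter` and `take` operations that express the same pipeline without a manual generator expression. A sketch mirroring the snippet above (`split="train"` is an assumed split name for the placeholder dataset):
```python
from datasets import load_dataset

dataset = load_dataset("really-large-dataset", split="train", streaming=True)
# Both operations are lazy: nothing is fetched or filtered until iteration
subset = dataset.take(10000)
filtered_dataset = subset.filter(filter_function)
for example in filtered_dataset:
    ...  # work on the filtered stream
```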
### Motivation
See description above
### Your contribution
Happy to make PR if this is a new feature | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7128/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7127/comments | https://api.github.com/repos/huggingface/datasets/issues/7127/events | https://github.com/huggingface/datasets/issues/7127 | 2,486,524,966 | I_kwDODunzps6UNVwm | 7,127 | Caching shuffles by np.random.Generator results in unintiutive behavior | {
"avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4",
"events_url": "https://api.github.com/users/el-hult/events{/privacy}",
"followers_url": "https://api.github.com/users/el-hult/followers",
"following_url": "https://api.github.com/users/el-hult/following{/other_user}",
"gists_url": "https://api.github.com/users/el-hult/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/el-hult",
"id": 11832922,
"login": "el-hult",
"node_id": "MDQ6VXNlcjExODMyOTIy",
"organizations_url": "https://api.github.com/users/el-hult/orgs",
"received_events_url": "https://api.github.com/users/el-hult/received_events",
"repos_url": "https://api.github.com/users/el-hult/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/el-hult/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/el-hult/subscriptions",
"type": "User",
"url": "https://api.github.com/users/el-hult",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I first thought this was a mistake of mine, and also posted on stack overflow. https://stackoverflow.com/questions/78913797/iterating-a-huggingface-dataset-from-disk-using-generator-seems-broken-how-to-d \r\n\r\nIt seems to me the issue is the caching step in \r\n\r\nhttps://github.com/huggingface/datasets/blob/be5cff059a2a5b89d7a97bc04739c4919ab8089f/src/datasets/arrow_dataset.py#L4306-L4316\r\n\r\nbecause the shuffle happens after checking the cache, the rng state won't advance if the cache is used. This is VERY confusing. Also not documented.\r\n\r\nMy proposal is that you remove the API for using a Generator, and only keep the seed-based API since that is functional and cache-compatible."
] | "2024-08-26T10:29:48Z" | "2024-08-26T10:35:57Z" | null | NONE | null | ### Describe the bug
Create a dataset. Save it to disk. Load it from disk. Shuffle, using a `np.random.Generator`. Iterate. Shuffle again. Iterate. The results differ, since the supplied `np.random.Generator` has advanced between the shuffles.
Load the dataset from disk again. Shuffle and iterate. See the same result as before. Shuffle and iterate again, and this time it does not reshuffle as it did in the previous run.
The motivation is I have a deep learning loop with
```
for epoch in range(10):
    for batch in dataset.shuffle(generator=generator).iter(batch_size=32):
        ...  # do stuff
```
where I want a new shuffling at every epoch. Instead I get the same shuffling.
### Steps to reproduce the bug
Run the code below two times.
```python
import datasets
import numpy as np

generator = np.random.default_rng(0)
ds = datasets.Dataset.from_dict(mapping={"X": range(1000)})
ds.save_to_disk("tmp")

print("First loop: ", end="")
for _ in range(10):
    print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")

print("Second loop: ", end="")
ds = datasets.Dataset.load_from_disk("tmp")
for _ in range(10):
    print(next(ds.shuffle(generator=generator).iter(batch_size=1))['X'], end=", ")
print("")
```
The output is:
```
$ python main.py
Saving the dataset (1/1 shards): 100%|███████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 495019.95 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 847, 944, 795, 483, 842, 717, 865, 231, 840,
$ python main.py
Saving the dataset (1/1 shards): 100%|████████████████████████████████████████████████████████████████████████| 1000/1000 [00:00<00:00, 22243.40 examples/s]
First loop: 459, 739, 72, 943, 241, 181, 845, 830, 896, 334,
Second loop: 741, 741, 741, 741, 741, 741, 741, 741, 741, 741,
```
The second loop, on the second run, only spits out "741, 741, 741, ...", which is *not* the desired output.
### Expected behavior
I want the dataset to shuffle at every epoch since I provide it with a generator for shuffling.
### Environment info
Datasets version 2.21.0
Ubuntu linux. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7127/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7126/comments | https://api.github.com/repos/huggingface/datasets/issues/7126/events | https://github.com/huggingface/datasets/pull/7126 | 2,485,939,495 | PR_kwDODunzps55Y-Ws | 7,126 | Disable implicit token in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7126). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005232 / 0.011353 (-0.006121) | 0.003428 / 0.011008 (-0.007580) | 0.062673 / 0.038508 (0.024164) | 0.030111 / 0.023109 (0.007002) | 0.238017 / 0.275898 (-0.037881) | 0.262655 / 0.323480 (-0.060825) | 0.003015 / 0.007986 (-0.004971) | 0.002664 / 0.004328 (-0.001665) | 0.050010 / 0.004250 (0.045759) | 0.045620 / 0.037052 (0.008567) | 0.251800 / 0.258489 (-0.006689) | 0.278829 / 0.293841 (-0.015011) | 0.029838 / 0.128546 (-0.098709) | 0.011703 / 0.075646 (-0.063943) | 0.204503 / 0.419271 (-0.214768) | 0.036173 / 0.043533 (-0.007359) | 0.242850 / 0.255139 (-0.012289) | 0.263811 / 0.283200 (-0.019389) | 0.019027 / 0.141683 (-0.122656) | 1.168028 / 1.452155 (-0.284126) | 1.208975 / 1.492716 (-0.283742) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091309 / 0.018006 (0.073303) | 0.299583 / 0.000490 (0.299093) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018451 / 0.037411 (-0.018960) | 0.062516 / 0.014526 (0.047991) | 0.073983 / 0.176557 (-0.102573) | 0.120952 / 0.737135 (-0.616184) | 0.075275 / 0.296338 (-0.221063) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.286870 / 0.215209 (0.071661) | 2.810498 / 2.077655 (0.732843) | 1.490028 / 1.504120 (-0.014092) | 1.362249 / 1.541195 (-0.178946) | 1.368939 / 
1.468490 (-0.099551) | 0.736643 / 4.584777 (-3.848134) | 2.414237 / 3.745712 (-1.331475) | 2.898911 / 5.269862 (-2.370951) | 1.840630 / 4.565676 (-2.725047) | 0.077872 / 0.424275 (-0.346403) | 0.005087 / 0.007607 (-0.002520) | 0.337054 / 0.226044 (0.111009) | 3.390734 / 2.268929 (1.121806) | 1.844174 / 55.444624 (-53.600451) | 1.532741 / 6.876477 (-5.343736) | 1.551650 / 2.142072 (-0.590422) | 0.778642 / 4.805227 (-4.026585) | 0.131899 / 6.500664 (-6.368765) | 0.041801 / 0.075469 (-0.033668) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.958362 / 1.841788 (-0.883425) | 11.323330 / 8.074308 (3.249022) | 9.396199 / 10.191392 (-0.795193) | 0.131154 / 0.680424 (-0.549270) | 0.014705 / 0.534201 (-0.519496) | 0.302424 / 0.579283 (-0.276859) | 0.261870 / 0.434364 (-0.172494) | 0.340788 / 0.540337 (-0.199550) | 0.433360 / 1.386936 (-0.953576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005571 / 0.011353 (-0.005782) | 0.003388 / 0.011008 (-0.007621) | 0.050366 / 0.038508 (0.011858) | 0.032633 / 0.023109 (0.009524) | 0.261847 / 0.275898 (-0.014051) | 0.292197 / 0.323480 (-0.031283) | 0.005070 / 0.007986 (-0.002916) | 0.002753 / 0.004328 (-0.001575) | 0.048613 / 0.004250 (0.044363) | 0.040272 / 0.037052 (0.003219) | 0.275441 / 0.258489 (0.016952) | 0.309175 / 0.293841 (0.015334) | 0.032403 / 0.128546 (-0.096143) | 0.011734 / 0.075646 (-0.063912) | 0.059532 / 0.419271 (-0.359740) | 0.033886 / 0.043533 (-0.009647) | 0.263453 / 0.255139 (0.008314) | 0.281997 / 0.283200 (-0.001203) | 0.018522 / 0.141683 (-0.123161) | 1.150364 / 1.452155 (-0.301791) | 1.204090 / 1.492716 (-0.288627) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093129 / 0.018006 (0.075123) | 0.303691 / 0.000490 (0.303201) | 0.000231 / 0.000200 (0.000031) | 0.000062 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022084 / 0.037411 (-0.015327) | 0.076354 / 0.014526 (0.061828) | 0.087710 / 0.176557 (-0.088847) | 0.128907 / 0.737135 (-0.608228) | 0.088603 / 0.296338 (-0.207735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.301161 / 0.215209 (0.085952) | 2.954780 / 2.077655 (0.877125) | 1.601366 / 1.504120 (0.097246) | 1.477225 / 1.541195 (-0.063970) | 1.482355 / 1.468490 (0.013865) | 0.722461 / 4.584777 (-3.862315) | 0.981439 / 3.745712 (-2.764273) | 2.927006 / 5.269862 (-2.342856) | 1.884444 / 4.565676 (-2.681233) | 0.079044 / 0.424275 (-0.345231) | 0.005530 / 0.007607 (-0.002077) | 0.347082 / 0.226044 (0.121037) | 3.491984 / 2.268929 (1.223056) | 1.944317 / 55.444624 (-53.500307) | 1.645792 / 6.876477 (-5.230685) | 1.649506 / 2.142072 (-0.492567) | 0.800822 / 4.805227 (-4.004405) | 0.133936 / 6.500664 (-6.366729) | 0.041198 / 0.075469 (-0.034271) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029764 / 1.841788 (-0.812024) | 11.928840 / 8.074308 (3.854532) | 10.021390 / 10.191392 (-0.170002) | 0.141608 / 0.680424 (-0.538816) | 0.014921 / 0.534201 (-0.519280) | 0.302050 / 0.579283 (-0.277233) | 0.124151 / 0.434364 (-0.310213) | 0.347143 / 0.540337 (-0.193195) | 0.467649 / 1.386936 (-0.919287) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e4c87a6bf57b3aa094c28895c5b89b91b3509c58 \"CML watermark\")\n"
] | "2024-08-26T05:29:46Z" | "2024-08-26T06:05:01Z" | "2024-08-26T05:59:15Z" | MEMBER | null | Disable implicit token in CI.
This PR allows running CI tests locally without implicitly using the local user HF token. For example, run locally the tests in:
- #7124 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7126/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7126.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7126",
"merged_at": "2024-08-26T05:59:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7126.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7126"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7125/comments | https://api.github.com/repos/huggingface/datasets/issues/7125/events | https://github.com/huggingface/datasets/pull/7125 | 2,485,912,246 | PR_kwDODunzps55Y4TM | 7,125 | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7125). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005741 / 0.011353 (-0.005612) | 0.004011 / 0.011008 (-0.006998) | 0.063962 / 0.038508 (0.025454) | 0.031512 / 0.023109 (0.008403) | 0.242249 / 0.275898 (-0.033649) | 0.269601 / 0.323480 (-0.053879) | 0.004502 / 0.007986 (-0.003483) | 0.002835 / 0.004328 (-0.001494) | 0.049878 / 0.004250 (0.045628) | 0.048012 / 0.037052 (0.010959) | 0.250454 / 0.258489 (-0.008035) | 0.283266 / 0.293841 (-0.010575) | 0.030752 / 0.128546 (-0.097794) | 0.012655 / 0.075646 (-0.062991) | 0.211043 / 0.419271 (-0.208229) | 0.037165 / 0.043533 (-0.006367) | 0.246815 / 0.255139 (-0.008324) | 0.264306 / 0.283200 (-0.018893) | 0.018343 / 0.141683 (-0.123340) | 1.140452 / 1.452155 (-0.311702) | 1.214849 / 1.492716 (-0.277867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098048 / 0.018006 (0.080042) | 0.292201 / 0.000490 (0.291712) | 0.000217 / 0.000200 (0.000017) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018732 / 0.037411 (-0.018679) | 0.062887 / 0.014526 (0.048361) | 0.074353 / 0.176557 (-0.102204) | 0.120794 / 0.737135 (-0.616341) | 0.077066 / 0.296338 (-0.219272) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.276335 / 0.215209 (0.061126) | 2.722905 / 2.077655 (0.645250) | 1.423080 / 1.504120 (-0.081040) | 1.305443 / 1.541195 (-0.235752) | 1.342142 / 
1.468490 (-0.126348) | 0.741899 / 4.584777 (-3.842878) | 2.407567 / 3.745712 (-1.338145) | 3.070263 / 5.269862 (-2.199599) | 1.935732 / 4.565676 (-2.629944) | 0.081371 / 0.424275 (-0.342904) | 0.005207 / 0.007607 (-0.002401) | 0.328988 / 0.226044 (0.102943) | 3.240771 / 2.268929 (0.971842) | 1.801028 / 55.444624 (-53.643597) | 1.490593 / 6.876477 (-5.385884) | 1.521317 / 2.142072 (-0.620756) | 0.794051 / 4.805227 (-4.011176) | 0.136398 / 6.500664 (-6.364266) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.974186 / 1.841788 (-0.867602) | 12.280011 / 8.074308 (4.205703) | 9.453389 / 10.191392 (-0.738003) | 0.132627 / 0.680424 (-0.547797) | 0.014608 / 0.534201 (-0.519593) | 0.309298 / 0.579283 (-0.269985) | 0.275911 / 0.434364 (-0.158452) | 0.348261 / 0.540337 (-0.192077) | 0.439031 / 1.386936 (-0.947905) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006248 / 0.011353 (-0.005105) | 0.004369 / 0.011008 (-0.006639) | 0.050588 / 0.038508 (0.012080) | 0.032880 / 0.023109 (0.009771) | 0.268979 / 0.275898 (-0.006919) | 0.294714 / 0.323480 (-0.028766) | 0.004518 / 0.007986 (-0.003467) | 0.002995 / 0.004328 (-0.001333) | 0.048776 / 0.004250 (0.044525) | 0.041696 / 0.037052 (0.004644) | 0.283413 / 0.258489 (0.024924) | 0.322137 / 0.293841 (0.028296) | 0.032809 / 0.128546 (-0.095737) | 0.012559 / 0.075646 (-0.063087) | 0.060456 / 0.419271 (-0.358815) | 0.034564 / 0.043533 (-0.008968) | 0.267263 / 0.255139 (0.012124) | 0.292633 / 0.283200 (0.009434) | 0.019011 / 0.141683 (-0.122672) | 1.199820 / 1.452155 (-0.252335) | 1.251829 / 1.492716 (-0.240887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097615 / 0.018006 (0.079609) | 0.313764 / 0.000490 (0.313274) | 0.000220 / 0.000200 (0.000020) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024365 / 0.037411 (-0.013046) | 0.089301 / 0.014526 (0.074775) | 0.092964 / 0.176557 (-0.083592) | 0.131724 / 0.737135 (-0.605412) | 0.094792 / 0.296338 (-0.201546) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305119 / 0.215209 (0.089910) | 2.932192 / 2.077655 (0.854537) | 1.610573 / 1.504120 (0.106453) | 1.487502 / 1.541195 (-0.053693) | 1.533300 / 1.468490 (0.064810) | 0.717223 / 4.584777 (-3.867554) | 0.964402 / 3.745712 (-2.781310) | 3.111398 / 5.269862 (-2.158464) | 1.957942 / 4.565676 (-2.607734) | 0.079160 / 0.424275 (-0.345116) | 0.005639 / 0.007607 (-0.001968) | 0.358971 / 0.226044 (0.132927) | 3.564401 / 2.268929 (1.295472) | 2.043079 / 55.444624 (-53.401546) | 1.742681 / 6.876477 (-5.133795) | 1.784758 / 2.142072 (-0.357314) | 0.798508 / 4.805227 (-4.006719) | 0.133905 / 6.500664 (-6.366759) | 0.043008 / 0.075469 (-0.032461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.031715 / 1.841788 (-0.810073) | 13.374312 / 8.074308 (5.300004) | 10.789098 / 10.191392 (0.597706) | 0.133663 / 0.680424 (-0.546761) | 0.016692 / 0.534201 (-0.517509) | 0.304716 / 0.579283 (-0.274567) | 0.129074 / 0.434364 (-0.305290) | 0.346440 / 0.540337 (-0.193897) | 0.464593 / 1.386936 (-0.922343) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#880a52cea337032d39e90e6f0dcc55198a75a285 \"CML watermark\")\n"
] | "2024-08-26T05:09:35Z" | "2024-08-26T05:33:15Z" | "2024-08-26T05:27:09Z" | MEMBER | null | Fix wrong SHA in CI tests of HubDatasetModuleFactoryWithParquetExport. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7125/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7125",
"merged_at": "2024-08-26T05:27:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7125"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7124/comments | https://api.github.com/repos/huggingface/datasets/issues/7124/events | https://github.com/huggingface/datasets/pull/7124 | 2,485,890,442 | PR_kwDODunzps55YzWr | 7,124 | Test get_dataset_config_info with non-existing/gated/private dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7124). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005339 / 0.011353 (-0.006014) | 0.003640 / 0.011008 (-0.007368) | 0.064012 / 0.038508 (0.025504) | 0.030424 / 0.023109 (0.007314) | 0.239966 / 0.275898 (-0.035932) | 0.264361 / 0.323480 (-0.059119) | 0.004247 / 0.007986 (-0.003739) | 0.002847 / 0.004328 (-0.001481) | 0.049640 / 0.004250 (0.045390) | 0.044903 / 0.037052 (0.007851) | 0.250174 / 0.258489 (-0.008315) | 0.281423 / 0.293841 (-0.012418) | 0.029419 / 0.128546 (-0.099127) | 0.012221 / 0.075646 (-0.063426) | 0.205907 / 0.419271 (-0.213365) | 0.036654 / 0.043533 (-0.006878) | 0.245805 / 0.255139 (-0.009334) | 0.265029 / 0.283200 (-0.018170) | 0.018081 / 0.141683 (-0.123602) | 1.113831 / 1.452155 (-0.338324) | 1.156443 / 1.492716 (-0.336274) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134389 / 0.018006 (0.116383) | 0.300637 / 0.000490 (0.300147) | 0.000240 / 0.000200 (0.000040) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019111 / 0.037411 (-0.018300) | 0.062585 / 0.014526 (0.048059) | 0.075909 / 0.176557 (-0.100647) | 0.121382 / 0.737135 (-0.615753) | 0.074980 / 0.296338 (-0.221359) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.285062 / 0.215209 (0.069853) | 2.850130 / 2.077655 (0.772476) | 1.519877 / 1.504120 (0.015757) | 1.388711 / 1.541195 (-0.152484) | 1.397284 / 
1.468490 (-0.071206) | 0.723100 / 4.584777 (-3.861677) | 2.393184 / 3.745712 (-1.352529) | 2.908418 / 5.269862 (-2.361443) | 1.871024 / 4.565676 (-2.694653) | 0.078230 / 0.424275 (-0.346045) | 0.005158 / 0.007607 (-0.002449) | 0.345622 / 0.226044 (0.119577) | 3.357611 / 2.268929 (1.088683) | 1.844492 / 55.444624 (-53.600132) | 1.584237 / 6.876477 (-5.292240) | 1.577158 / 2.142072 (-0.564915) | 0.789702 / 4.805227 (-4.015525) | 0.132045 / 6.500664 (-6.368619) | 0.042304 / 0.075469 (-0.033165) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.977166 / 1.841788 (-0.864622) | 11.306118 / 8.074308 (3.231810) | 9.490778 / 10.191392 (-0.700614) | 0.143536 / 0.680424 (-0.536888) | 0.015304 / 0.534201 (-0.518897) | 0.313892 / 0.579283 (-0.265391) | 0.267009 / 0.434364 (-0.167355) | 0.345560 / 0.540337 (-0.194778) | 0.435649 / 1.386936 (-0.951287) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005700 / 0.011353 (-0.005653) | 0.003490 / 0.011008 (-0.007519) | 0.049990 / 0.038508 (0.011482) | 0.032070 / 0.023109 (0.008961) | 0.272622 / 0.275898 (-0.003276) | 0.298265 / 0.323480 (-0.025215) | 0.004379 / 0.007986 (-0.003606) | 0.002786 / 0.004328 (-0.001543) | 0.048271 / 0.004250 (0.044020) | 0.040102 / 0.037052 (0.003050) | 0.286433 / 0.258489 (0.027944) | 0.319306 / 0.293841 (0.025465) | 0.032872 / 0.128546 (-0.095675) | 0.011870 / 0.075646 (-0.063776) | 0.059886 / 0.419271 (-0.359385) | 0.034281 / 0.043533 (-0.009252) | 0.275588 / 0.255139 (0.020450) | 0.292951 / 0.283200 (0.009751) | 0.018095 / 0.141683 (-0.123588) | 1.130870 / 1.452155 (-0.321285) | 1.190761 / 1.492716 (-0.301955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093346 / 0.018006 (0.075340) | 0.307506 / 0.000490 (0.307016) | 0.000214 / 0.000200 (0.000014) | 0.000048 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022873 / 0.037411 (-0.014538) | 0.077070 / 0.014526 (0.062544) | 0.089152 / 0.176557 (-0.087404) | 0.130186 / 0.737135 (-0.606949) | 0.090244 / 0.296338 (-0.206095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297950 / 0.215209 (0.082740) | 2.942360 / 2.077655 (0.864705) | 1.614324 / 1.504120 (0.110204) | 1.495795 / 1.541195 (-0.045400) | 1.506155 / 1.468490 (0.037665) | 0.730307 / 4.584777 (-3.854470) | 0.966312 / 3.745712 (-2.779400) | 2.928955 / 5.269862 (-2.340906) | 1.940049 / 4.565676 (-2.625627) | 0.079589 / 0.424275 (-0.344686) | 0.006004 / 0.007607 (-0.001604) | 0.356630 / 0.226044 (0.130585) | 3.516652 / 2.268929 (1.247724) | 1.963196 / 55.444624 (-53.481429) | 1.674489 / 6.876477 (-5.201988) | 1.677558 / 2.142072 (-0.464514) | 0.806447 / 4.805227 (-3.998780) | 0.133819 / 6.500664 (-6.366845) | 0.040762 / 0.075469 (-0.034707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038495 / 1.841788 (-0.803293) | 11.829186 / 8.074308 (3.754878) | 10.214158 / 10.191392 (0.022766) | 0.140590 / 0.680424 (-0.539834) | 0.014729 / 0.534201 (-0.519472) | 0.300557 / 0.579283 (-0.278726) | 0.122772 / 0.434364 (-0.311592) | 0.344618 / 0.540337 (-0.195720) | 0.460064 / 1.386936 (-0.926872) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#be5cff059a2a5b89d7a97bc04739c4919ab8089f \"CML watermark\")\n"
] | "2024-08-26T04:53:59Z" | "2024-08-26T06:15:33Z" | "2024-08-26T06:09:42Z" | MEMBER | null | Test get_dataset_config_info with non-existing/gated/private dataset.
Related to:
- #7109
See also:
- https://github.com/huggingface/dataset-viewer/pull/3037: https://github.com/huggingface/dataset-viewer/pull/3037/commits/bb1a7e00c53c242088597cab6572e4fd57797ecb | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7124/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7124/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7124.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7124",
"merged_at": "2024-08-26T06:09:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7124.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7124"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7123/comments | https://api.github.com/repos/huggingface/datasets/issues/7123/events | https://github.com/huggingface/datasets/issues/7123 | 2,484,003,937 | I_kwDODunzps6UDuRh | 7,123 | Make dataset viewer more flexible in displaying metadata alongside images | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "https://api.github.com/users/egrace479/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/egrace479",
"id": 38985481,
"login": "egrace479",
"node_id": "MDQ6VXNlcjM4OTg1NDgx",
"organizations_url": "https://api.github.com/users/egrace479/orgs",
"received_events_url": "https://api.github.com/users/egrace479/received_events",
"repos_url": "https://api.github.com/users/egrace479/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/egrace479/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/egrace479/subscriptions",
"type": "User",
"url": "https://api.github.com/users/egrace479",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Note that you can already have one directory per subset just for the metadata, e.g.\r\n\r\n```\r\nconfigs:\r\n - config_name: subset0\r\n data_files:\r\n - subset0/metadata.csv\r\n - images/*.jpg\r\n - config_name: subset1\r\n data_files:\r\n - subset1/metadata.csv\r\n - images/*.jpg\r\n```\r\n\r\nEDIT: ah maybe it doesn't work because you'd have to provide relative paths from the metadata files to the images",
"Yes, that's part of the issue. Also, `metadata.csv` is a very ambiguous name and we generally try to avoid using the same name for different files within a dataset, as this can quickly lead to confusion.",
"I think supporting `**/*-metadata.csv` or `**/*_metadata.csv` makes sense to me. If it sounds good to you feel free to open a PR to update the patterns here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d4422cc24a56dc7132ddc3fd6b285c5edbd60b8c/src/datasets/data_files.py#L104-L115"
] | "2024-08-23T22:56:01Z" | "2024-10-17T09:13:47Z" | null | NONE | null | ### Feature request
To display images with their associated metadata in the dataset viewer, a `metadata.csv` file is required. For a dataset with multiple subsets, this forces each CSV to live in the same folder as its images, since every one of them must be named `metadata.csv`. The request is that this be made more flexible for datasets with multiple subsets, to avoid placing a `metadata.csv` inside each image directory, where it is not as easily accessed.
### Motivation
When creating datasets with multiple subsets, I can't get the images to display alongside their associated metadata (usually one or the other shows up). Since this requires a file specifically named `metadata.csv`, I then have to place that file within the image directory, which makes it much harder to access. Even then, it doesn't necessarily display the images alongside their metadata correctly (see, for instance, [this discussion](https://huggingface.co/datasets/imageomics/2018-NEON-beetles/discussions/8)).
It was suggested that I bring this discussion to GitHub in the context of another dataset struggling with a similar issue ([discussion](https://huggingface.co/datasets/imageomics/fish-vista/discussions/4)). That dataset is a mix of subsets, some of which only reference image URLs while others have the images uploaded. The subsets with uploaded images are not displaying them, and renaming the metadata file to just `metadata.csv` would diminish the clarity of the dataset's construction (and I'm not entirely convinced it would solve the issue).
### Your contribution
I can make a suggestion for one approach to address the issue:
For instance, even just allowing the filename to end in `_metadata.csv` or `-metadata.csv` would be very helpful, giving more flexibility in dataset structure without impacting clarity. The backend functionality that looks for `metadata.csv` could presumably be adapted to match such a suffix (perhaps also checking that the file has a `file_name` column?).
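For concreteness, a sketch of what a relaxed pattern list might look like (purely illustrative; the real defaults are defined in `src/datasets/data_files.py` and also cover `metadata.jsonl`):
```python
# Hypothetical relaxation of the default metadata filename patterns
METADATA_PATTERNS = [
    "metadata.csv",
    "**/metadata.csv",
    "*-metadata.csv",
    "**/*-metadata.csv",
    "*_metadata.csv",
    "**/*_metadata.csv",
]
```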
Presumably, requiring the `configs` in a setup like on [this dataset](https://huggingface.co/datasets/imageomics/rare-species/blob/main/README.md) could also help in figuring out how it should work?
```
configs:
- config_name: <image subset>
data_files:
- <image-metadata>.csv
- <path/to/images>/*.jpg
```
I'd also be happy to look at whatever solution is decided upon and contribute to the ideation.
Thanks for your time and consideration! The dataset viewer really is fabulous when it works :) | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7123/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7123/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7122/comments | https://api.github.com/repos/huggingface/datasets/issues/7122/events | https://github.com/huggingface/datasets/issues/7122 | 2,482,491,258 | I_kwDODunzps6T9896 | 7,122 | [interleave_dataset] sample batches from a single source at a time | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2024-08-23T07:21:15Z" | "2024-08-23T07:21:15Z" | null | NONE | null | ### Feature request
`interleave_datasets` and [`RandomlyCyclingMultiSourcesExamplesIterable`](https://github.com/huggingface/datasets/blob/3813ce846e52824b38e53895810682f0a496a2e3/src/datasets/iterable_dataset.py#L816) let us sample individual examples from different sources. But can we also sample at the batch level in a similar manner, so that each batch contains data from a single source only?
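For concreteness, a rough framework-agnostic sketch of the requested behavior over plain Python iterables (names and the stopping strategy are illustrative, not the `datasets` internals):
```python
import random
from itertools import islice


def cycle_multi_source_batches(sources, batch_size, probabilities=None, seed=42):
    """Yield batches where each batch is drawn from a single source.

    `sources` is a list of iterables; one source is sampled per batch.
    A trailing partial batch from an exhausted source is dropped.
    """
    rng = random.Random(seed)
    iterators = [iter(source) for source in sources]
    weights = list(probabilities) if probabilities is not None else None
    while iterators:
        if weights is not None:
            i = rng.choices(range(len(iterators)), weights=weights)[0]
        else:
            i = rng.randrange(len(iterators))
        batch = list(islice(iterators[i], batch_size))
        if len(batch) < batch_size:  # source exhausted
            del iterators[i]
            if weights is not None:
                del weights[i]
            continue
        yield batch
```
A `datasets` implementation would presumably live next to `RandomlyCyclingMultiSourcesExamplesIterable` and reuse its sharding and state-dict machinery.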
### Motivation
Some recent research [[1](https://blog.salesforceairesearch.com/sfr-embedded-mistral/), [2](https://arxiv.org/pdf/2310.07554)] shows that source-homogeneous batching can be helpful for contrastive learning. Can we add a function called `RandomlyCyclingMultiSourcesBatchesIterable` to support this functionality?
### Your contribution
I can contribute a PR. But I wonder what the best way is to test its correctness and robustness. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7122/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7121/comments | https://api.github.com/repos/huggingface/datasets/issues/7121/events | https://github.com/huggingface/datasets/pull/7121 | 2,480,978,483 | PR_kwDODunzps55Iukl | 7,121 | Fix typed examples iterable state dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7121). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005273 / 0.011353 (-0.006079) | 0.003789 / 0.011008 (-0.007219) | 0.062811 / 0.038508 (0.024303) | 0.031055 / 0.023109 (0.007946) | 0.238663 / 0.275898 (-0.037235) | 0.269706 / 0.323480 (-0.053774) | 0.004105 / 0.007986 (-0.003881) | 0.002781 / 0.004328 (-0.001547) | 0.048800 / 0.004250 (0.044549) | 0.045759 / 0.037052 (0.008707) | 0.260467 / 0.258489 (0.001978) | 0.288800 / 0.293841 (-0.005041) | 0.029341 / 0.128546 (-0.099205) | 0.012413 / 0.075646 (-0.063233) | 0.203493 / 0.419271 (-0.215778) | 0.037270 / 0.043533 (-0.006263) | 0.246130 / 0.255139 (-0.009009) | 0.269046 / 0.283200 (-0.014154) | 0.017788 / 0.141683 (-0.123895) | 1.175537 / 1.452155 (-0.276617) | 1.197909 / 1.492716 (-0.294808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098258 / 0.018006 (0.080251) | 0.305283 / 0.000490 (0.304794) | 0.000216 / 0.000200 (0.000016) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019066 / 0.037411 (-0.018345) | 0.062723 / 0.014526 (0.048197) | 0.075827 / 0.176557 (-0.100730) | 0.121371 / 0.737135 (-0.615764) | 0.075167 / 0.296338 (-0.221171) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296650 / 0.215209 (0.081441) | 2.910593 / 2.077655 (0.832939) | 1.510798 / 1.504120 (0.006678) | 1.375461 / 1.541195 (-0.165733) | 1.386423 / 
1.468490 (-0.082067) | 0.743818 / 4.584777 (-3.840959) | 2.437848 / 3.745712 (-1.307864) | 2.943661 / 5.269862 (-2.326201) | 1.888977 / 4.565676 (-2.676699) | 0.080126 / 0.424275 (-0.344149) | 0.005168 / 0.007607 (-0.002439) | 0.348699 / 0.226044 (0.122654) | 3.477686 / 2.268929 (1.208758) | 1.901282 / 55.444624 (-53.543343) | 1.574847 / 6.876477 (-5.301629) | 1.594359 / 2.142072 (-0.547714) | 0.793415 / 4.805227 (-4.011812) | 0.133982 / 6.500664 (-6.366682) | 0.042435 / 0.075469 (-0.033034) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963057 / 1.841788 (-0.878731) | 11.597217 / 8.074308 (3.522909) | 9.285172 / 10.191392 (-0.906220) | 0.130510 / 0.680424 (-0.549914) | 0.013964 / 0.534201 (-0.520237) | 0.299334 / 0.579283 (-0.279949) | 0.267775 / 0.434364 (-0.166589) | 0.336922 / 0.540337 (-0.203416) | 0.430493 / 1.386936 (-0.956443) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005701 / 0.011353 (-0.005652) | 0.003941 / 0.011008 (-0.007067) | 0.050204 / 0.038508 (0.011696) | 0.032275 / 0.023109 (0.009166) | 0.271076 / 0.275898 (-0.004822) | 0.295565 / 0.323480 (-0.027914) | 0.004393 / 0.007986 (-0.003592) | 0.002881 / 0.004328 (-0.001447) | 0.048032 / 0.004250 (0.043782) | 0.040430 / 0.037052 (0.003378) | 0.281631 / 0.258489 (0.023142) | 0.317964 / 0.293841 (0.024124) | 0.032318 / 0.128546 (-0.096228) | 0.012348 / 0.075646 (-0.063298) | 0.060336 / 0.419271 (-0.358936) | 0.034148 / 0.043533 (-0.009385) | 0.273803 / 0.255139 (0.018664) | 0.292068 / 0.283200 (0.008868) | 0.018693 / 0.141683 (-0.122990) | 1.155704 / 1.452155 (-0.296451) | 1.192245 / 1.492716 (-0.300472) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.097588 / 0.018006 (0.079582) | 0.311760 / 0.000490 (0.311270) | 0.000232 / 0.000200 (0.000032) | 0.000055 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022825 / 0.037411 (-0.014586) | 0.077698 / 0.014526 (0.063172) | 0.088567 / 0.176557 (-0.087989) | 0.129689 / 0.737135 (-0.607446) | 0.090626 / 0.296338 (-0.205712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299791 / 0.215209 (0.084582) | 2.978558 / 2.077655 (0.900903) | 1.594095 / 1.504120 (0.089975) | 1.468476 / 1.541195 (-0.072719) | 1.482880 / 1.468490 (0.014390) | 0.717553 / 4.584777 (-3.867224) | 0.977501 / 3.745712 (-2.768211) | 2.954289 / 5.269862 (-2.315572) | 1.895473 / 4.565676 (-2.670203) | 0.078452 / 0.424275 (-0.345824) | 0.005508 / 0.007607 (-0.002099) | 0.350882 / 0.226044 (0.124837) | 3.480878 / 2.268929 (1.211949) | 1.965240 / 55.444624 (-53.479385) | 1.672448 / 6.876477 (-5.204029) | 1.674319 / 2.142072 (-0.467753) | 0.789049 / 4.805227 (-4.016178) | 0.132715 / 6.500664 (-6.367949) | 0.041081 / 0.075469 (-0.034388) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.022953 / 1.841788 (-0.818834) | 12.123349 / 8.074308 (4.049041) | 10.336115 / 10.191392 (0.144723) | 0.142233 / 0.680424 (-0.538191) | 0.015416 / 0.534201 (-0.518785) | 0.303088 / 0.579283 (-0.276195) | 0.124942 / 0.434364 (-0.309422) | 0.338454 / 0.540337 (-0.201883) | 0.460039 / 1.386936 (-0.926897) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#3813ce846e52824b38e53895810682f0a496a2e3 \"CML watermark\")\n"
] | "2024-08-22T14:45:03Z" | "2024-08-22T14:54:56Z" | "2024-08-22T14:49:06Z" | MEMBER | null | fix https://github.com/huggingface/datasets/issues/7085 as noted by @VeryLazyBoy and reported by @AjayP13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7121/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7121",
"merged_at": "2024-08-22T14:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7121"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7120/comments | https://api.github.com/repos/huggingface/datasets/issues/7120/events | https://github.com/huggingface/datasets/pull/7120 | 2,480,674,237 | PR_kwDODunzps55HrBy | 7,120 | don't mention the script if trust_remote_code=False | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7120). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Note that in this case, we could even expect this kind of message:\r\n\r\n```\r\nDataFilesNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv'\r\n```\r\n\r\nWe generally return `DataFilesNotFoundError` for this case (data files passed as an argument), not sure why it does not occur with this dataset.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005484 / 0.011353 (-0.005869) | 0.003932 / 0.011008 (-0.007077) | 0.063177 / 0.038508 (0.024669) | 0.031311 / 0.023109 (0.008202) | 0.254881 / 0.275898 (-0.021017) | 0.273818 / 0.323480 (-0.049662) | 0.003312 / 0.007986 (-0.004674) | 0.003251 / 0.004328 (-0.001078) | 0.049307 / 0.004250 (0.045057) | 0.046189 / 0.037052 (0.009137) | 0.268182 / 0.258489 (0.009693) | 0.303659 / 0.293841 (0.009818) | 0.029312 / 0.128546 (-0.099234) | 0.013649 / 0.075646 (-0.061997) | 0.204240 / 0.419271 (-0.215032) | 0.036607 / 0.043533 (-0.006926) | 0.252232 / 0.255139 (-0.002907) | 0.271960 / 0.283200 (-0.011239) | 0.018043 / 0.141683 (-0.123640) | 1.148601 / 1.452155 (-0.303553) | 1.212313 / 1.492716 (-0.280403) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096354 / 0.018006 (0.078348) | 0.302575 / 0.000490 (0.302085) | 0.000246 / 0.000200 (0.000046) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019023 / 0.037411 (-0.018389) | 0.064821 / 0.014526 (0.050295) | 0.077046 / 0.176557 (-0.099510) | 0.122896 / 0.737135 (-0.614239) | 0.078300 / 0.296338 (-0.218038) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283681 / 0.215209 (0.068472) | 2.801473 / 2.077655 (0.723818) | 1.505611 / 1.504120 (0.001491) | 1.385832 / 1.541195 (-0.155363) | 1.430284 / 
1.468490 (-0.038206) | 0.752041 / 4.584777 (-3.832736) | 2.406138 / 3.745712 (-1.339574) | 2.941370 / 5.269862 (-2.328492) | 1.887681 / 4.565676 (-2.677996) | 0.078693 / 0.424275 (-0.345582) | 0.005266 / 0.007607 (-0.002341) | 0.336484 / 0.226044 (0.110440) | 3.372262 / 2.268929 (1.103334) | 1.861541 / 55.444624 (-53.583084) | 1.572782 / 6.876477 (-5.303694) | 1.592387 / 2.142072 (-0.549685) | 0.796557 / 4.805227 (-4.008670) | 0.134923 / 6.500664 (-6.365741) | 0.043007 / 0.075469 (-0.032462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982690 / 1.841788 (-0.859097) | 11.700213 / 8.074308 (3.625905) | 9.122642 / 10.191392 (-1.068750) | 0.141430 / 0.680424 (-0.538994) | 0.014971 / 0.534201 (-0.519230) | 0.300938 / 0.579283 (-0.278345) | 0.268315 / 0.434364 (-0.166049) | 0.339891 / 0.540337 (-0.200447) | 0.428302 / 1.386936 (-0.958634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005732 / 0.011353 (-0.005621) | 0.003905 / 0.011008 (-0.007103) | 0.049900 / 0.038508 (0.011392) | 0.032255 / 0.023109 (0.009145) | 0.267929 / 0.275898 (-0.007969) | 0.295595 / 0.323480 (-0.027885) | 0.004437 / 0.007986 (-0.003549) | 0.003008 / 0.004328 (-0.001321) | 0.048357 / 0.004250 (0.044107) | 0.040118 / 0.037052 (0.003066) | 0.282859 / 0.258489 (0.024370) | 0.319243 / 0.293841 (0.025402) | 0.032793 / 0.128546 (-0.095754) | 0.012091 / 0.075646 (-0.063555) | 0.060082 / 0.419271 (-0.359189) | 0.034426 / 0.043533 (-0.009107) | 0.273668 / 0.255139 (0.018529) | 0.292110 / 0.283200 (0.008910) | 0.019002 / 0.141683 (-0.122680) | 1.165850 / 1.452155 (-0.286304) | 1.209195 / 1.492716 (-0.283521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099267 / 0.018006 (0.081261) | 0.316746 / 0.000490 (0.316256) | 0.000267 / 0.000200 (0.000067) | 0.000053 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023117 / 0.037411 (-0.014294) | 0.076691 / 0.014526 (0.062165) | 0.092190 / 0.176557 (-0.084367) | 0.130620 / 0.737135 (-0.606515) | 0.091068 / 0.296338 (-0.205271) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296419 / 0.215209 (0.081210) | 2.933964 / 2.077655 (0.856309) | 1.595015 / 1.504120 (0.090895) | 1.467610 / 1.541195 (-0.073585) | 1.487386 / 1.468490 (0.018896) | 0.730927 / 4.584777 (-3.853850) | 0.971276 / 3.745712 (-2.774436) | 2.969735 / 5.269862 (-2.300127) | 1.916126 / 4.565676 (-2.649550) | 0.078863 / 0.424275 (-0.345412) | 0.005506 / 0.007607 (-0.002101) | 0.345191 / 0.226044 (0.119147) | 3.407481 / 2.268929 (1.138553) | 1.955966 / 55.444624 (-53.488659) | 1.677365 / 6.876477 (-5.199112) | 1.716052 / 2.142072 (-0.426020) | 0.797208 / 4.805227 (-4.008020) | 0.132853 / 6.500664 (-6.367811) | 0.041691 / 0.075469 (-0.033778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.042331 / 1.841788 (-0.799456) | 12.186080 / 8.074308 (4.111772) | 10.288961 / 10.191392 (0.097569) | 0.141897 / 0.680424 (-0.538526) | 0.015321 / 0.534201 (-0.518880) | 0.308302 / 0.579283 (-0.270981) | 0.123292 / 0.434364 (-0.311072) | 0.348515 / 0.540337 (-0.191823) | 0.473045 / 1.386936 (-0.913891) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#cedffa52879ebc5e4df43f0bcf8660ee7229f0dc \"CML watermark\")\n"
] | "2024-08-22T12:32:32Z" | "2024-08-22T14:39:52Z" | "2024-08-22T14:33:52Z" | COLLABORATOR | null | See https://huggingface.co/datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes for example. The error is:
```
FileNotFoundError: Couldn't find a dataset script at /src/services/worker/Omega02gdfdd/bioclip-demo-zero-shot-mistakes/bioclip-demo-zero-shot-mistakes.py or any data file in the same directory. Couldn't find 'Omega02gdfdd/bioclip-demo-zero-shot-mistakes' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/Omega02gdfdd/bioclip-demo-zero-shot-mistakes@12b0313ba4c3189ee5a24cb76200959e9bf7492e/data.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
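A minimal sketch of how this surfaces from the user side (assuming the default `trust_remote_code=False`):

```python
from datasets import load_dataset

# The README's `configs` metadata points at data.csv, which does not exist in the repo.
ds = load_dataset("Omega02gdfdd/bioclip-demo-zero-shot-mistakes")
# Before this PR: the error above, which also mentions a missing dataset script.
# After this PR: the script is no longer mentioned when trust_remote_code=False.
```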
The issue there is that a `configs` parameter is set in the README, while the mentioned data file (`data.csv`) does not exist. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7120/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7120.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7120",
"merged_at": "2024-08-22T14:33:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7120.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7120"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7119/comments | https://api.github.com/repos/huggingface/datasets/issues/7119/events | https://github.com/huggingface/datasets/pull/7119 | 2,477,766,493 | PR_kwDODunzps54-GjY | 7,119 | Install transformers with numpy-2 CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7119). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005156 / 0.011353 (-0.006197) | 0.003365 / 0.011008 (-0.007643) | 0.063451 / 0.038508 (0.024943) | 0.029510 / 0.023109 (0.006401) | 0.244825 / 0.275898 (-0.031074) | 0.265157 / 0.323480 (-0.058323) | 0.004239 / 0.007986 (-0.003747) | 0.002732 / 0.004328 (-0.001596) | 0.050412 / 0.004250 (0.046162) | 0.043608 / 0.037052 (0.006556) | 0.256635 / 0.258489 (-0.001854) | 0.277472 / 0.293841 (-0.016369) | 0.029329 / 0.128546 (-0.099217) | 0.012318 / 0.075646 (-0.063329) | 0.204751 / 0.419271 (-0.214520) | 0.036468 / 0.043533 (-0.007065) | 0.246773 / 0.255139 (-0.008366) | 0.263932 / 0.283200 (-0.019268) | 0.017053 / 0.141683 (-0.124629) | 1.173249 / 1.452155 (-0.278905) | 1.234186 / 1.492716 (-0.258531) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.092398 / 0.018006 (0.074391) | 0.309473 / 0.000490 (0.308983) | 0.000220 / 0.000200 (0.000020) | 0.000050 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018553 / 0.037411 (-0.018858) | 0.062546 / 0.014526 (0.048020) | 0.073943 / 0.176557 (-0.102613) | 0.120498 / 0.737135 (-0.616638) | 0.075185 / 0.296338 (-0.221153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296899 / 0.215209 (0.081690) | 2.919088 / 2.077655 (0.841433) | 1.533146 / 1.504120 (0.029026) | 1.395441 / 1.541195 (-0.145754) | 1.399089 / 
1.468490 (-0.069401) | 0.742750 / 4.584777 (-3.842027) | 2.390317 / 3.745712 (-1.355395) | 2.883166 / 5.269862 (-2.386695) | 1.854003 / 4.565676 (-2.711674) | 0.077140 / 0.424275 (-0.347136) | 0.005176 / 0.007607 (-0.002432) | 0.349391 / 0.226044 (0.123347) | 3.466043 / 2.268929 (1.197114) | 1.870619 / 55.444624 (-53.574005) | 1.559173 / 6.876477 (-5.317303) | 1.605480 / 2.142072 (-0.536592) | 0.786753 / 4.805227 (-4.018474) | 0.134869 / 6.500664 (-6.365795) | 0.042176 / 0.075469 (-0.033293) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.954256 / 1.841788 (-0.887532) | 11.194758 / 8.074308 (3.120449) | 9.129670 / 10.191392 (-1.061722) | 0.138318 / 0.680424 (-0.542106) | 0.014299 / 0.534201 (-0.519902) | 0.303704 / 0.579283 (-0.275579) | 0.262513 / 0.434364 (-0.171851) | 0.346539 / 0.540337 (-0.193798) | 0.429524 / 1.386936 (-0.957412) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005692 / 0.011353 (-0.005661) | 0.003423 / 0.011008 (-0.007586) | 0.050618 / 0.038508 (0.012110) | 0.031053 / 0.023109 (0.007944) | 0.275901 / 0.275898 (0.000003) | 0.294404 / 0.323480 (-0.029076) | 0.004303 / 0.007986 (-0.003682) | 0.002728 / 0.004328 (-0.001600) | 0.049757 / 0.004250 (0.045507) | 0.039997 / 0.037052 (0.002945) | 0.287291 / 0.258489 (0.028802) | 0.319186 / 0.293841 (0.025345) | 0.032558 / 0.128546 (-0.095988) | 0.012088 / 0.075646 (-0.063558) | 0.060746 / 0.419271 (-0.358525) | 0.034046 / 0.043533 (-0.009486) | 0.276170 / 0.255139 (0.021031) | 0.293673 / 0.283200 (0.010474) | 0.018018 / 0.141683 (-0.123665) | 1.158453 / 1.452155 (-0.293701) | 1.198599 / 1.492716 (-0.294118) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093134 / 0.018006 (0.075127) | 0.304511 / 0.000490 (0.304021) | 0.000216 / 0.000200 (0.000016) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022991 / 0.037411 (-0.014421) | 0.077548 / 0.014526 (0.063022) | 0.087887 / 0.176557 (-0.088670) | 0.131786 / 0.737135 (-0.605349) | 0.088747 / 0.296338 (-0.207591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302811 / 0.215209 (0.087602) | 2.959276 / 2.077655 (0.881621) | 1.591348 / 1.504120 (0.087229) | 1.464731 / 1.541195 (-0.076464) | 1.474112 / 1.468490 (0.005622) | 0.741573 / 4.584777 (-3.843204) | 0.959229 / 3.745712 (-2.786483) | 2.895750 / 5.269862 (-2.374111) | 1.896051 / 4.565676 (-2.669625) | 0.079012 / 0.424275 (-0.345264) | 0.005494 / 0.007607 (-0.002113) | 0.355699 / 0.226044 (0.129655) | 3.524833 / 2.268929 (1.255905) | 1.972358 / 55.444624 (-53.472266) | 1.667249 / 6.876477 (-5.209228) | 1.658635 / 2.142072 (-0.483438) | 0.813184 / 4.805227 (-3.992044) | 0.134226 / 6.500664 (-6.366438) | 0.041087 / 0.075469 (-0.034382) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.038963 / 1.841788 (-0.802824) | 11.785835 / 8.074308 (3.711526) | 10.397027 / 10.191392 (0.205635) | 0.141748 / 0.680424 (-0.538676) | 0.014738 / 0.534201 (-0.519463) | 0.300056 / 0.579283 (-0.279227) | 0.127442 / 0.434364 (-0.306922) | 0.345013 / 0.540337 (-0.195324) | 0.449598 / 1.386936 (-0.937338) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#70bac27ef861b2b11f581a291a6b76adeee24f98 \"CML watermark\")\n"
] | "2024-08-21T11:14:59Z" | "2024-08-21T11:42:35Z" | "2024-08-21T11:36:50Z" | MEMBER | null | Install transformers with numpy-2 CI.
Note that transformers no longer pins numpy < 2 since transformers-4.43.0:
- https://github.com/huggingface/transformers/pull/32018
- https://github.com/huggingface/transformers/releases/tag/v4.43.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7119/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7119",
"merged_at": "2024-08-21T11:36:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7119"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7118/comments | https://api.github.com/repos/huggingface/datasets/issues/7118/events | https://github.com/huggingface/datasets/pull/7118 | 2,477,676,893 | PR_kwDODunzps549yu4 | 7,118 | Allow numpy-2.1 and test it without audio extra | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7118). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005674 / 0.011353 (-0.005679) | 0.003919 / 0.011008 (-0.007089) | 0.062665 / 0.038508 (0.024157) | 0.031750 / 0.023109 (0.008641) | 0.234809 / 0.275898 (-0.041089) | 0.264454 / 0.323480 (-0.059026) | 0.004265 / 0.007986 (-0.003720) | 0.002757 / 0.004328 (-0.001572) | 0.048921 / 0.004250 (0.044671) | 0.050765 / 0.037052 (0.013713) | 0.246185 / 0.258489 (-0.012305) | 0.287011 / 0.293841 (-0.006829) | 0.030754 / 0.128546 (-0.097792) | 0.012368 / 0.075646 (-0.063278) | 0.203841 / 0.419271 (-0.215431) | 0.037579 / 0.043533 (-0.005953) | 0.238165 / 0.255139 (-0.016974) | 0.264375 / 0.283200 (-0.018824) | 0.018663 / 0.141683 (-0.123020) | 1.143897 / 1.452155 (-0.308258) | 1.218130 / 1.492716 (-0.274586) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102112 / 0.018006 (0.084106) | 0.303214 / 0.000490 (0.302724) | 0.000232 / 0.000200 (0.000032) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019401 / 0.037411 (-0.018010) | 0.062444 / 0.014526 (0.047919) | 0.076497 / 0.176557 (-0.100060) | 0.122309 / 0.737135 (-0.614826) | 0.077178 / 0.296338 (-0.219160) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.282931 / 0.215209 (0.067722) | 2.783587 / 2.077655 (0.705932) | 1.464076 / 1.504120 (-0.040044) | 1.333912 / 1.541195 (-0.207282) | 1.367391 / 
1.468490 (-0.101099) | 0.736702 / 4.584777 (-3.848075) | 2.413625 / 3.745712 (-1.332087) | 2.949549 / 5.269862 (-2.320313) | 1.910308 / 4.565676 (-2.655369) | 0.077419 / 0.424275 (-0.346856) | 0.005159 / 0.007607 (-0.002448) | 0.345595 / 0.226044 (0.119551) | 3.433205 / 2.268929 (1.164277) | 1.844443 / 55.444624 (-53.600181) | 1.527475 / 6.876477 (-5.349002) | 1.544315 / 2.142072 (-0.597758) | 0.803942 / 4.805227 (-4.001285) | 0.134131 / 6.500664 (-6.366533) | 0.042638 / 0.075469 (-0.032831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975158 / 1.841788 (-0.866629) | 11.726187 / 8.074308 (3.651879) | 9.403347 / 10.191392 (-0.788045) | 0.131583 / 0.680424 (-0.548840) | 0.014358 / 0.534201 (-0.519843) | 0.301360 / 0.579283 (-0.277923) | 0.266529 / 0.434364 (-0.167835) | 0.341669 / 0.540337 (-0.198668) | 0.425751 / 1.386936 (-0.961186) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005911 / 0.011353 (-0.005442) | 0.004093 / 0.011008 (-0.006915) | 0.049936 / 0.038508 (0.011428) | 0.031828 / 0.023109 (0.008719) | 0.273874 / 0.275898 (-0.002025) | 0.296871 / 0.323480 (-0.026609) | 0.004470 / 0.007986 (-0.003516) | 0.002902 / 0.004328 (-0.001426) | 0.048848 / 0.004250 (0.044597) | 0.042320 / 0.037052 (0.005268) | 0.287957 / 0.258489 (0.029468) | 0.321033 / 0.293841 (0.027192) | 0.032996 / 0.128546 (-0.095550) | 0.012244 / 0.075646 (-0.063403) | 0.060493 / 0.419271 (-0.358779) | 0.034630 / 0.043533 (-0.008902) | 0.277254 / 0.255139 (0.022115) | 0.292822 / 0.283200 (0.009623) | 0.017966 / 0.141683 (-0.123717) | 1.167432 / 1.452155 (-0.284723) | 1.231837 / 1.492716 (-0.260880) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099970 / 0.018006 (0.081964) | 0.313240 / 0.000490 (0.312750) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022928 / 0.037411 (-0.014483) | 0.077058 / 0.014526 (0.062532) | 0.090147 / 0.176557 (-0.086409) | 0.129416 / 0.737135 (-0.607720) | 0.091021 / 0.296338 (-0.205318) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.300697 / 0.215209 (0.085488) | 2.944649 / 2.077655 (0.866995) | 1.609106 / 1.504120 (0.104986) | 1.483762 / 1.541195 (-0.057433) | 1.519433 / 1.468490 (0.050943) | 0.714129 / 4.584777 (-3.870648) | 0.991848 / 3.745712 (-2.753864) | 2.966340 / 5.269862 (-2.303521) | 1.905427 / 4.565676 (-2.660249) | 0.079041 / 0.424275 (-0.345234) | 0.005671 / 0.007607 (-0.001936) | 0.356037 / 0.226044 (0.129993) | 3.504599 / 2.268929 (1.235670) | 1.979207 / 55.444624 (-53.465417) | 1.695030 / 6.876477 (-5.181447) | 1.703978 / 2.142072 (-0.438095) | 0.800871 / 4.805227 (-4.004357) | 0.134414 / 6.500664 (-6.366250) | 0.041743 / 0.075469 (-0.033726) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.029879 / 1.841788 (-0.811909) | 12.132252 / 8.074308 (4.057944) | 10.596576 / 10.191392 (0.405184) | 0.132237 / 0.680424 (-0.548187) | 0.016239 / 0.534201 (-0.517962) | 0.301831 / 0.579283 (-0.277452) | 0.127966 / 0.434364 (-0.306398) | 0.341081 / 0.540337 (-0.199256) | 0.448996 / 1.386936 (-0.937940) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#0a0fa48a68c3502edfa50273b881f909e4e6e70c \"CML watermark\")\n"
] | "2024-08-21T10:29:35Z" | "2024-08-21T11:05:03Z" | "2024-08-21T10:58:15Z" | MEMBER | null | Allow numpy-2.1 and test it without audio extra.
This PR reverts:
- #7114
Note that audio extra tests can be included again with numpy-2.1 once the next numba version (0.61.0) is released. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7118/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7118.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7118",
"merged_at": "2024-08-21T10:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7118.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7118"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7117/comments | https://api.github.com/repos/huggingface/datasets/issues/7117/events | https://github.com/huggingface/datasets/issues/7117 | 2,476,555,659 | I_kwDODunzps6TnT2L | 7,117 | Audio dataset loads everything in RAM and is very slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/64205064?v=4",
"events_url": "https://api.github.com/users/Jourdelune/events{/privacy}",
"followers_url": "https://api.github.com/users/Jourdelune/followers",
"following_url": "https://api.github.com/users/Jourdelune/following{/other_user}",
"gists_url": "https://api.github.com/users/Jourdelune/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Jourdelune",
"id": 64205064,
"login": "Jourdelune",
"node_id": "MDQ6VXNlcjY0MjA1MDY0",
"organizations_url": "https://api.github.com/users/Jourdelune/orgs",
"received_events_url": "https://api.github.com/users/Jourdelune/received_events",
"repos_url": "https://api.github.com/users/Jourdelune/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Jourdelune/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jourdelune/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Jourdelune",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi ! I think the issue comes from the fact that you return `row` entirely, and therefore the dataset has to re-encode the audio data in `row`.\r\n\r\nCan you try this instead ?\r\n\r\n```python\r\n# map the dataset\r\ndef transcribe_audio(row):\r\n audio = row[\"audio\"] # get the audio but do nothing with it\r\n return {\"transcribed\": True}\r\n```\r\n\r\nPS: no need to iter on the dataset to trigger the `map` function on a `Dataset` - `map` runs directly when it's called (contrary to `IterableDataset` taht you can get when streaming, which are lazy)",
"No, that doesn't change anything, I manage to solve this problem by setting with_indices=True in the map function and directly retrieving the audio corresponding to the index.\r\n```py\r\nfrom datasets import load_dataset\r\nimport time\r\n\r\nds = load_dataset(\"WaveGenAI/audios2\", split=\"train[:50]\")\r\n\r\n\r\n# map the dataset\r\ndef transcribe_audio(row, idx):\r\n audio = ds[idx][\"audio\"] # get the audio but do nothing with it\r\n row[\"transcribed\"] = True\r\n return row\r\n\r\n\r\ntime1 = time.time()\r\nds = ds.map(\r\n transcribe_audio, with_indices=True\r\n) # set low writer_batch_size to avoid memory issues\r\n\r\nfor row in ds:\r\n pass # do nothing, just iterate to trigger the map function\r\n\r\nprint(f\"Time taken: {time.time() - time1:.2f} seconds\")\r\n```",
"Hmm maybe accessing `row[\"audio\"]` makes `map()` reencode what's inside `row[\"audio\"]` in case there are in-place modifications"
] | "2024-08-20T21:18:12Z" | "2024-08-26T13:11:55Z" | null | NONE | null | Hello, I'm working with an audio dataset. I want to transcribe the audio that the dataset contains, and for that I use Whisper. My issue is that the dataset loads everything into RAM when I map it; obviously, when RAM usage is too high, the program crashes.
To fix this issue, I'm using `writer_batch_size`, which I set to 10, but in that case the mapping of the dataset is extremely slow.
To illustrate this on 50 examples: with `writer_batch_size` set to 10, it takes 123.24 seconds to process the dataset; without it, processing takes about ten seconds, but then the process remains blocked (I assume it is writing the dataset and therefore suffers from the same problem as with `writer_batch_size`).
### Steps to reproduce the bug
High RAM usage but fast (though actually slow when saving the dataset):
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio
)
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
Low RAM usage but very slow:
```py
from datasets import load_dataset
import time
ds = load_dataset("WaveGenAI/audios2", split="train[:50]")
# map the dataset
def transcribe_audio(row):
audio = row["audio"] # get the audio but do nothing with it
row["transcribed"] = True
return row
time1 = time.time()
ds = ds.map(
transcribe_audio, writer_batch_size=10
) # set low writer_batch_size to avoid memory issues
for row in ds:
pass # do nothing, just iterate to trigger the map function
print(f"Time taken: {time.time() - time1:.2f} seconds")
```
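For reference, a minimal sketch of the fix suggested in the first comment above — returning only the new column instead of the whole `row`, so that `map` does not have to re-encode the audio. Whether this fully avoids the slowdown is an assumption based on that comment:
```py
from datasets import load_dataset

ds = load_dataset("WaveGenAI/audios2", split="train[:50]")

def transcribe_audio(row):
    _ = row["audio"]  # read the audio, but do not return it
    return {"transcribed": True}  # only the new column is written back

ds = ds.map(transcribe_audio)  # map runs eagerly on a non-streaming Dataset
```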
### Expected behavior
I think the processing should be much faster: on only 50 audio examples, the mapping takes several minutes even though nothing is done beyond loading the audio.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.10.5-arch1-1-x86_64-with-glibc2.40
- Python version: 3.10.4
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2024.6.1
# Extra
The dataset was generated with the `audiofolder` loader, so I don't think anything specific in my code is causing this problem.
```py
import argparse
from datasets import load_dataset
parser = argparse.ArgumentParser()
parser.add_argument("--folder", help="folder path", default="/media/works/test/")
args = parser.parse_args()
dataset = load_dataset("audiofolder", data_dir=args.folder)
# push the dataset to hub
dataset.push_to_hub("WaveGenAI/audios")
```
Also, it is the combination of `audio = row["audio"]` and `row["transcribed"] = True` that causes problems: `row["transcribed"] = True` alone does nothing, and `audio = row["audio"]` alone sometimes causes problems and sometimes does not. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7117/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7116/comments | https://api.github.com/repos/huggingface/datasets/issues/7116/events | https://github.com/huggingface/datasets/issues/7116 | 2,475,522,721 | I_kwDODunzps6TjXqh | 7,116 | datasets cannot handle nested json if features is given. | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n\r\n```python\r\nds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n 'ref1': datasets.Value('string'),\r\n 'ref2': datasets.Value('string'),\r\n 'cuts': [{\r\n \"cut1\": datasets.Value(\"uint16\"),\r\n \"cut2\": datasets.Value(\"uint16\")\r\n }]\r\n}))\r\n```",
"> Hi ! `Sequence` has a weird behavior for dictionaries (from tensorflow-datasets), use a regular list instead:\r\n> \r\n> ```python\r\n> ds = datasets.load_dataset('json', data_files=\"./temp.json\", features=datasets.Features({\r\n> 'ref1': datasets.Value('string'),\r\n> 'ref2': datasets.Value('string'),\r\n> 'cuts': [{\r\n> \"cut1\": datasets.Value(\"uint16\"),\r\n> \"cut2\": datasets.Value(\"uint16\")\r\n> }]\r\n> }))\r\n> ```\r\nThank you!\r\n",
"It works."
] | "2024-08-20T12:27:49Z" | "2024-09-03T10:18:23Z" | "2024-09-03T10:18:07Z" | NONE | null | ### Describe the bug
I have a JSON file named `temp.json`.
```json
{"ref1": "ABC", "ref2": "DEF", "cuts":[{"cut1": 3, "cut2": 5}]}
```
I want to load it.
```python
ds = datasets.load_dataset('json', data_files="./temp.json", features=datasets.Features({
'ref1': datasets.Value('string'),
'ref2': datasets.Value('string'),
'cuts': datasets.Sequence({
"cut1": datasets.Value("uint16"),
"cut2": datasets.Value("uint16")
})
}))
```
The above code does not work. However, I can load the file without specifying `features`.
```python
ds = datasets.load_dataset('json', data_files="./temp.json")
```
Is it possible to load integers as uint16 to save some memory?
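For reference, a minimal sketch of the workaround from the comments above — a plain Python list instead of `Sequence` for a list of dicts — plus a check of the resulting feature types (using the same `temp.json` as above):
```python
import datasets

ds = datasets.load_dataset("json", data_files="./temp.json", features=datasets.Features({
    "ref1": datasets.Value("string"),
    "ref2": datasets.Value("string"),
    "cuts": [{
        "cut1": datasets.Value("uint16"),
        "cut2": datasets.Value("uint16"),
    }],
}))
print(ds["train"].features)  # cut1/cut2 should be stored as uint16
```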
### Steps to reproduce the bug
As in the bug description.
### Expected behavior
The data are loaded and integers are uint16.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-118-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38550511?v=4",
"events_url": "https://api.github.com/users/ljw20180420/events{/privacy}",
"followers_url": "https://api.github.com/users/ljw20180420/followers",
"following_url": "https://api.github.com/users/ljw20180420/following{/other_user}",
"gists_url": "https://api.github.com/users/ljw20180420/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ljw20180420",
"id": 38550511,
"login": "ljw20180420",
"node_id": "MDQ6VXNlcjM4NTUwNTEx",
"organizations_url": "https://api.github.com/users/ljw20180420/orgs",
"received_events_url": "https://api.github.com/users/ljw20180420/received_events",
"repos_url": "https://api.github.com/users/ljw20180420/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ljw20180420/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ljw20180420/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ljw20180420",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7116/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7115/comments | https://api.github.com/repos/huggingface/datasets/issues/7115/events | https://github.com/huggingface/datasets/issues/7115 | 2,475,363,142 | I_kwDODunzps6TiwtG | 7,115 | module 'pyarrow.lib' has no attribute 'ListViewType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/175128880?v=4",
"events_url": "https://api.github.com/users/neurafusionai/events{/privacy}",
"followers_url": "https://api.github.com/users/neurafusionai/followers",
"following_url": "https://api.github.com/users/neurafusionai/following{/other_user}",
"gists_url": "https://api.github.com/users/neurafusionai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neurafusionai",
"id": 175128880,
"login": "neurafusionai",
"node_id": "U_kgDOCnBBMA",
"organizations_url": "https://api.github.com/users/neurafusionai/orgs",
"received_events_url": "https://api.github.com/users/neurafusionai/received_events",
"repos_url": "https://api.github.com/users/neurafusionai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neurafusionai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neurafusionai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neurafusionai",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"https://github.com/neurafusionai/Hugging_Face/blob/main/meta_opt_350m_customer_support_lora_v1.ipynb\r\n\r\ncouldnt train because of GPU\r\nI didnt pip install datasets -U\r\nbut looks like restarting worked"
] | "2024-08-20T11:05:44Z" | "2024-09-10T06:51:08Z" | "2024-09-10T06:51:08Z" | NONE | null | ### Describe the bug
Code:
`!pip uninstall -y pyarrow
!pip install --no-cache-dir pyarrow
!pip uninstall -y pyarrow
!pip install pyarrow --no-cache-dir
!pip install --upgrade datasets transformers pyarrow
!pip install pyarrow.parquet
! pip install pyarrow-core libparquet
!pip install pyarrow --no-cache-dir
!pip install pyarrow
!pip install transformers
!pip install --upgrade datasets
!pip install datasets
! pip install pyarrow
! pip install pyarrow.lib
! pip install pyarrow.parquet
!pip install transformers
import pyarrow as pa
print(pa.__version__)
from datasets import load_dataset
import pyarrow.parquet as pq
import pyarrow.lib as lib
import pandas as pd
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from datasets import load_dataset
from transformers import AutoTokenizer
! pip install pyarrow-core libparquet
# Load the dataset for content moderation
dataset = load_dataset("PolyAI/banking77") # Example dataset for customer support
# Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
# Tokenize the dataset
def tokenize_function(examples):
return tokenizer(examples['text'], padding="max_length", truncation=True)
# Apply tokenization to the entire dataset
tokenized_datasets = dataset.map(tokenize_function, batched=True)
# Check the first few tokenized samples
print(tokenized_datasets['train'][0])
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments
# Load the model
model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-350m", num_labels=77)
# Define training arguments
training_args = TrainingArguments(
output_dir="./results",
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
eval_strategy="epoch", #
save_strategy="epoch",
logging_dir="./logs",
learning_rate=2e-5,
)
# Initialize the Trainer
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["test"],
)
# Train the model
trainer.train()
# Evaluate the model
trainer.evaluate()
`
AttributeError Traceback (most recent call last)
[<ipython-input-23-60bed3143a93>](https://localhost:8080/#) in <cell line: 22>()
20
21
---> 22 from datasets import load_dataset
23 import pyarrow.parquet as pq
24 import pyarrow.lib as lib
5 frames
[/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module>
15 __version__ = "2.21.0"
16
---> 17 from .arrow_dataset import Dataset
18 from .arrow_reader import ReadInstruction
19 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module>
74
75 from . import config
---> 76 from .arrow_reader import ArrowReader
77 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
78 from .data_files import sanitize_patterns
[/usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py](https://localhost:8080/#) in <module>
27
28 import pyarrow as pa
---> 29 import pyarrow.parquet as pq
30 from tqdm.contrib.concurrent import thread_map
31
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/__init__.py](https://localhost:8080/#) in <module>
18 # flake8: noqa
19
---> 20 from .core import *
[/usr/local/lib/python3.10/dist-packages/pyarrow/parquet/core.py](https://localhost:8080/#) in <module>
31
32 try:
---> 33 import pyarrow._parquet as _parquet
34 except ImportError as exc:
35 raise ImportError(
/usr/local/lib/python3.10/dist-packages/pyarrow/_parquet.pyx in init pyarrow._parquet()
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
### Steps to reproduce the bug
https://colab.research.google.com/drive/1HNbsg3tHxUJOHVtYIaRnNGY4T2PnLn4a?usp=sharing
### Expected behavior
It looks like there is a version-compatibility issue between `datasets` and `pyarrow` in the Colab runtime.
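For what it's worth, a sketch of a cleaner recovery in Colab, based on the follow-up comment above that restarting the runtime worked (the exact version pins here are assumptions):
```python
# Run in a fresh Colab cell, then restart the runtime so that the
# previously imported pyarrow module is dropped from memory.
!pip uninstall -y pyarrow datasets
!pip install --no-cache-dir "pyarrow>=15.0.0" "datasets>=2.21.0"
```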
### Environment info
google colab
python
huggingface
Found existing installation: pyarrow 17.0.0
Uninstalling pyarrow-17.0.0:
Successfully uninstalled pyarrow-17.0.0
Collecting pyarrow
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl.metadata (3.3 kB)
Requirement already satisfied: numpy>=1.16.6 in /usr/local/lib/python3.10/dist-packages (from pyarrow) (1.26.4)
Downloading pyarrow-17.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (39.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 39.9/39.9 MB 188.9 MB/s eta 0:00:00
Installing collected packages: pyarrow
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
Successfully installed pyarrow-17.0.0
WARNING: The following packages were previously imported in this runtime:
[pyarrow]
You must restart the runtime in order to use newly installed versions. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7115/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7114/comments | https://api.github.com/repos/huggingface/datasets/issues/7114/events | https://github.com/huggingface/datasets/pull/7114 | 2,475,062,252 | PR_kwDODunzps5404mO | 7,114 | Temporarily pin numpy<2.1 to fix CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7114). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005381 / 0.011353 (-0.005972) | 0.003929 / 0.011008 (-0.007079) | 0.062505 / 0.038508 (0.023997) | 0.031048 / 0.023109 (0.007938) | 0.244794 / 0.275898 (-0.031104) | 0.270997 / 0.323480 (-0.052483) | 0.003186 / 0.007986 (-0.004799) | 0.002750 / 0.004328 (-0.001579) | 0.048289 / 0.004250 (0.044039) | 0.042617 / 0.037052 (0.005565) | 0.262607 / 0.258489 (0.004118) | 0.281778 / 0.293841 (-0.012063) | 0.029426 / 0.128546 (-0.099120) | 0.012466 / 0.075646 (-0.063181) | 0.205221 / 0.419271 (-0.214051) | 0.035535 / 0.043533 (-0.007998) | 0.247866 / 0.255139 (-0.007273) | 0.269121 / 0.283200 (-0.014079) | 0.018557 / 0.141683 (-0.123125) | 1.147982 / 1.452155 (-0.304173) | 1.188998 / 1.492716 (-0.303718) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096550 / 0.018006 (0.078544) | 0.300497 / 0.000490 (0.300007) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019150 / 0.037411 (-0.018261) | 0.063518 / 0.014526 (0.048993) | 0.076643 / 0.176557 (-0.099914) | 0.122958 / 0.737135 (-0.614177) | 0.078511 / 0.296338 (-0.217828) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278163 / 0.215209 (0.062953) | 2.733514 / 2.077655 (0.655859) | 1.434335 / 1.504120 (-0.069785) | 1.318976 / 1.541195 (-0.222219) | 1.352498 / 
1.468490 (-0.115992) | 0.717326 / 4.584777 (-3.867450) | 2.403683 / 3.745712 (-1.342029) | 2.930366 / 5.269862 (-2.339495) | 1.879938 / 4.565676 (-2.685739) | 0.079016 / 0.424275 (-0.345259) | 0.005156 / 0.007607 (-0.002451) | 0.331099 / 0.226044 (0.105055) | 3.305878 / 2.268929 (1.036949) | 1.804185 / 55.444624 (-53.640439) | 1.508785 / 6.876477 (-5.367692) | 1.570102 / 2.142072 (-0.571970) | 0.796348 / 4.805227 (-4.008879) | 0.135737 / 6.500664 (-6.364927) | 0.042902 / 0.075469 (-0.032567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.979923 / 1.841788 (-0.861865) | 11.656257 / 8.074308 (3.581949) | 9.745611 / 10.191392 (-0.445781) | 0.144497 / 0.680424 (-0.535927) | 0.022457 / 0.534201 (-0.511744) | 0.317251 / 0.579283 (-0.262032) | 0.264956 / 0.434364 (-0.169408) | 0.341873 / 0.540337 (-0.198464) | 0.439734 / 1.386936 (-0.947202) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006137 / 0.011353 (-0.005216) | 0.003999 / 0.011008 (-0.007009) | 0.049994 / 0.038508 (0.011486) | 0.032401 / 0.023109 (0.009292) | 0.272210 / 0.275898 (-0.003688) | 0.296038 / 0.323480 (-0.027442) | 0.004429 / 0.007986 (-0.003557) | 0.002894 / 0.004328 (-0.001434) | 0.049296 / 0.004250 (0.045045) | 0.041390 / 0.037052 (0.004337) | 0.288951 / 0.258489 (0.030462) | 0.321733 / 0.293841 (0.027892) | 0.033553 / 0.128546 (-0.094994) | 0.012122 / 0.075646 (-0.063524) | 0.060661 / 0.419271 (-0.358610) | 0.034752 / 0.043533 (-0.008781) | 0.272866 / 0.255139 (0.017727) | 0.292436 / 0.283200 (0.009237) | 0.018822 / 0.141683 (-0.122861) | 1.167758 / 1.452155 (-0.284397) | 1.207977 / 1.492716 (-0.284739) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095862 / 0.018006 (0.077855) | 0.313746 / 0.000490 (0.313256) | 0.000219 / 0.000200 (0.000020) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022940 / 0.037411 (-0.014472) | 0.076833 / 0.014526 (0.062307) | 0.088209 / 0.176557 (-0.088348) | 0.130154 / 0.737135 (-0.606981) | 0.089948 / 0.296338 (-0.206390) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.305393 / 0.215209 (0.090184) | 3.001629 / 2.077655 (0.923975) | 1.629378 / 1.504120 (0.125258) | 1.496022 / 1.541195 (-0.045173) | 1.542937 / 1.468490 (0.074447) | 0.734249 / 4.584777 (-3.850528) | 0.966226 / 3.745712 (-2.779486) | 3.051986 / 5.269862 (-2.217876) | 1.954694 / 4.565676 (-2.610982) | 0.081538 / 0.424275 (-0.342737) | 0.005198 / 0.007607 (-0.002409) | 0.355837 / 0.226044 (0.129793) | 3.537454 / 2.268929 (1.268525) | 2.036157 / 55.444624 (-53.408467) | 1.719255 / 6.876477 (-5.157222) | 1.744899 / 2.142072 (-0.397174) | 0.816034 / 4.805227 (-3.989193) | 0.135650 / 6.500664 (-6.365014) | 0.042206 / 0.075469 (-0.033263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.055518 / 1.841788 (-0.786269) | 12.654622 / 8.074308 (4.580313) | 10.450807 / 10.191392 (0.259415) | 0.153567 / 0.680424 (-0.526857) | 0.016114 / 0.534201 (-0.518087) | 0.301182 / 0.579283 (-0.278101) | 0.130043 / 0.434364 (-0.304321) | 0.341289 / 0.540337 (-0.199048) | 0.434573 / 1.386936 (-0.952363) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#fb8ae4d2c3dda8c770fe48a40195775a7b517b6b \"CML watermark\")\n"
] | "2024-08-20T08:42:57Z" | "2024-08-20T09:09:27Z" | "2024-08-20T09:02:35Z" | MEMBER | null | Temporarily pin numpy<2.1 to fix CI.
Fix #7111. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7114/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7114.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7114",
"merged_at": "2024-08-20T09:02:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7114.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7114"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7113/comments | https://api.github.com/repos/huggingface/datasets/issues/7113/events | https://github.com/huggingface/datasets/issues/7113 | 2,475,029,640 | I_kwDODunzps6ThfSI | 7,113 | Stream dataset does not iterate if the batch size is larger than the dataset size (related to drop_last_batch) | {
"avatar_url": "https://avatars.githubusercontent.com/u/4197249?v=4",
"events_url": "https://api.github.com/users/memray/events{/privacy}",
"followers_url": "https://api.github.com/users/memray/followers",
"following_url": "https://api.github.com/users/memray/following{/other_user}",
"gists_url": "https://api.github.com/users/memray/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/memray",
"id": 4197249,
"login": "memray",
"node_id": "MDQ6VXNlcjQxOTcyNDk=",
"organizations_url": "https://api.github.com/users/memray/orgs",
"received_events_url": "https://api.github.com/users/memray/received_events",
"repos_url": "https://api.github.com/users/memray/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/memray/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/memray/subscriptions",
"type": "User",
"url": "https://api.github.com/users/memray",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"That's expected behavior, it's also the same in `torch`:\r\n\r\n```python\r\n>>> list(DataLoader(list(range(5)), batch_size=10, drop_last=True))\r\n[]\r\n```"
] | "2024-08-20T08:26:40Z" | "2024-08-26T04:24:11Z" | "2024-08-26T04:24:10Z" | NONE | null | ### Describe the bug
Hi there,
I use streaming and interleaving to combine multiple datasets saved in JSONL files. The size of each dataset can vary (from roughly 100 to 100k examples). I use `dataset.map()` with a large batch size to reduce I/O cost. It worked fine with datasets 2.16.1, but the problem appeared after I upgraded to 2.19.2, and it remains with 2.21.0.
Please see the code below to reproduce the problem.
The dataset can iterate correctly if we set either `streaming=False` or `drop_last_batch=False`.
I have to use `drop_last_batch=True` since it's for distributed training.
### Steps to reproduce the bug
```python
# datasets==2.21.0
import datasets
def data_prepare(examples):
print(examples["sentence1"][0])
return examples
batch_size = 101
# the size of the dataset is 100
# the dataset iterates correctly if we set either streaming=False or drop_last_batch=False
dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: data_prepare(x),
drop_last_batch=True,
batched=True, batch_size=batch_size)
for ex in dataset:
print(ex)
pass
```
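Given the reply in the comments above that this matches PyTorch's `drop_last` semantics, one workaround is to cap the batch size at the dataset size — a sketch that assumes the size (100 here) is known up front, since a streaming dataset exposes no `len()`:
```python
import datasets

DATASET_SIZE = 100  # known size of the mteb/biosses-sts test split
batch_size = min(101, DATASET_SIZE)  # keep at least one full batch alive

dataset = datasets.load_dataset("mteb/biosses-sts", split="test", streaming=True)
dataset = dataset.map(lambda x: x, batched=True, batch_size=batch_size,
                      drop_last_batch=True)
for ex in dataset:
    pass  # now yields examples instead of an empty iterator
```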
### Expected behavior
The dataset iterates regardless of the batch size.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.58+-x86_64-with-glibc2.35
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.5
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.2.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7113/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7112/comments | https://api.github.com/repos/huggingface/datasets/issues/7112/events | https://github.com/huggingface/datasets/issues/7112 | 2,475,004,644 | I_kwDODunzps6ThZLk | 7,112 | cudf-cu12 24.4.1, ibis-framework 8.0.0 requires pyarrow<15.0.0a0,>=14.0.1,pyarrow<16,>=2 and datasets 2.21.0 requires pyarrow>=15.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/174590283?v=4",
"events_url": "https://api.github.com/users/SoumyaMB10/events{/privacy}",
"followers_url": "https://api.github.com/users/SoumyaMB10/followers",
"following_url": "https://api.github.com/users/SoumyaMB10/following{/other_user}",
"gists_url": "https://api.github.com/users/SoumyaMB10/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SoumyaMB10",
"id": 174590283,
"login": "SoumyaMB10",
"node_id": "U_kgDOCmgJSw",
"organizations_url": "https://api.github.com/users/SoumyaMB10/orgs",
"received_events_url": "https://api.github.com/users/SoumyaMB10/received_events",
"repos_url": "https://api.github.com/users/SoumyaMB10/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SoumyaMB10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SoumyaMB10/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SoumyaMB10",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@sayakpaul please advice ",
"Hits the same dependency conflict"
] | "2024-08-20T08:13:55Z" | "2024-09-20T15:30:03Z" | null | NONE | null | ### Describe the bug
!pip install accelerate>=0.16.0 torchvision transformers>=4.25.1 datasets>=2.19.1 ftfy tensorboard Jinja2 peft==0.7.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
cudf-cu12 24.4.1 requires pyarrow<15.0.0a0,>=14.0.1, but you have pyarrow 17.0.0 which is incompatible.
ibis-framework 8.0.0 requires pyarrow<16,>=2, but you have pyarrow 17.0.0 which is incompatible.
To solve the above error:
!pip install pyarrow==14.0.1
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.21.0 requires pyarrow>=15.0.0, but you have pyarrow 14.0.1 which is incompatible.
### Steps to reproduce the bug
!pip install datasets>=2.19.1
### Expected behavior
The install should run without dependency errors.
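One possible way out in Colab — since `cudf-cu12` and `ibis-framework` come preinstalled there — is to remove the packages that cap pyarrow, a sketch that assumes the notebook does not actually need cudf or ibis:
```python
# Remove the preinstalled packages that require pyarrow<15/<16, then
# install the versions the training script needs.
!pip uninstall -y cudf-cu12 ibis-framework
!pip install -U "datasets>=2.19.1" "pyarrow>=15.0.0"
```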
### Environment info
Diffusers version: 0.31.0.dev0
Platform: Linux-6.1.85+-x86_64-with-glibc2.35
Running on Google Colab?: Yes
Python version: 3.10.12
PyTorch version (GPU?): 2.3.1+cu121 (True)
Flax version (CPU?/GPU?/TPU?): 0.8.4 (gpu)
Jax version: 0.4.26
JaxLib version: 0.4.26
Huggingface_hub version: 0.23.5
Transformers version: 4.42.4
Accelerate version: 0.32.1
PEFT version: 0.7.0
Bitsandbytes version: not installed
Safetensors version: 0.4.4
xFormers version: not installed
Accelerator: Tesla T4, 15360 MiB
Using GPU in script?:
Using distributed or parallel set-up in script?: | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7112/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7112/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7111/comments | https://api.github.com/repos/huggingface/datasets/issues/7111/events | https://github.com/huggingface/datasets/issues/7111 | 2,474,915,845 | I_kwDODunzps6ThDgF | 7,111 | CI is broken for numpy-2: Failed to fetch wheel: llvmlite==0.34.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Note that the CI before was using:\r\n- llvmlite: 0.43.0\r\n- numba: 0.60.0\r\n\r\nNow it tries to use:\r\n- llvmlite: 0.34.0\r\n- numba: 0.51.2",
"The issue is because numba-0.60.0 pins numpy<2.1 and `uv` tries to install latest numpy-2.1.0 with an old numba-0.51.0 version (and llvmlite-0.34.0). See discussion in their repo:\r\n- https://github.com/numba/numba/issues/9708\r\n\r\nLatest numpy-2.1.0 will be supported by the next numba-0.61.0 release in September.\r\n\r\nNote that our CI requires numba with the \"audio\" extra:\r\n- librosa > numba"
] | "2024-08-20T07:27:28Z" | "2024-08-21T05:05:36Z" | "2024-08-20T09:02:36Z" | MEMBER | null | Ci is broken with error `Failed to fetch wheel: llvmlite==0.34.0`: https://github.com/huggingface/datasets/actions/runs/10466825281/job/28984414269
```
Run uv pip install --system "datasets[tests_numpy2] @ ."
Resolved 150 packages in 4.42s
error: Failed to prepare distributions
Caused by: Failed to fetch wheel: llvmlite==0.34.0
Caused by: Build backend failed to build wheel through `build_wheel()` with exit status: 1
--- stdout:
running bdist_wheel
/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python /home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py
LLVM version...
--- stderr:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 105, in main_posix
out = subprocess.check_output([llvm_config, '--version'])
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 503, in run
with Popen(*popenargs, **kwargs) as process:
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/opt/hostedtoolcache/Python/3.10.14/x64/lib/python3.10/subprocess.py", line 1863, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'llvm-config'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 191, in <module>
main()
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 181, in main
main_posix('linux', '.so')
File "/home/runner/.cache/uv/built-wheels-v3/pypi/llvmlite/0.34.0/wrk1bNwq1gleSiznvrSEZ/llvmlite-0.34.0.tar.gz/ffi/build.py", line 107, in main_posix
raise RuntimeError("%s failed executing, please point LLVM_CONFIG "
RuntimeError: llvm-config failed executing, please point LLVM_CONFIG to the path for llvm-config
error: command '/home/runner/.cache/uv/builds-v0/.tmpcyKh8S/bin/python' failed with exit code 1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7111/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7110/comments | https://api.github.com/repos/huggingface/datasets/issues/7110/events | https://github.com/huggingface/datasets/pull/7110 | 2,474,747,695 | PR_kwDODunzps54zz3r | 7,110 | Fix ConnectionError for gated datasets and unauthenticated users | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7110). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Note that the CI error is unrelated to this PR and should be addressed in another PR. See:\r\n- #7111",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005354 / 0.011353 (-0.005999) | 0.004031 / 0.011008 (-0.006977) | 0.062470 / 0.038508 (0.023962) | 0.030882 / 0.023109 (0.007773) | 0.244816 / 0.275898 (-0.031082) | 0.264324 / 0.323480 (-0.059156) | 0.004164 / 0.007986 (-0.003822) | 0.002858 / 0.004328 (-0.001471) | 0.049008 / 0.004250 (0.044758) | 0.042139 / 0.037052 (0.005086) | 0.279496 / 0.258489 (0.021007) | 0.279408 / 0.293841 (-0.014433) | 0.029701 / 0.128546 (-0.098845) | 0.012501 / 0.075646 (-0.063145) | 0.203267 / 0.419271 (-0.216004) | 0.035964 / 0.043533 (-0.007569) | 0.239361 / 0.255139 (-0.015778) | 0.258942 / 0.283200 (-0.024257) | 0.017956 / 0.141683 (-0.123727) | 1.160468 / 1.452155 (-0.291687) | 1.203475 / 1.492716 (-0.289242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.004639 / 0.018006 (-0.013367) | 0.298020 / 0.000490 (0.297530) | 0.000212 / 0.000200 (0.000012) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019371 / 0.037411 (-0.018040) | 0.063311 / 0.014526 (0.048785) | 0.076412 / 0.176557 (-0.100145) | 0.122574 / 0.737135 (-0.614561) | 0.078076 / 0.296338 (-0.218263) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.275381 / 0.215209 (0.060172) | 2.713220 / 2.077655 (0.635565) | 1.441940 / 1.504120 (-0.062179) | 1.325545 / 1.541195 (-0.215650) | 1.363859 / 
1.468490 (-0.104631) | 0.715147 / 4.584777 (-3.869630) | 2.356482 / 3.745712 (-1.389230) | 2.882792 / 5.269862 (-2.387069) | 1.833399 / 4.565676 (-2.732278) | 0.077872 / 0.424275 (-0.346403) | 0.005172 / 0.007607 (-0.002435) | 0.326361 / 0.226044 (0.100316) | 3.239202 / 2.268929 (0.970273) | 1.837745 / 55.444624 (-53.606879) | 1.517299 / 6.876477 (-5.359178) | 1.552938 / 2.142072 (-0.589134) | 0.801496 / 4.805227 (-4.003731) | 0.133351 / 6.500664 (-6.367314) | 0.042052 / 0.075469 (-0.033418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957887 / 1.841788 (-0.883901) | 11.625291 / 8.074308 (3.550983) | 9.679413 / 10.191392 (-0.511979) | 0.140271 / 0.680424 (-0.540153) | 0.013991 / 0.534201 (-0.520210) | 0.299874 / 0.579283 (-0.279409) | 0.267164 / 0.434364 (-0.167200) | 0.338143 / 0.540337 (-0.202194) | 0.434105 / 1.386936 (-0.952831) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005833 / 0.011353 (-0.005520) | 0.003761 / 0.011008 (-0.007247) | 0.049699 / 0.038508 (0.011191) | 0.032786 / 0.023109 (0.009677) | 0.265100 / 0.275898 (-0.010798) | 0.291045 / 0.323480 (-0.032435) | 0.004281 / 0.007986 (-0.003705) | 0.002737 / 0.004328 (-0.001591) | 0.048524 / 0.004250 (0.044274) | 0.040783 / 0.037052 (0.003731) | 0.281122 / 0.258489 (0.022633) | 0.311349 / 0.293841 (0.017508) | 0.032143 / 0.128546 (-0.096403) | 0.011747 / 0.075646 (-0.063899) | 0.059432 / 0.419271 (-0.359840) | 0.034362 / 0.043533 (-0.009171) | 0.261061 / 0.255139 (0.005922) | 0.279536 / 0.283200 (-0.003663) | 0.019172 / 0.141683 (-0.122510) | 1.160069 / 1.452155 (-0.292086) | 1.224160 / 1.492716 (-0.268556) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093596 / 0.018006 (0.075590) | 0.302862 / 0.000490 (0.302372) | 0.000208 / 0.000200 (0.000008) | 0.000047 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022785 / 0.037411 (-0.014626) | 0.079263 / 0.014526 (0.064737) | 0.091340 / 0.176557 (-0.085216) | 0.129453 / 0.737135 (-0.607682) | 0.091349 / 0.296338 (-0.204989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298166 / 0.215209 (0.082957) | 3.003146 / 2.077655 (0.925491) | 1.575903 / 1.504120 (0.071783) | 1.445231 / 1.541195 (-0.095963) | 1.477116 / 1.468490 (0.008625) | 0.726496 / 4.584777 (-3.858281) | 0.959827 / 3.745712 (-2.785885) | 2.941142 / 5.269862 (-2.328720) | 1.878581 / 4.565676 (-2.687096) | 0.078475 / 0.424275 (-0.345800) | 0.005137 / 0.007607 (-0.002470) | 0.352078 / 0.226044 (0.126034) | 3.486113 / 2.268929 (1.217184) | 1.965024 / 55.444624 (-53.479600) | 1.667223 / 6.876477 (-5.209254) | 1.665254 / 2.142072 (-0.476819) | 0.803543 / 4.805227 (-4.001684) | 0.133003 / 6.500664 (-6.367661) | 0.041462 / 0.075469 (-0.034008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.045534 / 1.841788 (-0.796254) | 12.124988 / 8.074308 (4.050680) | 10.418723 / 10.191392 (0.227331) | 0.142453 / 0.680424 (-0.537971) | 0.015686 / 0.534201 (-0.518515) | 0.300557 / 0.579283 (-0.278726) | 0.119851 / 0.434364 (-0.314512) | 0.342297 / 0.540337 (-0.198040) | 0.441263 / 1.386936 (-0.945673) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#90b1d94ef419cb26f0bb24d982897dca39aa8a46 \"CML watermark\")\n",
"lgtm!"
] | "2024-08-20T05:26:54Z" | "2024-08-20T15:11:35Z" | "2024-08-20T09:14:35Z" | MEMBER | null | Fix `ConnectionError` for gated datasets and unauthenticated users. See:
- https://github.com/huggingface/dataset-viewer/issues/3025
Note that a recent change in the Hub returns dataset info for gated datasets and unauthenticated users, instead of raising a `GatedRepoError` as before. See:
- https://github.com/huggingface/huggingface_hub/issues/2457
This PR adds an additional check (`/auth-check`) for gated datasets and raises `DatasetNotFoundError` for unauthenticated users, as was the case before the change in the Hub.
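For illustration, a minimal sketch of what such a check could look like (the helper name is hypothetical, the `/auth-check` path comes from this description, `build_hf_headers` is from `huggingface_hub.utils`, and `DatasetNotFoundError` lives in `datasets.exceptions`; this is not necessarily the exact implementation in this PR):
```python
from typing import Optional

import requests
from huggingface_hub.utils import build_hf_headers

from datasets.exceptions import DatasetNotFoundError


def check_dataset_auth(repo_id: str, token: Optional[str] = None) -> None:
    # Hypothetical helper: call the Hub's auth-check endpoint for a dataset repo.
    url = f"https://huggingface.co/api/datasets/{repo_id}/auth-check"
    response = requests.get(url, headers=build_hf_headers(token=token))
    if response.status_code in (401, 403):
        # Restore the pre-change behavior: unauthenticated users on a gated
        # dataset get DatasetNotFoundError instead of a raw ConnectionError.
        raise DatasetNotFoundError(f"Dataset '{repo_id}' doesn't exist or is gated.")
    response.raise_for_status()
```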
- Fix suggested by @Pierrci (thanks @Wauplin for pointing it out).
Fix #7109. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7110/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7110.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7110",
"merged_at": "2024-08-20T09:14:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7110.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7110"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7109/comments | https://api.github.com/repos/huggingface/datasets/issues/7109/events | https://github.com/huggingface/datasets/issues/7109 | 2,473,367,848 | I_kwDODunzps6TbJko | 7,109 | ConnectionError for gated datasets and unauthenticated users | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [] | "2024-08-19T13:27:45Z" | "2024-08-20T09:14:36Z" | "2024-08-20T09:14:35Z" | MEMBER | null | Since the Hub returns dataset info for gated datasets and unauthenticated users, there is dead code: https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/load.py#L1846-L1852
We should remove the dead code and properly handle this case: currently we raise a `ConnectionError` instead of the `DatasetNotFoundError` that was raised before.
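For illustration, a minimal repro sketch (the dataset name is a placeholder; run it without a token against any gated repo):
```python
from datasets import load_dataset
from datasets.exceptions import DatasetNotFoundError

try:
    # Placeholder gated dataset; reproduce while unauthenticated.
    load_dataset("some-org/some-gated-dataset")
except DatasetNotFoundError:
    print("expected: DatasetNotFoundError, as before the Hub change")
except ConnectionError:
    print("bug: ConnectionError is raised instead")
```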
See:
- https://github.com/huggingface/dataset-viewer/issues/3025
- https://github.com/huggingface/huggingface_hub/issues/2457 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7109/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7108/comments | https://api.github.com/repos/huggingface/datasets/issues/7108/events | https://github.com/huggingface/datasets/issues/7108 | 2,470,665,327 | I_kwDODunzps6TQ1xv | 7,108 | website broken: Create a new dataset repository, doesn't create a new repo in Firefox | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"I don't reproduce, I was able to create a new repo: https://huggingface.co/datasets/severo/reproduce-datasets-issues-7108. Can you confirm it's still broken?",
"I have just tried again.\r\n\r\nFirefox: The `Create dataset` doesn't work. It has worked in the past. It's my preferred browser.\r\n\r\nChrome: The `Create dataset` works.\r\n\r\nIt seems to be a Firefox specific issue.",
"I have updated Firefox 129.0 (64 bit), and now the `Create dataset` is working again in Firefox.\r\n\r\nUX: It would be nice with better error messages on HuggingFace.",
"maybe an issue with the cookie. cc @Wauplin @coyotte508 "
] | "2024-08-16T17:23:00Z" | "2024-08-19T13:21:12Z" | "2024-08-19T06:52:48Z" | NONE | null | ### Describe the bug
This issue is also reported here:
https://discuss.huggingface.co/t/create-a-new-dataset-repository-broken-page/102644
This page is broken.
https://huggingface.co/new-dataset
I fill in the form with my text, and click `Create Dataset`.
![Screenshot 2024-08-16 at 15 55 37](https://github.com/user-attachments/assets/de16627b-7a55-4bcf-9f0b-a48227aabfe6)
Then the form gets wiped, no repo is created, and no error message appears in the developer console.
![Screenshot 2024-08-16 at 15 56 54](https://github.com/user-attachments/assets/0520164b-431c-40a5-9634-11fd62c4f4c3)
# Idea for improvement
For better UX, if the repo cannot be created, show an error message indicating that something went wrong.
# Workaround that works for me
```python
from huggingface_hub import HfApi

repo_id = 'simon-arc-solve-fractal-v3'
api = HfApi()
username = api.whoami()['name']  # sanity check that we are logged in
repo_url = api.create_repo(repo_id=repo_id, exist_ok=True, private=True, repo_type="dataset")
```
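Note: because of `exist_ok=True`, `create_repo` won't fail if the repo already exists, so the workaround is safe to re-run.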
### Steps to reproduce the bug
Go to https://huggingface.co/new-dataset
Fill in the form.
Click `Create dataset`.
Now the form is cleared and the page doesn't navigate anywhere.
### Expected behavior
The moment the user clicks `Create dataset`, the repo gets created and the page jumps to the created repo.
### Environment info
Firefox 128.0.3 (64-bit)
macOS Sonoma 14.5
| {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7108/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7108/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7107/comments | https://api.github.com/repos/huggingface/datasets/issues/7107/events | https://github.com/huggingface/datasets/issues/7107 | 2,470,444,732 | I_kwDODunzps6TP_68 | 7,107 | load_dataset broken in 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1911631?v=4",
"events_url": "https://api.github.com/users/anjor/events{/privacy}",
"followers_url": "https://api.github.com/users/anjor/followers",
"following_url": "https://api.github.com/users/anjor/following{/other_user}",
"gists_url": "https://api.github.com/users/anjor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anjor",
"id": 1911631,
"login": "anjor",
"node_id": "MDQ6VXNlcjE5MTE2MzE=",
"organizations_url": "https://api.github.com/users/anjor/orgs",
"received_events_url": "https://api.github.com/users/anjor/received_events",
"repos_url": "https://api.github.com/users/anjor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anjor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anjor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anjor",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"There seems to be a PR related to the load_dataset path that went into 2.21.0 -- https://github.com/huggingface/datasets/pull/6862/files\r\n\r\nTaking a look at it now",
"+1\r\n\r\nDowngrading to 2.20.0 fixed my issue, hopefully helpful for others.",
"I tried adding a simple test to `test_load.py` with the alpaca eval dataset but the test didn't fail :(. \r\n\r\nSo looks like this might have something to do with the environment? ",
"There was an issue with the script of the \"tatsu-lab/alpaca_eval\" dataset.\r\n\r\nI was fixed with this PR: \r\n- [Fix FileNotFoundError](https://huggingface.co/datasets/tatsu-lab/alpaca_eval/discussions/2)\r\n\r\nIt should work now if you retry to load the dataset."
] | "2024-08-16T14:59:51Z" | "2024-08-18T09:28:43Z" | "2024-08-18T09:27:12Z" | NONE | null | ### Describe the bug
`eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
used to work up to `datasets` 2.20.0 but doesn't work in 2.21.0
In 2.20.0:
![Screenshot 2024-08-16 at 3 57 10 PM](https://github.com/user-attachments/assets/0516489b-8187-486d-bee8-88af3381dee9)
in 2.21.0:
![Screenshot 2024-08-16 at 3 57 24 PM](https://github.com/user-attachments/assets/bc257570-f461-41e4-8717-90a69ed7c24f)
### Steps to reproduce the bug
1. Spin up a new google collab
2. `pip install datasets==2.21.0`
3. `import datasets`
4. `eval_set = datasets.load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval_gpt4_baseline", trust_remote_code=True)`
5. It will throw an error (a consolidated repro is sketched below).
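A consolidated, runnable version of steps 2-5 (the `FileNotFoundError` detail comes from the comments above; the same call succeeds with `datasets==2.20.0`):
```python
# pip install datasets==2.21.0
import datasets

# Fails on 2.21.0: the dataset script raised a FileNotFoundError, since fixed
# on the Hub.
eval_set = datasets.load_dataset(
    "tatsu-lab/alpaca_eval",
    "alpaca_eval_gpt4_baseline",
    trust_remote_code=True,
)
```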
### Expected behavior
Try steps 1-5 again, but replace the `datasets` version with 2.20.0; it will work.
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.23.5
- PyArrow version: 17.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2024.5.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7107/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7106/comments | https://api.github.com/repos/huggingface/datasets/issues/7106/events | https://github.com/huggingface/datasets/pull/7106 | 2,469,854,262 | PR_kwDODunzps54jntM | 7,106 | Rename LargeList.dtype to LargeList.feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7106). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005598 / 0.011353 (-0.005755) | 0.004327 / 0.011008 (-0.006681) | 0.063961 / 0.038508 (0.025453) | 0.031039 / 0.023109 (0.007930) | 0.245586 / 0.275898 (-0.030312) | 0.273765 / 0.323480 (-0.049715) | 0.003463 / 0.007986 (-0.004523) | 0.002871 / 0.004328 (-0.001457) | 0.049169 / 0.004250 (0.044918) | 0.049342 / 0.037052 (0.012290) | 0.259255 / 0.258489 (0.000766) | 0.295688 / 0.293841 (0.001847) | 0.029527 / 0.128546 (-0.099019) | 0.012507 / 0.075646 (-0.063139) | 0.209420 / 0.419271 (-0.209851) | 0.036666 / 0.043533 (-0.006866) | 0.272031 / 0.255139 (0.016892) | 0.272585 / 0.283200 (-0.010614) | 0.020004 / 0.141683 (-0.121679) | 1.158605 / 1.452155 (-0.293550) | 1.230930 / 1.492716 (-0.261787) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.109196 / 0.018006 (0.091189) | 0.377759 / 0.000490 (0.377270) | 0.000222 / 0.000200 (0.000022) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018961 / 0.037411 (-0.018450) | 0.063189 / 0.014526 (0.048663) | 0.075253 / 0.176557 (-0.101303) | 0.122912 / 0.737135 (-0.614223) | 0.077961 / 0.296338 (-0.218378) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.278425 / 0.215209 (0.063216) | 2.748336 / 2.077655 (0.670681) | 1.468410 / 1.504120 (-0.035710) | 1.347859 / 1.541195 (-0.193336) | 1.389175 / 
1.468490 (-0.079315) | 0.742833 / 4.584777 (-3.841943) | 2.358930 / 3.745712 (-1.386782) | 3.062720 / 5.269862 (-2.207141) | 1.912264 / 4.565676 (-2.653412) | 0.079263 / 0.424275 (-0.345012) | 0.005212 / 0.007607 (-0.002396) | 0.332482 / 0.226044 (0.106438) | 3.287045 / 2.268929 (1.018116) | 1.827862 / 55.444624 (-53.616762) | 1.525087 / 6.876477 (-5.351390) | 1.581742 / 2.142072 (-0.560330) | 0.791737 / 4.805227 (-4.013490) | 0.135774 / 6.500664 (-6.364890) | 0.043700 / 0.075469 (-0.031769) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.982104 / 1.841788 (-0.859683) | 12.227639 / 8.074308 (4.153331) | 9.492719 / 10.191392 (-0.698673) | 0.144792 / 0.680424 (-0.535632) | 0.014844 / 0.534201 (-0.519357) | 0.304919 / 0.579283 (-0.274364) | 0.262955 / 0.434364 (-0.171409) | 0.339517 / 0.540337 (-0.200821) | 0.430929 / 1.386936 (-0.956007) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005982 / 0.011353 (-0.005371) | 0.004199 / 0.011008 (-0.006809) | 0.050674 / 0.038508 (0.012166) | 0.032713 / 0.023109 (0.009604) | 0.270071 / 0.275898 (-0.005827) | 0.300469 / 0.323480 (-0.023011) | 0.005159 / 0.007986 (-0.002826) | 0.002961 / 0.004328 (-0.001368) | 0.048403 / 0.004250 (0.044152) | 0.042024 / 0.037052 (0.004971) | 0.288927 / 0.258489 (0.030438) | 0.321412 / 0.293841 (0.027571) | 0.032436 / 0.128546 (-0.096110) | 0.012472 / 0.075646 (-0.063175) | 0.060527 / 0.419271 (-0.358744) | 0.034222 / 0.043533 (-0.009311) | 0.276259 / 0.255139 (0.021120) | 0.293168 / 0.283200 (0.009969) | 0.019245 / 0.141683 (-0.122438) | 1.180766 / 1.452155 (-0.271388) | 1.220269 / 1.492716 (-0.272447) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.110082 / 0.018006 (0.092076) | 0.364221 / 0.000490 (0.363731) | 0.000221 / 0.000200 (0.000021) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022923 / 0.037411 (-0.014488) | 0.078022 / 0.014526 (0.063496) | 0.089543 / 0.176557 (-0.087013) | 0.129855 / 0.737135 (-0.607280) | 0.090891 / 0.296338 (-0.205448) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304169 / 0.215209 (0.088960) | 2.969772 / 2.077655 (0.892117) | 1.582647 / 1.504120 (0.078527) | 1.464446 / 1.541195 (-0.076749) | 1.485422 / 1.468490 (0.016932) | 0.720105 / 4.584777 (-3.864672) | 0.966730 / 3.745712 (-2.778982) | 3.017549 / 5.269862 (-2.252313) | 1.924574 / 4.565676 (-2.641103) | 0.079938 / 0.424275 (-0.344337) | 0.005684 / 0.007607 (-0.001923) | 0.364093 / 0.226044 (0.138048) | 3.569470 / 2.268929 (1.300541) | 1.956535 / 55.444624 (-53.488089) | 1.669432 / 6.876477 (-5.207045) | 1.687596 / 2.142072 (-0.454476) | 0.802725 / 4.805227 (-4.002502) | 0.132874 / 6.500664 (-6.367790) | 0.041403 / 0.075469 (-0.034067) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.033317 / 1.841788 (-0.808471) | 12.590652 / 8.074308 (4.516344) | 10.618609 / 10.191392 (0.427217) | 0.131833 / 0.680424 (-0.548591) | 0.015675 / 0.534201 (-0.518526) | 0.300804 / 0.579283 (-0.278479) | 0.127253 / 0.434364 (-0.307111) | 0.342559 / 0.540337 (-0.197779) | 0.464302 / 1.386936 (-0.922634) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#88f646c418b408ace2494c02b9502f516a565e2b \"CML watermark\")\n"
] | "2024-08-16T09:12:04Z" | "2024-08-26T04:31:59Z" | "2024-08-26T04:26:02Z" | MEMBER | null | Rename `LargeList.dtype` to `LargeList.feature`.
Note that `dtype` is usually used for NumPy data types ("int64", "float32",...): see `Value.dtype`.
However, the corresponding `LargeList` attribute (like `Sequence.feature`) expects a `FeatureType` instead.
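For example, a short sketch assuming the renamed attribute (and that `LargeList` is importable from the top-level `datasets` namespace alongside `Sequence`):
```python
from datasets import Features, LargeList, Sequence, Value

# After the rename, both containers are declared the same way: they take a
# feature type (e.g. Value("float32")), not a NumPy-style dtype string.
features = Features({
    "scores": LargeList(Value("float32")),  # backed by pyarrow's large_list
    "tokens": Sequence(Value("string")),
})
```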
With this renaming:
- we avoid confusion about the expected type and
- we also align `LargeList` with `Sequence`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7106/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7106",
"merged_at": "2024-08-26T04:26:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7106"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7105/comments | https://api.github.com/repos/huggingface/datasets/issues/7105/events | https://github.com/huggingface/datasets/pull/7105 | 2,468,207,039 | PR_kwDODunzps54eZ0D | 7,105 | Use `huggingface_hub` cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7105). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"Nice\r\n\r\n<img width=\"141\" alt=\"Capture d’écran 2024-08-19 à 15 25 00\" src=\"https://github.com/user-attachments/assets/18c7b3ec-a57e-45d7-9b19-0b12df9feccd\">\r\n",
"fyi the CI failure on test_py310_numpy2 is unrelated to this PR (it's a dependency install failure)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005677 / 0.011353 (-0.005676) | 0.004054 / 0.011008 (-0.006954) | 0.063101 / 0.038508 (0.024592) | 0.031665 / 0.023109 (0.008556) | 0.243332 / 0.275898 (-0.032566) | 0.271067 / 0.323480 (-0.052413) | 0.004283 / 0.007986 (-0.003703) | 0.002889 / 0.004328 (-0.001440) | 0.049269 / 0.004250 (0.045018) | 0.048707 / 0.037052 (0.011654) | 0.258599 / 0.258489 (0.000110) | 0.307715 / 0.293841 (0.013874) | 0.029850 / 0.128546 (-0.098696) | 0.012299 / 0.075646 (-0.063347) | 0.207616 / 0.419271 (-0.211656) | 0.037655 / 0.043533 (-0.005878) | 0.246602 / 0.255139 (-0.008537) | 0.268518 / 0.283200 (-0.014682) | 0.018128 / 0.141683 (-0.123555) | 1.181569 / 1.452155 (-0.270586) | 1.250641 / 1.492716 (-0.242075) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.143911 / 0.018006 (0.125905) | 0.305608 / 0.000490 (0.305118) | 0.000250 / 0.000200 (0.000050) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019208 / 0.037411 (-0.018204) | 0.062502 / 0.014526 (0.047976) | 0.075896 / 0.176557 (-0.100661) | 0.123422 / 0.737135 (-0.613713) | 0.077311 / 0.296338 (-0.219028) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283108 / 0.215209 (0.067899) | 2.783509 / 2.077655 (0.705855) | 1.466358 / 1.504120 (-0.037762) | 1.350989 / 1.541195 (-0.190206) | 1.370517 / 
1.468490 (-0.097973) | 0.732706 / 4.584777 (-3.852071) | 2.366710 / 3.745712 (-1.379002) | 2.988913 / 5.269862 (-2.280949) | 1.892204 / 4.565676 (-2.673473) | 0.079077 / 0.424275 (-0.345198) | 0.005158 / 0.007607 (-0.002449) | 0.336620 / 0.226044 (0.110576) | 3.423556 / 2.268929 (1.154628) | 1.848732 / 55.444624 (-53.595892) | 1.544996 / 6.876477 (-5.331480) | 1.550051 / 2.142072 (-0.592022) | 0.798235 / 4.805227 (-4.006993) | 0.132945 / 6.500664 (-6.367719) | 0.041785 / 0.075469 (-0.033684) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.963359 / 1.841788 (-0.878429) | 11.699994 / 8.074308 (3.625686) | 9.311998 / 10.191392 (-0.879394) | 0.140493 / 0.680424 (-0.539931) | 0.013834 / 0.534201 (-0.520367) | 0.302569 / 0.579283 (-0.276714) | 0.267377 / 0.434364 (-0.166987) | 0.341093 / 0.540337 (-0.199244) | 0.431941 / 1.386936 (-0.954995) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005744 / 0.011353 (-0.005608) | 0.003668 / 0.011008 (-0.007340) | 0.049837 / 0.038508 (0.011329) | 0.032051 / 0.023109 (0.008941) | 0.271725 / 0.275898 (-0.004173) | 0.302612 / 0.323480 (-0.020867) | 0.004455 / 0.007986 (-0.003531) | 0.002816 / 0.004328 (-0.001512) | 0.049036 / 0.004250 (0.044785) | 0.041233 / 0.037052 (0.004181) | 0.287900 / 0.258489 (0.029411) | 0.326204 / 0.293841 (0.032363) | 0.032027 / 0.128546 (-0.096519) | 0.012033 / 0.075646 (-0.063613) | 0.060822 / 0.419271 (-0.358449) | 0.033830 / 0.043533 (-0.009703) | 0.274855 / 0.255139 (0.019716) | 0.294191 / 0.283200 (0.010992) | 0.017979 / 0.141683 (-0.123704) | 1.151353 / 1.452155 (-0.300801) | 1.215384 / 1.492716 (-0.277333) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.102552 / 0.018006 (0.084546) | 0.314148 / 0.000490 (0.313658) | 0.000217 / 0.000200 (0.000017) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024565 / 0.037411 (-0.012846) | 0.076968 / 0.014526 (0.062442) | 0.087982 / 0.176557 (-0.088574) | 0.129844 / 0.737135 (-0.607292) | 0.091370 / 0.296338 (-0.204968) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.296767 / 0.215209 (0.081558) | 2.910716 / 2.077655 (0.833062) | 1.579526 / 1.504120 (0.075406) | 1.453457 / 1.541195 (-0.087737) | 1.466296 / 1.468490 (-0.002194) | 0.728372 / 4.584777 (-3.856405) | 0.963852 / 3.745712 (-2.781861) | 2.946582 / 5.269862 (-2.323280) | 1.936199 / 4.565676 (-2.629478) | 0.078886 / 0.424275 (-0.345389) | 0.005537 / 0.007607 (-0.002071) | 0.346315 / 0.226044 (0.120270) | 3.440774 / 2.268929 (1.171845) | 1.937549 / 55.444624 (-53.507076) | 1.649507 / 6.876477 (-5.226970) | 1.653386 / 2.142072 (-0.488686) | 0.806598 / 4.805227 (-3.998629) | 0.133384 / 6.500664 (-6.367280) | 0.040552 / 0.075469 (-0.034917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030515 / 1.841788 (-0.811272) | 12.129888 / 8.074308 (4.055580) | 10.287069 / 10.191392 (0.095677) | 0.141512 / 0.680424 (-0.538912) | 0.015483 / 0.534201 (-0.518718) | 0.300053 / 0.579283 (-0.279230) | 0.120825 / 0.434364 (-0.313539) | 0.342681 / 0.540337 (-0.197656) | 0.470616 / 1.386936 (-0.916320) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#28780197dd3e4c125defae29ac8ef5346c41350a \"CML watermark\")\n",
"yay! is this in a shipped release?",
"we can do one in the coming days once @albertvillanova is back",
"We have made a release and this feature is now included."
] | "2024-08-15T14:45:22Z" | "2024-09-12T04:36:08Z" | "2024-08-21T15:47:16Z" | MEMBER | null | - use `hf_hub_download()` from `huggingface_hub` for HF files
- `datasets` cache_dir is still used for:
- caching datasets as Arrow files (that back `Dataset` objects)
- extracted archives, uncompressed files
- files downloaded via http (datasets with scripts)
- I removed the code that was made for http files (and also the dummy_data / mock_download_manager stuff that happened to rely on it and has been legacy for a while now) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7105/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7105/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7105.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7105",
"merged_at": "2024-08-21T15:47:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7105.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7105"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7104/comments | https://api.github.com/repos/huggingface/datasets/issues/7104/events | https://github.com/huggingface/datasets/pull/7104 | 2,467,788,212 | PR_kwDODunzps54dAhE | 7,104 | remove more script docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7104). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005343 / 0.011353 (-0.006010) | 0.003562 / 0.011008 (-0.007447) | 0.062785 / 0.038508 (0.024277) | 0.031459 / 0.023109 (0.008349) | 0.246497 / 0.275898 (-0.029401) | 0.268258 / 0.323480 (-0.055222) | 0.003201 / 0.007986 (-0.004785) | 0.004153 / 0.004328 (-0.000175) | 0.049003 / 0.004250 (0.044753) | 0.042780 / 0.037052 (0.005728) | 0.263857 / 0.258489 (0.005368) | 0.278578 / 0.293841 (-0.015263) | 0.030357 / 0.128546 (-0.098190) | 0.012341 / 0.075646 (-0.063305) | 0.206010 / 0.419271 (-0.213262) | 0.036244 / 0.043533 (-0.007289) | 0.245799 / 0.255139 (-0.009340) | 0.265467 / 0.283200 (-0.017733) | 0.019473 / 0.141683 (-0.122210) | 1.147913 / 1.452155 (-0.304242) | 1.209968 / 1.492716 (-0.282749) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.099393 / 0.018006 (0.081387) | 0.300898 / 0.000490 (0.300408) | 0.000258 / 0.000200 (0.000058) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018888 / 0.037411 (-0.018523) | 0.062452 / 0.014526 (0.047926) | 0.073799 / 0.176557 (-0.102757) | 0.121297 / 0.737135 (-0.615839) | 0.074855 / 0.296338 (-0.221484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283969 / 0.215209 (0.068760) | 2.808820 / 2.077655 (0.731165) | 1.446106 / 1.504120 (-0.058014) | 1.321622 / 1.541195 (-0.219573) | 1.348317 / 
1.468490 (-0.120173) | 0.738369 / 4.584777 (-3.846408) | 2.349825 / 3.745712 (-1.395887) | 2.913964 / 5.269862 (-2.355897) | 1.870585 / 4.565676 (-2.695092) | 0.080141 / 0.424275 (-0.344134) | 0.005174 / 0.007607 (-0.002433) | 0.335977 / 0.226044 (0.109933) | 3.356267 / 2.268929 (1.087338) | 1.811149 / 55.444624 (-53.633475) | 1.510685 / 6.876477 (-5.365792) | 1.524960 / 2.142072 (-0.617112) | 0.803900 / 4.805227 (-4.001328) | 0.138294 / 6.500664 (-6.362370) | 0.042241 / 0.075469 (-0.033229) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.975597 / 1.841788 (-0.866191) | 11.395109 / 8.074308 (3.320801) | 9.837724 / 10.191392 (-0.353668) | 0.141474 / 0.680424 (-0.538950) | 0.015075 / 0.534201 (-0.519126) | 0.304285 / 0.579283 (-0.274998) | 0.267845 / 0.434364 (-0.166519) | 0.342808 / 0.540337 (-0.197529) | 0.434299 / 1.386936 (-0.952637) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005612 / 0.011353 (-0.005741) | 0.003808 / 0.011008 (-0.007201) | 0.050533 / 0.038508 (0.012024) | 0.032635 / 0.023109 (0.009526) | 0.265522 / 0.275898 (-0.010376) | 0.289763 / 0.323480 (-0.033716) | 0.004395 / 0.007986 (-0.003590) | 0.002868 / 0.004328 (-0.001460) | 0.048443 / 0.004250 (0.044193) | 0.040047 / 0.037052 (0.002995) | 0.279013 / 0.258489 (0.020524) | 0.314499 / 0.293841 (0.020658) | 0.032321 / 0.128546 (-0.096225) | 0.011902 / 0.075646 (-0.063744) | 0.059827 / 0.419271 (-0.359445) | 0.034388 / 0.043533 (-0.009145) | 0.270660 / 0.255139 (0.015521) | 0.290776 / 0.283200 (0.007576) | 0.017875 / 0.141683 (-0.123808) | 1.188085 / 1.452155 (-0.264070) | 1.221384 / 1.492716 (-0.271332) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095619 / 0.018006 (0.077613) | 0.305331 / 0.000490 (0.304841) | 0.000217 / 0.000200 (0.000018) | 0.000049 / 0.000054 (-0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022481 / 0.037411 (-0.014930) | 0.076957 / 0.014526 (0.062431) | 0.087830 / 0.176557 (-0.088726) | 0.128290 / 0.737135 (-0.608845) | 0.090565 / 0.296338 (-0.205774) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291861 / 0.215209 (0.076652) | 2.869776 / 2.077655 (0.792121) | 1.575114 / 1.504120 (0.070994) | 1.449873 / 1.541195 (-0.091322) | 1.450333 / 1.468490 (-0.018158) | 0.723319 / 4.584777 (-3.861458) | 0.972603 / 3.745712 (-2.773109) | 2.940909 / 5.269862 (-2.328953) | 1.889664 / 4.565676 (-2.676012) | 0.078654 / 0.424275 (-0.345621) | 0.005197 / 0.007607 (-0.002410) | 0.344380 / 0.226044 (0.118336) | 3.387509 / 2.268929 (1.118580) | 1.981590 / 55.444624 (-53.463034) | 1.643214 / 6.876477 (-5.233263) | 1.640435 / 2.142072 (-0.501638) | 0.802037 / 4.805227 (-4.003191) | 0.133016 / 6.500664 (-6.367648) | 0.040861 / 0.075469 (-0.034608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.026372 / 1.841788 (-0.815416) | 11.959931 / 8.074308 (3.885623) | 10.122523 / 10.191392 (-0.068869) | 0.144443 / 0.680424 (-0.535981) | 0.015629 / 0.534201 (-0.518572) | 0.304802 / 0.579283 (-0.274481) | 0.120538 / 0.434364 (-0.313826) | 0.343394 / 0.540337 (-0.196943) | 0.437544 / 1.386936 (-0.949392) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84832c07f614e5f51a762166b2fa9ac27e988173 \"CML watermark\")\n"
] | "2024-08-15T10:13:26Z" | "2024-08-15T10:24:13Z" | "2024-08-15T10:18:25Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7104/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7104.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7104",
"merged_at": "2024-08-15T10:18:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7104.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7104"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7103/comments | https://api.github.com/repos/huggingface/datasets/issues/7103/events | https://github.com/huggingface/datasets/pull/7103 | 2,467,664,581 | PR_kwDODunzps54clrp | 7,103 | Fix args of feature docstrings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7103). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005255 / 0.011353 (-0.006098) | 0.003344 / 0.011008 (-0.007664) | 0.062062 / 0.038508 (0.023554) | 0.030154 / 0.023109 (0.007045) | 0.233728 / 0.275898 (-0.042170) | 0.258799 / 0.323480 (-0.064681) | 0.004105 / 0.007986 (-0.003880) | 0.002708 / 0.004328 (-0.001621) | 0.048689 / 0.004250 (0.044439) | 0.041864 / 0.037052 (0.004812) | 0.247221 / 0.258489 (-0.011268) | 0.274067 / 0.293841 (-0.019774) | 0.029108 / 0.128546 (-0.099439) | 0.011867 / 0.075646 (-0.063779) | 0.203181 / 0.419271 (-0.216090) | 0.035162 / 0.043533 (-0.008371) | 0.239723 / 0.255139 (-0.015416) | 0.256679 / 0.283200 (-0.026521) | 0.018362 / 0.141683 (-0.123321) | 1.139974 / 1.452155 (-0.312181) | 1.193946 / 1.492716 (-0.298770) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.135477 / 0.018006 (0.117471) | 0.298500 / 0.000490 (0.298011) | 0.000225 / 0.000200 (0.000025) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018743 / 0.037411 (-0.018668) | 0.062999 / 0.014526 (0.048474) | 0.073466 / 0.176557 (-0.103090) | 0.119227 / 0.737135 (-0.617908) | 0.074338 / 0.296338 (-0.222000) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280747 / 0.215209 (0.065538) | 2.750660 / 2.077655 (0.673006) | 1.461004 / 1.504120 (-0.043116) | 1.348439 / 1.541195 (-0.192756) | 1.365209 / 
1.468490 (-0.103281) | 0.718416 / 4.584777 (-3.866361) | 2.333568 / 3.745712 (-1.412144) | 2.854639 / 5.269862 (-2.415223) | 1.821144 / 4.565676 (-2.744532) | 0.077234 / 0.424275 (-0.347041) | 0.005111 / 0.007607 (-0.002497) | 0.330749 / 0.226044 (0.104705) | 3.277189 / 2.268929 (1.008260) | 1.825886 / 55.444624 (-53.618739) | 1.515078 / 6.876477 (-5.361399) | 1.527288 / 2.142072 (-0.614785) | 0.786922 / 4.805227 (-4.018305) | 0.131539 / 6.500664 (-6.369125) | 0.042365 / 0.075469 (-0.033104) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961809 / 1.841788 (-0.879979) | 11.184540 / 8.074308 (3.110232) | 9.473338 / 10.191392 (-0.718054) | 0.138460 / 0.680424 (-0.541964) | 0.014588 / 0.534201 (-0.519613) | 0.301503 / 0.579283 (-0.277780) | 0.261092 / 0.434364 (-0.173271) | 0.336480 / 0.540337 (-0.203857) | 0.427665 / 1.386936 (-0.959271) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005517 / 0.011353 (-0.005836) | 0.003417 / 0.011008 (-0.007591) | 0.049338 / 0.038508 (0.010830) | 0.033411 / 0.023109 (0.010302) | 0.264328 / 0.275898 (-0.011570) | 0.286750 / 0.323480 (-0.036730) | 0.004299 / 0.007986 (-0.003686) | 0.002506 / 0.004328 (-0.001823) | 0.049511 / 0.004250 (0.045260) | 0.041471 / 0.037052 (0.004418) | 0.276732 / 0.258489 (0.018243) | 0.311908 / 0.293841 (0.018067) | 0.031683 / 0.128546 (-0.096863) | 0.011700 / 0.075646 (-0.063946) | 0.060084 / 0.419271 (-0.359188) | 0.037757 / 0.043533 (-0.005776) | 0.265342 / 0.255139 (0.010203) | 0.287782 / 0.283200 (0.004583) | 0.018692 / 0.141683 (-0.122990) | 1.163462 / 1.452155 (-0.288692) | 1.219236 / 1.492716 (-0.273481) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094102 / 0.018006 (0.076096) | 0.303976 / 0.000490 (0.303487) | 0.000208 / 0.000200 (0.000008) | 0.000042 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023252 / 0.037411 (-0.014160) | 0.076986 / 0.014526 (0.062461) | 0.088831 / 0.176557 (-0.087726) | 0.128661 / 0.737135 (-0.608475) | 0.089082 / 0.296338 (-0.207256) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.297428 / 0.215209 (0.082218) | 2.951568 / 2.077655 (0.873913) | 1.597627 / 1.504120 (0.093508) | 1.466556 / 1.541195 (-0.074639) | 1.455522 / 1.468490 (-0.012968) | 0.723576 / 4.584777 (-3.861201) | 0.951113 / 3.745712 (-2.794599) | 2.889671 / 5.269862 (-2.380190) | 1.877330 / 4.565676 (-2.688347) | 0.079124 / 0.424275 (-0.345151) | 0.005146 / 0.007607 (-0.002461) | 0.344063 / 0.226044 (0.118018) | 3.432190 / 2.268929 (1.163261) | 1.927049 / 55.444624 (-53.517576) | 1.638552 / 6.876477 (-5.237924) | 1.647791 / 2.142072 (-0.494282) | 0.800526 / 4.805227 (-4.004701) | 0.131858 / 6.500664 (-6.368806) | 0.040852 / 0.075469 (-0.034618) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.025536 / 1.841788 (-0.816252) | 11.798302 / 8.074308 (3.723994) | 10.012051 / 10.191392 (-0.179341) | 0.137701 / 0.680424 (-0.542723) | 0.015151 / 0.534201 (-0.519050) | 0.298972 / 0.579283 (-0.280311) | 0.123816 / 0.434364 (-0.310548) | 0.337292 / 0.540337 (-0.203046) | 0.432729 / 1.386936 (-0.954207) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#bececdac927160b5c7e883736d7cc79d5699ad0a \"CML watermark\")\n"
] | "2024-08-15T08:46:08Z" | "2024-08-16T09:18:29Z" | "2024-08-15T10:33:30Z" | MEMBER | null | Fix Args section of feature docstrings.
Currently, some args do not appear in the rendered docs because their type (between parentheses) is missing, so the docstring parser cannot pick them up. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7103/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7103",
"merged_at": "2024-08-15T10:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7103"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7102/comments | https://api.github.com/repos/huggingface/datasets/issues/7102/events | https://github.com/huggingface/datasets/issues/7102 | 2,466,893,106 | I_kwDODunzps6TCc0y | 7,102 | Slow iteration speeds when using IterableDataset.shuffle with load_dataset(data_files=..., streaming=True) | {
"avatar_url": "https://avatars.githubusercontent.com/u/13192126?v=4",
"events_url": "https://api.github.com/users/lajd/events{/privacy}",
"followers_url": "https://api.github.com/users/lajd/followers",
"following_url": "https://api.github.com/users/lajd/following{/other_user}",
"gists_url": "https://api.github.com/users/lajd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lajd",
"id": 13192126,
"login": "lajd",
"node_id": "MDQ6VXNlcjEzMTkyMTI2",
"organizations_url": "https://api.github.com/users/lajd/orgs",
"received_events_url": "https://api.github.com/users/lajd/received_events",
"repos_url": "https://api.github.com/users/lajd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lajd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lajd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lajd",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Hi @lajd , I was skeptical about how we are saving the shards each as their own dataset (arrow file) in the script above, and so I updated the script to try out saving the shards in a few different file formats. From the experiments I ran, I saw binary format show significantly the best performance, with arrow and parquet about the same. However, I was unable to reproduce a drastically slower iteration speed after shuffling in any case when using the revised script -- pasting below:\r\n\r\n```python\r\nimport time\r\nfrom datasets import load_dataset, Dataset, IterableDataset\r\nfrom pathlib import Path\r\nimport torch\r\nimport pandas as pd\r\nimport pickle\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\n\r\ndef generate_random_example():\r\n return {\r\n 'inputs': torch.randn(128).tolist(),\r\n 'indices': torch.randint(0, 10000, (2, 20000)).tolist(),\r\n 'values': torch.randn(20000).tolist(),\r\n }\r\n\r\n\r\ndef generate_shard_data(examples_per_shard: int = 512):\r\n return [generate_random_example() for _ in range(examples_per_shard)]\r\n\r\n\r\ndef save_shard_as_arrow(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a Hugging Face Dataset\r\n dataset = Dataset.from_dict({\r\n 'inputs': [example['inputs'] for example in shard_data],\r\n 'indices': [example['indices'] for example in shard_data],\r\n 'values': [example['values'] for example in shard_data],\r\n })\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}\"\r\n\r\n # Save the dataset to disk using the Arrow format\r\n dataset.save_to_disk(str(shard_write_path))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_parquet(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Convert data to a pandas DataFrame for easy conversion to Parquet\r\n df = pd.DataFrame(shard_data)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.parquet\"\r\n\r\n # Convert DataFrame to PyArrow Table for Parquet saving\r\n table = pa.Table.from_pandas(df)\r\n\r\n # Save the table as a Parquet file\r\n pq.write_table(table, shard_write_path)\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef save_shard_as_binary(shard_idx, save_dir, examples_per_shard):\r\n # Generate shard data\r\n shard_data = generate_shard_data(examples_per_shard)\r\n\r\n # Define the shard save path\r\n shard_write_path = Path(save_dir) / f\"shard_{shard_idx}.bin\"\r\n\r\n # Save each example as a serialized binary object using pickle\r\n with open(shard_write_path, 'wb') as f:\r\n for example in shard_data:\r\n f.write(pickle.dumps(example))\r\n\r\n return str(shard_write_path)\r\n\r\n\r\ndef generate_split_shards(save_dir, filetype=\"parquet\", num_shards: int = 16, examples_per_shard: int = 512):\r\n shard_filepaths = []\r\n for shard_idx in range(num_shards):\r\n if filetype == \"parquet\":\r\n shard_filepaths.append(save_shard_as_parquet(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"binary\":\r\n shard_filepaths.append(save_shard_as_binary(shard_idx, save_dir, examples_per_shard))\r\n elif filetype == \"arrow\":\r\n shard_filepaths.append(save_shard_as_arrow(shard_idx, save_dir, examples_per_shard))\r\n else:\r\n raise ValueError(f\"Unsupported filetype: {filetype}. 
Choose either 'parquet' or 'binary'.\")\r\n return shard_filepaths\r\n\r\n\r\ndef _binary_dataset_generator(files):\r\n for filepath in files:\r\n with open(filepath, 'rb') as f:\r\n while True:\r\n try:\r\n example = pickle.load(f)\r\n yield example\r\n except EOFError:\r\n break\r\n\r\n\r\ndef load_binary_dataset(shard_filepaths):\r\n return IterableDataset.from_generator(\r\n _binary_dataset_generator, gen_kwargs={\"files\": shard_filepaths},\r\n )\r\n\r\n\r\ndef load_parquet_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n return load_dataset(\r\n \"parquet\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_arrow_dataset(shard_filepaths):\r\n # Load the dataset as an IterableDataset\r\n shard_filepaths = [f + \"/data-00000-of-00001.arrow\" for f in shard_filepaths]\r\n return load_dataset(\r\n \"arrow\",\r\n data_files={split: shard_filepaths},\r\n streaming=True,\r\n split=split,\r\n )\r\n\r\n\r\ndef load_dataset_wrapper(filetype: str, shard_filepaths: list[str]):\r\n if filetype == \"parquet\":\r\n return load_parquet_dataset(shard_filepaths)\r\n if filetype == \"binary\":\r\n return load_binary_dataset(shard_filepaths)\r\n if filetype == \"arrow\":\r\n return load_arrow_dataset(shard_filepaths)\r\n else:\r\n raise ValueError(\"Unsupported filetype\")\r\n\r\n\r\n# Example usage:\r\nsplit = \"train\"\r\nsplit_save_dir = \"/tmp/random_split\"\r\n\r\nfiletype = \"binary\" # or \"parquet\", or \"arrow\"\r\nnum_shards = 16\r\n\r\nshard_filepaths = generate_split_shards(split_save_dir, filetype=filetype, num_shards=num_shards)\r\ndataset = load_dataset_wrapper(filetype=filetype, shard_filepaths=shard_filepaths)\r\n\r\ndataset = dataset.shuffle(buffer_size=100, seed=42)\r\n\r\nstart_time = time.time()\r\nfor count, item in enumerate(dataset):\r\n if count > 0 and count % 100 == 0:\r\n elapsed_time = time.time() - start_time\r\n iterations_per_second = count / elapsed_time\r\n print(f\"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second\")\r\n```",
"update: I was able to reproduce the issue you described -- but ONLY if I do \r\n\r\n```\r\nrandom_dataset = random_dataset.with_format(\"numpy\")\r\n```\r\n\r\nIf I do this, I see similar numbers as what you reported. If I do not use numpy format, parquet and arrow are about 17 iterations per second regardless of whether or not we shuffle. Using binary, (again no numpy format tried with this yet), still shows the fastest speeds on average (shuffle and no shuffle) of about 850 it/sec.\r\n\r\nI suspect some issues with arrow and numpy being optimized for sequential reads, and shuffling cuases issuses... hmm"
] | "2024-08-14T21:44:44Z" | "2024-08-15T16:17:31Z" | null | NONE | null | ### Describe the bug
When I load a dataset from a number of arrow files, as in:
```python
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
```
I'm able to get fast iteration speeds when iterating over the dataset without shuffling.
When I shuffle the dataset, the iteration speed is reduced by ~1000x.
It's very possible the way I'm loading dataset shards is not appropriate; if so please advise!
Thanks for the help
### Steps to reproduce the bug
Here's the full code to reproduce the issue:
- Generate a random dataset
- Create shards of data independently using Dataset.save_to_disk()
- The code below generates 16 shards (Arrow files) of 512 examples each
```python
import time
from pathlib import Path
from multiprocessing import Pool, cpu_count
import torch
from datasets import Dataset, load_dataset
split = "train"
split_save_dir = "/tmp/random_split"
def generate_random_example():
return {
'inputs': torch.randn(128).tolist(),
'indices': torch.randint(0, 10000, (2, 20000)).tolist(),
'values': torch.randn(20000).tolist(),
}
def generate_shard_dataset(examples_per_shard: int = 512):
dataset_dict = {
'inputs': [],
'indices': [],
'values': []
}
for _ in range(examples_per_shard):
example = generate_random_example()
dataset_dict['inputs'].append(example['inputs'])
dataset_dict['indices'].append(example['indices'])
dataset_dict['values'].append(example['values'])
return Dataset.from_dict(dataset_dict)
def save_shard(shard_idx, save_dir, examples_per_shard):
shard_dataset = generate_shard_dataset(examples_per_shard)
shard_write_path = Path(save_dir) / f"shard_{shard_idx}"
shard_dataset.save_to_disk(shard_write_path)
return str(Path(shard_write_path) / "data-00000-of-00001.arrow")
def generate_split_shards(save_dir, num_shards: int = 16, examples_per_shard: int = 512):
with Pool(cpu_count()) as pool:
args = [(m, save_dir, examples_per_shard) for m in range(num_shards)]
shard_filepaths = pool.starmap(save_shard, args)
return shard_filepaths
shard_filepaths = generate_split_shards(split_save_dir)
```
Load the dataset as IterableDataset:
```python
random_dataset = load_dataset(
"arrow",
data_files={split: shard_filepaths},
streaming=True,
split=split,
)
random_dataset = random_dataset.with_format("numpy")
```
Observe the iterations/second when iterating over the dataset directly, and then when applying shuffling before iterating:
Without shuffling, this gives ~1500 iterations/second
```python
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 705.74 iterations/second
Processed 200 items at an average of 1169.68 iterations/second
Processed 300 items at an average of 1497.97 iterations/second
Processed 400 items at an average of 1739.62 iterations/second
Processed 500 items at an average of 1931.11 iterations/second
```
When shuffling, this gives ~3 iterations/second:
```python
random_dataset = random_dataset.shuffle(buffer_size=100, seed=42)
start_time = time.time()
for count, item in enumerate(random_dataset):
if count > 0 and count % 100 == 0:
elapsed_time = time.time() - start_time
iterations_per_second = count / elapsed_time
print(f"Processed {count} items at an average of {iterations_per_second:.2f} iterations/second")
```
```
Processed 100 items at an average of 3.75 iterations/second
Processed 200 items at an average of 3.93 iterations/second
```
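For reference, a possible workaround sketch (my assumption, not a confirmed fix): shuffle the shard order up front instead of relying on the example buffer, so reads stay sequential within each Arrow file while shard order is still randomized:
```python
# Hedged workaround sketch: shard-level shuffling only. Assumes shard-level
# randomness is acceptable for this use case; reads remain sequential within
# each Arrow file.
import random

from datasets import load_dataset

random.seed(42)
shuffled_filepaths = list(shard_filepaths)
random.shuffle(shuffled_filepaths)

random_dataset = load_dataset(
    "arrow",
    data_files={split: shuffled_filepaths},
    streaming=True,
    split=split,
)
random_dataset = random_dataset.with_format("numpy")
```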
### Expected behavior
Iterations per second should be barely affected by shuffling, especially with a small buffer size
### Environment info
Datasets version: 2.21.0
Python 3.10
Ubuntu 22.04 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7102/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7101/comments | https://api.github.com/repos/huggingface/datasets/issues/7101/events | https://github.com/huggingface/datasets/issues/7101 | 2,466,510,783 | I_kwDODunzps6TA_e_ | 7,101 | `load_dataset` from Hub with `name` to specify `config` using incorrect builder type when multiple data formats are present | {
"avatar_url": "https://avatars.githubusercontent.com/u/106811348?v=4",
"events_url": "https://api.github.com/users/hlky/events{/privacy}",
"followers_url": "https://api.github.com/users/hlky/followers",
"following_url": "https://api.github.com/users/hlky/following{/other_user}",
"gists_url": "https://api.github.com/users/hlky/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hlky",
"id": 106811348,
"login": "hlky",
"node_id": "U_kgDOBl3P1A",
"organizations_url": "https://api.github.com/users/hlky/orgs",
"received_events_url": "https://api.github.com/users/hlky/received_events",
"repos_url": "https://api.github.com/users/hlky/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hlky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hlky/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hlky",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Having looked into this further it seems the core of the issue is with two different formats in the same repo.\r\n\r\nWhen the `parquet` config is first, the `WebDataset`s are loaded as `parquet`, if the `WebDataset` configs are first, the `parquet` is loaded as `WebDataset`.\r\n\r\nA workaround in my case would be to just turn the `parquet` into a `WebDataset`, although I'd still need the Dataset Viewer config limit increasing. In other cases using the same format may not be possible.\r\n\r\nRelevant code: \r\n- [HubDatasetModuleFactoryWithoutScript](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/load.py#L964)\r\n- [get_data_patterns](https://github.com/huggingface/datasets/blob/5f42139a2c5583a55d34a2f60d537f5fba285c28/src/datasets/data_files.py#L415)"
] | "2024-08-14T18:12:25Z" | "2024-08-18T10:33:38Z" | null | NONE | null | Following [documentation](https://huggingface.co/docs/datasets/repository_structure#define-your-splits-and-subsets-in-yaml) I had defined different configs for [`Dataception`](https://huggingface.co/datasets/bigdata-pw/Dataception), a dataset of datasets:
```yaml
configs:
- config_name: dataception
data_files:
- path: dataception.parquet
split: train
default: true
- config_name: dataset_5423
data_files:
- path: datasets/5423.tar
split: train
...
- config_name: dataset_721736
data_files:
- path: datasets/721736.tar
split: train
```
The intent was for metadata to be browsable via Dataset Viewer, in addition to each individual dataset, and to allow datasets to be loaded by specifying the config/name to `load_dataset`.
While testing `load_dataset` I encountered the following error:
```python
>>> dataset = load_dataset("bigdata-pw/Dataception", "dataset_7691")
Downloading readme: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 467k/467k [00:00<00:00, 1.99MB/s]
Downloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 71.0M/71.0M [00:02<00:00, 26.8MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "datasets\load.py", line 2145, in load_dataset
builder_instance.download_and_prepare(
File "datasets\builder.py", line 1027, in download_and_prepare
self._download_and_prepare(
File "datasets\builder.py", line 1100, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "datasets\packaged_modules\parquet\parquet.py", line 58, in _split_generators
self.info.features = datasets.Features.from_arrow_schema(pq.read_schema(f))
^^^^^^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 2325, in read_schema
file = ParquetFile(
^^^^^^^^^^^^
File "pyarrow\parquet\core.py", line 318, in __init__
self.reader.open(
File "pyarrow\_parquet.pyx", line 1470, in pyarrow._parquet.ParquetReader.open
File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
The correct file is downloaded; however, the incorrect builder type, `parquet`, is detected because of other content in the repository. It would appear that the selected config needs to be taken into account when inferring the builder type.
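As a stopgap, here is a hedged sketch of what I would try (the builder name and the `hf://` path are my assumptions from the repo layout above, not verified): name the builder explicitly so that repo-level format inference is bypassed:
```python
# Hedged sketch: load one tar-based config by naming the webdataset builder
# directly; the path to 7691.tar is assumed from the YAML layout above.
from datasets import load_dataset

dataset = load_dataset(
    "webdataset",
    data_files={"train": "hf://datasets/bigdata-pw/Dataception/datasets/7691.tar"},
    split="train",
)
```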
Note that I have removed the additional configs from the repository because of this issue; there is also a limit of 3000 configs anyway, so the Dataset Viewer doesn't work as I intended. I'll add them back in if it assists with testing.
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7101/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7100/comments | https://api.github.com/repos/huggingface/datasets/issues/7100/events | https://github.com/huggingface/datasets/issues/7100 | 2,465,529,414 | I_kwDODunzps6S9P5G | 7,100 | IterableDataset: cannot resolve features from list of numpy arrays | {
"avatar_url": "https://avatars.githubusercontent.com/u/18899212?v=4",
"events_url": "https://api.github.com/users/VeryLazyBoy/events{/privacy}",
"followers_url": "https://api.github.com/users/VeryLazyBoy/followers",
"following_url": "https://api.github.com/users/VeryLazyBoy/following{/other_user}",
"gists_url": "https://api.github.com/users/VeryLazyBoy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VeryLazyBoy",
"id": 18899212,
"login": "VeryLazyBoy",
"node_id": "MDQ6VXNlcjE4ODk5MjEy",
"organizations_url": "https://api.github.com/users/VeryLazyBoy/orgs",
"received_events_url": "https://api.github.com/users/VeryLazyBoy/received_events",
"repos_url": "https://api.github.com/users/VeryLazyBoy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VeryLazyBoy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VeryLazyBoy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VeryLazyBoy",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"Assign this issue to me under Hacktoberfest with hacktoberfest label inserted on the issue"
] | "2024-08-14T11:01:51Z" | "2024-10-03T05:47:23Z" | null | NONE | null | ### Describe the bug
When resolving the features of an `IterableDataset`, a `pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values` error is raised.
```
Traceback (most recent call last):
File "test.py", line 6
iter_ds = iter_ds._resolve_features()
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 2876, in _resolve_features
features = _infer_features_from_batch(self.with_format(None)._head())
File "lib/python3.10/site-packages/datasets/iterable_dataset.py", line 63, in _infer_features_from_batch
pa_table = pa.Table.from_pydict(batch)
File "pyarrow/table.pxi", line 1813, in pyarrow.lib._Tabular.from_pydict
File "pyarrow/table.pxi", line 5339, in pyarrow.lib._from_pydict
File "pyarrow/array.pxi", line 374, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 344, in pyarrow.lib.array
File "pyarrow/array.pxi", line 42, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
```
### Steps to reproduce the bug
```python
from datasets import Dataset
import numpy as np
# build a dataset whose column holds 2-D numpy arrays via map
iter_ds = Dataset.from_dict({'a': [[[1, 2, 3], [1, 2, 3]]]}).to_iterable_dataset().map(lambda x: {'a': [np.array(x['a'])]})
iter_ds = iter_ds._resolve_features() # errors here
```
### Expected behavior
features can be successfully resolved
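A possible workaround sketch (assuming it is acceptable to declare the schema up front rather than infer it): pass explicit `features` to `map` so that no inference from numpy arrays is needed:
```python
# Hedged workaround sketch: declare the 2-D column's feature type explicitly
# instead of letting feature resolution infer it from numpy arrays.
import numpy as np
from datasets import Dataset, Features, Sequence, Value

features = Features({"a": Sequence(Sequence(Value("int64")))})
iter_ds = (
    Dataset.from_dict({"a": [[[1, 2, 3], [1, 2, 3]]]})
    .to_iterable_dataset()
    .map(lambda x: {"a": np.array(x["a"])}, features=features)
)
print(iter_ds.features)  # already set, no inference needed
```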
### Environment info
- `datasets` version: 2.21.0
- Platform: Linux-5.15.0-94-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.23.4
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7100/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7099/comments | https://api.github.com/repos/huggingface/datasets/issues/7099/events | https://github.com/huggingface/datasets/pull/7099 | 2,465,221,827 | PR_kwDODunzps54U7s4 | 7,099 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7099). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005649 / 0.011353 (-0.005704) | 0.003918 / 0.011008 (-0.007091) | 0.064333 / 0.038508 (0.025825) | 0.031909 / 0.023109 (0.008800) | 0.249020 / 0.275898 (-0.026878) | 0.273563 / 0.323480 (-0.049917) | 0.004184 / 0.007986 (-0.003802) | 0.002809 / 0.004328 (-0.001519) | 0.049066 / 0.004250 (0.044816) | 0.043324 / 0.037052 (0.006272) | 0.257889 / 0.258489 (-0.000600) | 0.285410 / 0.293841 (-0.008431) | 0.030681 / 0.128546 (-0.097865) | 0.012389 / 0.075646 (-0.063258) | 0.206172 / 0.419271 (-0.213100) | 0.036500 / 0.043533 (-0.007032) | 0.253674 / 0.255139 (-0.001465) | 0.272086 / 0.283200 (-0.011114) | 0.019558 / 0.141683 (-0.122125) | 1.149501 / 1.452155 (-0.302653) | 1.198036 / 1.492716 (-0.294680) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.139977 / 0.018006 (0.121971) | 0.301149 / 0.000490 (0.300659) | 0.000253 / 0.000200 (0.000053) | 0.000049 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019137 / 0.037411 (-0.018274) | 0.062616 / 0.014526 (0.048090) | 0.075965 / 0.176557 (-0.100591) | 0.120976 / 0.737135 (-0.616159) | 0.076384 / 0.296338 (-0.219954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283801 / 0.215209 (0.068592) | 2.794074 / 2.077655 (0.716419) | 1.475633 / 1.504120 (-0.028487) | 1.336270 / 1.541195 (-0.204925) | 1.376159 / 
1.468490 (-0.092331) | 0.718768 / 4.584777 (-3.866009) | 2.375970 / 3.745712 (-1.369742) | 2.969121 / 5.269862 (-2.300741) | 1.900236 / 4.565676 (-2.665440) | 0.082463 / 0.424275 (-0.341812) | 0.005159 / 0.007607 (-0.002448) | 0.329057 / 0.226044 (0.103012) | 3.250535 / 2.268929 (0.981607) | 1.846415 / 55.444624 (-53.598210) | 1.496622 / 6.876477 (-5.379855) | 1.538125 / 2.142072 (-0.603947) | 0.806127 / 4.805227 (-3.999101) | 0.135272 / 6.500664 (-6.365392) | 0.042668 / 0.075469 (-0.032801) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.983035 / 1.841788 (-0.858753) | 11.725835 / 8.074308 (3.651527) | 9.962818 / 10.191392 (-0.228574) | 0.131928 / 0.680424 (-0.548496) | 0.015784 / 0.534201 (-0.518417) | 0.301640 / 0.579283 (-0.277643) | 0.266251 / 0.434364 (-0.168113) | 0.339723 / 0.540337 (-0.200614) | 0.443384 / 1.386936 (-0.943552) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006301 / 0.011353 (-0.005052) | 0.004346 / 0.011008 (-0.006662) | 0.051406 / 0.038508 (0.012898) | 0.032263 / 0.023109 (0.009154) | 0.273715 / 0.275898 (-0.002183) | 0.300982 / 0.323480 (-0.022498) | 0.004533 / 0.007986 (-0.003452) | 0.002911 / 0.004328 (-0.001418) | 0.050464 / 0.004250 (0.046214) | 0.041131 / 0.037052 (0.004078) | 0.289958 / 0.258489 (0.031469) | 0.328632 / 0.293841 (0.034791) | 0.033545 / 0.128546 (-0.095001) | 0.013145 / 0.075646 (-0.062501) | 0.062241 / 0.419271 (-0.357031) | 0.035095 / 0.043533 (-0.008438) | 0.273303 / 0.255139 (0.018164) | 0.293652 / 0.283200 (0.010452) | 0.019980 / 0.141683 (-0.121703) | 1.155432 / 1.452155 (-0.296722) | 1.211409 / 1.492716 (-0.281307) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094885 / 0.018006 (0.076879) | 0.307423 / 0.000490 (0.306933) | 0.000254 / 0.000200 (0.000054) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023462 / 0.037411 (-0.013949) | 0.081980 / 0.014526 (0.067454) | 0.089890 / 0.176557 (-0.086666) | 0.131058 / 0.737135 (-0.606078) | 0.091873 / 0.296338 (-0.204465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.298522 / 0.215209 (0.083313) | 2.981771 / 2.077655 (0.904116) | 1.632515 / 1.504120 (0.128395) | 1.502885 / 1.541195 (-0.038310) | 1.496868 / 1.468490 (0.028377) | 0.750145 / 4.584777 (-3.834632) | 0.988853 / 3.745712 (-2.756859) | 3.029162 / 5.269862 (-2.240700) | 1.952304 / 4.565676 (-2.613373) | 0.082418 / 0.424275 (-0.341857) | 0.005724 / 0.007607 (-0.001883) | 0.356914 / 0.226044 (0.130870) | 3.523804 / 2.268929 (1.254875) | 1.983254 / 55.444624 (-53.461370) | 1.673135 / 6.876477 (-5.203342) | 1.716639 / 2.142072 (-0.425433) | 0.821568 / 4.805227 (-3.983659) | 0.136113 / 6.500664 (-6.364551) | 0.041593 / 0.075469 (-0.033876) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.044670 / 1.841788 (-0.797118) | 12.739375 / 8.074308 (4.665066) | 10.263619 / 10.191392 (0.072227) | 0.132811 / 0.680424 (-0.547613) | 0.015491 / 0.534201 (-0.518710) | 0.305545 / 0.579283 (-0.273738) | 0.129226 / 0.434364 (-0.305138) | 0.345532 / 0.540337 (-0.194805) | 0.460406 / 1.386936 (-0.926530) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#ebec2691fb1e40145429f63375cef3f46d3011ab \"CML watermark\")\n"
] | "2024-08-14T08:31:17Z" | "2024-08-14T08:45:17Z" | "2024-08-14T08:39:25Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7099/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7099.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7099",
"merged_at": "2024-08-14T08:39:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7099.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7099"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7098/comments | https://api.github.com/repos/huggingface/datasets/issues/7098/events | https://github.com/huggingface/datasets/pull/7098 | 2,465,016,562 | PR_kwDODunzps54UPMS | 7,098 | Release: 2.21.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7098). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-08-14T06:35:13Z" | "2024-08-14T06:41:07Z" | "2024-08-14T06:41:06Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7098/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7098",
"merged_at": "2024-08-14T06:41:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7098"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7097/comments | https://api.github.com/repos/huggingface/datasets/issues/7097/events | https://github.com/huggingface/datasets/issues/7097 | 2,458,455,489 | I_kwDODunzps6SiQ3B | 7,097 | Some of DownloadConfig's properties are always being overridden in load.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/29772899?v=4",
"events_url": "https://api.github.com/users/ductai199x/events{/privacy}",
"followers_url": "https://api.github.com/users/ductai199x/followers",
"following_url": "https://api.github.com/users/ductai199x/following{/other_user}",
"gists_url": "https://api.github.com/users/ductai199x/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ductai199x",
"id": 29772899,
"login": "ductai199x",
"node_id": "MDQ6VXNlcjI5NzcyODk5",
"organizations_url": "https://api.github.com/users/ductai199x/orgs",
"received_events_url": "https://api.github.com/users/ductai199x/received_events",
"repos_url": "https://api.github.com/users/ductai199x/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ductai199x/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ductai199x/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ductai199x",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-09T18:26:37Z" | "2024-08-09T18:26:37Z" | null | NONE | null | ### Describe the bug
The `extract_compressed_file` and `force_extract` properties of `DownloadConfig` are always set to `True` in the `dataset_module_factory` function in `load.py`. This behavior is very annoying because previously extracted data is simply ignored and re-extracted the next time the dataset is loaded.
See this image below:
![image](https://github.com/user-attachments/assets/9e76ebb7-09b1-4c95-adc8-a959b536f93c)
### Steps to reproduce the bug
1. Have a local dataset that contains archived files (zip, tar.gz, etc)
2. Build a dataset loading script to download and extract these files
3. Run the `load_dataset` function with a `DownloadConfig` that specifically sets `force_extract` to False
4. The extraction process will start even if the archives were previously extracted
### Expected behavior
The extraction process should not run when the archives were previously extracted and `force_extract` is set to False.
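For clarity, a minimal sketch of the call being made (the script path is a placeholder, not a real one):
```python
# Minimal sketch; "path/to/my_dataset.py" is a hypothetical local loading
# script that downloads and extracts archives. With force_extract=False the
# previously extracted archives should be reused, but the flag is overridden
# to True inside dataset_module_factory.
from datasets import DownloadConfig, load_dataset

download_config = DownloadConfig(extract_compressed_file=True, force_extract=False)
ds = load_dataset("path/to/my_dataset.py", download_config=download_config)
```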
### Environment info
datasets==2.20.0
python3.9 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7097/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7096/comments | https://api.github.com/repos/huggingface/datasets/issues/7096/events | https://github.com/huggingface/datasets/pull/7096 | 2,456,929,173 | PR_kwDODunzps535Xkr | 7,096 | Automatically create `cache_dir` from `cache_file_name` | {
"avatar_url": "https://avatars.githubusercontent.com/u/27844407?v=4",
"events_url": "https://api.github.com/users/ringohoffman/events{/privacy}",
"followers_url": "https://api.github.com/users/ringohoffman/followers",
"following_url": "https://api.github.com/users/ringohoffman/following{/other_user}",
"gists_url": "https://api.github.com/users/ringohoffman/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ringohoffman",
"id": 27844407,
"login": "ringohoffman",
"node_id": "MDQ6VXNlcjI3ODQ0NDA3",
"organizations_url": "https://api.github.com/users/ringohoffman/orgs",
"received_events_url": "https://api.github.com/users/ringohoffman/received_events",
"repos_url": "https://api.github.com/users/ringohoffman/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ringohoffman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ringohoffman/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ringohoffman",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"Hi @albertvillanova, is this PR looking okay to you? Anything else you'd like to see?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7096). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005278 / 0.011353 (-0.006075) | 0.003536 / 0.011008 (-0.007472) | 0.062604 / 0.038508 (0.024096) | 0.030704 / 0.023109 (0.007595) | 0.242178 / 0.275898 (-0.033720) | 0.264335 / 0.323480 (-0.059145) | 0.004118 / 0.007986 (-0.003868) | 0.002789 / 0.004328 (-0.001539) | 0.048813 / 0.004250 (0.044563) | 0.041787 / 0.037052 (0.004735) | 0.252369 / 0.258489 (-0.006120) | 0.280981 / 0.293841 (-0.012859) | 0.029646 / 0.128546 (-0.098900) | 0.012093 / 0.075646 (-0.063553) | 0.203036 / 0.419271 (-0.216235) | 0.035814 / 0.043533 (-0.007719) | 0.248929 / 0.255139 (-0.006210) | 0.266568 / 0.283200 (-0.016632) | 0.018761 / 0.141683 (-0.122922) | 1.188443 / 1.452155 (-0.263712) | 1.219324 / 1.492716 (-0.273392) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.095256 / 0.018006 (0.077250) | 0.301069 / 0.000490 (0.300579) | 0.000219 / 0.000200 (0.000019) | 0.000054 / 0.000054 (-0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018541 / 0.037411 (-0.018870) | 0.067333 / 0.014526 (0.052807) | 0.075483 / 0.176557 (-0.101073) | 0.121301 / 0.737135 (-0.615834) | 0.076924 / 0.296338 (-0.219414) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284722 / 0.215209 (0.069513) | 2.817656 / 2.077655 (0.740001) | 1.483827 / 1.504120 (-0.020293) | 1.363072 / 1.541195 (-0.178123) | 1.380472 / 
1.468490 (-0.088018) | 0.739543 / 4.584777 (-3.845234) | 2.390699 / 3.745712 (-1.355013) | 2.980347 / 5.269862 (-2.289515) | 1.897881 / 4.565676 (-2.667795) | 0.078827 / 0.424275 (-0.345448) | 0.005193 / 0.007607 (-0.002414) | 0.342739 / 0.226044 (0.116695) | 3.370871 / 2.268929 (1.101942) | 1.846475 / 55.444624 (-53.598150) | 1.577860 / 6.876477 (-5.298617) | 1.628606 / 2.142072 (-0.513466) | 0.815686 / 4.805227 (-3.989541) | 0.134985 / 6.500664 (-6.365679) | 0.042330 / 0.075469 (-0.033139) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.962530 / 1.841788 (-0.879258) | 11.271449 / 8.074308 (3.197141) | 9.615452 / 10.191392 (-0.575940) | 0.140322 / 0.680424 (-0.540101) | 0.014057 / 0.534201 (-0.520144) | 0.306212 / 0.579283 (-0.273071) | 0.266758 / 0.434364 (-0.167606) | 0.341229 / 0.540337 (-0.199108) | 0.428974 / 1.386936 (-0.957962) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005980 / 0.011353 (-0.005373) | 0.003831 / 0.011008 (-0.007177) | 0.049837 / 0.038508 (0.011329) | 0.030602 / 0.023109 (0.007493) | 0.274107 / 0.275898 (-0.001791) | 0.298175 / 0.323480 (-0.025305) | 0.004492 / 0.007986 (-0.003494) | 0.002840 / 0.004328 (-0.001489) | 0.048984 / 0.004250 (0.044733) | 0.040001 / 0.037052 (0.002949) | 0.286130 / 0.258489 (0.027641) | 0.321546 / 0.293841 (0.027705) | 0.032675 / 0.128546 (-0.095871) | 0.012222 / 0.075646 (-0.063424) | 0.060321 / 0.419271 (-0.358950) | 0.034456 / 0.043533 (-0.009077) | 0.272408 / 0.255139 (0.017269) | 0.294714 / 0.283200 (0.011515) | 0.018568 / 0.141683 (-0.123115) | 1.169826 / 1.452155 (-0.282329) | 1.223906 / 1.492716 (-0.268810) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093734 / 0.018006 (0.075727) | 0.305915 / 0.000490 (0.305425) | 0.000210 / 0.000200 (0.000010) | 0.000052 / 0.000054 (-0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022389 / 0.037411 (-0.015022) | 0.076640 / 0.014526 (0.062114) | 0.088660 / 0.176557 (-0.087897) | 0.128998 / 0.737135 (-0.608137) | 0.090346 / 0.296338 (-0.205992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.291642 / 0.215209 (0.076433) | 2.897270 / 2.077655 (0.819615) | 1.571564 / 1.504120 (0.067444) | 1.449533 / 1.541195 (-0.091662) | 1.458744 / 1.468490 (-0.009746) | 0.725465 / 4.584777 (-3.859312) | 0.962597 / 3.745712 (-2.783115) | 3.035056 / 5.269862 (-2.234806) | 1.902542 / 4.565676 (-2.663135) | 0.079869 / 0.424275 (-0.344407) | 0.005172 / 0.007607 (-0.002435) | 0.352099 / 0.226044 (0.126055) | 3.469058 / 2.268929 (1.200129) | 1.953402 / 55.444624 (-53.491222) | 1.647182 / 6.876477 (-5.229294) | 1.686473 / 2.142072 (-0.455599) | 0.797218 / 4.805227 (-4.008009) | 0.134161 / 6.500664 (-6.366503) | 0.041563 / 0.075469 (-0.033906) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.045855 / 1.841788 (-0.795933) | 12.271390 / 8.074308 (4.197082) | 10.186889 / 10.191392 (-0.004503) | 0.141141 / 0.680424 (-0.539283) | 0.015482 / 0.534201 (-0.518719) | 0.305699 / 0.579283 (-0.273584) | 0.128539 / 0.434364 (-0.305825) | 0.348492 / 0.540337 (-0.191845) | 0.444867 / 1.386936 (-0.942069) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#93dc73501298ccb1d31d854ba20fcf2c3b2fea8b \"CML watermark\")\n"
] | "2024-08-09T01:34:06Z" | "2024-08-15T17:25:26Z" | "2024-08-15T10:13:22Z" | CONTRIBUTOR | null | You get a pretty unhelpful error message when specifying a `cache_file_name` in a directory that doesn't exist, e.g. `cache_file_name="./cache/data.map"`
```python
import datasets
cache_file_name="./cache/train.map"
dataset = datasets.load_dataset("ylecun/mnist")
dataset["train"].map(lambda x: x, cache_file_name=cache_file_name)
```
```
FileNotFoundError: [Errno 2] No such file or directory: '/.../cache/tmp48r61siw'
```
The directory is simple enough to create automatically, and I was expecting that `map` would have done so.
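As a stopgap, pre-creating the directory works (a minimal sketch using only the standard library, reusing the snippet above):
```python
import os

import datasets

cache_file_name = "./cache/train.map"
# Pre-create the parent directory so `map` can write its temporary file there
os.makedirs(os.path.dirname(cache_file_name), exist_ok=True)

dataset = datasets.load_dataset("ylecun/mnist")
dataset["train"].map(lambda x: x, cache_file_name=cache_file_name)
```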
cc: @albertvillanova @lhoestq | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7096/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7096",
"merged_at": "2024-08-15T10:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7096"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7094/comments | https://api.github.com/repos/huggingface/datasets/issues/7094/events | https://github.com/huggingface/datasets/pull/7094 | 2,454,418,130 | PR_kwDODunzps53w2b7 | 7,094 | Add Arabic Docs to Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4",
"events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}",
"followers_url": "https://api.github.com/users/AhmedAlmaghz/followers",
"following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AhmedAlmaghz",
"id": 53489256,
"login": "AhmedAlmaghz",
"node_id": "MDQ6VXNlcjUzNDg5MjU2",
"organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs",
"received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events",
"repos_url": "https://api.github.com/users/AhmedAlmaghz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AhmedAlmaghz",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-07T21:53:06Z" | "2024-08-07T21:53:06Z" | null | NONE | null | Translate Docs into Arabic issue-number : #7093
[Arabic Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
[English Docs](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/en/index.mdx)
@stevhliu | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7094/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7094/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7094.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7094",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7094.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7094"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7093/comments | https://api.github.com/repos/huggingface/datasets/issues/7093/events | https://github.com/huggingface/datasets/issues/7093 | 2,454,413,074 | I_kwDODunzps6SS18S | 7,093 | Add Arabic Docs to datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/53489256?v=4",
"events_url": "https://api.github.com/users/AhmedAlmaghz/events{/privacy}",
"followers_url": "https://api.github.com/users/AhmedAlmaghz/followers",
"following_url": "https://api.github.com/users/AhmedAlmaghz/following{/other_user}",
"gists_url": "https://api.github.com/users/AhmedAlmaghz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AhmedAlmaghz",
"id": 53489256,
"login": "AhmedAlmaghz",
"node_id": "MDQ6VXNlcjUzNDg5MjU2",
"organizations_url": "https://api.github.com/users/AhmedAlmaghz/orgs",
"received_events_url": "https://api.github.com/users/AhmedAlmaghz/received_events",
"repos_url": "https://api.github.com/users/AhmedAlmaghz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AhmedAlmaghz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AhmedAlmaghz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AhmedAlmaghz",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2024-08-07T21:48:05Z" | "2024-08-07T21:48:05Z" | null | NONE | null | ### Feature request
Add Arabic Docs to datasets
[Datasets Arabic](https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx)
### Motivation
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx
### Your contribution
@AhmedAlmaghz
https://github.com/AhmedAlmaghz/datasets/blob/main/docs/source/ar/index.mdx | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7093/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7092/comments | https://api.github.com/repos/huggingface/datasets/issues/7092/events | https://github.com/huggingface/datasets/issues/7092 | 2,451,393,658 | I_kwDODunzps6SHUx6 | 7,092 | load_dataset with multiple jsonlines files interprets datastructure too early | {
"avatar_url": "https://avatars.githubusercontent.com/u/23384483?v=4",
"events_url": "https://api.github.com/users/Vipitis/events{/privacy}",
"followers_url": "https://api.github.com/users/Vipitis/followers",
"following_url": "https://api.github.com/users/Vipitis/following{/other_user}",
"gists_url": "https://api.github.com/users/Vipitis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Vipitis",
"id": 23384483,
"login": "Vipitis",
"node_id": "MDQ6VXNlcjIzMzg0NDgz",
"organizations_url": "https://api.github.com/users/Vipitis/orgs",
"received_events_url": "https://api.github.com/users/Vipitis/received_events",
"repos_url": "https://api.github.com/users/Vipitis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Vipitis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vipitis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Vipitis",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I’ll take a look",
"Possible definitions of done for this issue:\r\n\r\n1. A fix so you can load your dataset specifically\r\n2. A general fix for datasets similar to this in the `datasets` library\r\n\r\nOption 1 is trivial. I think option 2 requires significant changes to the library.\r\n\r\nSince you outlined something akin to option 2 in `Expected behavior` I'm assuming that's what you'd like to see done. Is that right?\r\n\r\nIn the meantime, here's a solution for option 1:\r\n\r\n```python\r\nimport datasets\r\n\r\ndata_dir = './data/annotated/api'\r\n\r\nfeatures = datasets.Features({'id': datasets.Value(dtype='string'),\r\n 'name': datasets.Value(dtype='string'),\r\n 'author': datasets.Value(dtype='string'),\r\n 'description': datasets.Value(dtype='string'),\r\n 'tags': datasets.Sequence(feature=datasets.Value(dtype='string'), length=-1),\r\n 'likes': datasets.Value(dtype='int64'),\r\n 'viewed': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'date': datasets.Value(dtype='string'),\r\n 'time_retrieved': datasets.Value(dtype='string'),\r\n 'image_code': datasets.Value(dtype='string'),\r\n 'image_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'common_code': datasets.Value(dtype='string'),\r\n 'sound_code': datasets.Value(dtype='string'),\r\n 'sound_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_a_code': datasets.Value(dtype='string'),\r\n 'buffer_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_b_code': datasets.Value(dtype='string'),\r\n 'buffer_b_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_c_code': datasets.Value(dtype='string'),\r\n 'buffer_c_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': 
datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'buffer_d_code': datasets.Value(dtype='string'),\r\n 'buffer_d_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'cube_a_code': datasets.Value(dtype='string'),\r\n 'cube_a_inputs': [{'channel': datasets.Value(dtype='int64'),\r\n 'ctype': datasets.Value(dtype='string'),\r\n 'id': datasets.Value(dtype='int64'),\r\n 'published': datasets.Value(dtype='int64'),\r\n 'sampler': {'filter': datasets.Value(dtype='string'),\r\n 'internal': datasets.Value(dtype='string'),\r\n 'srgb': datasets.Value(dtype='string'),\r\n 'vflip': datasets.Value(dtype='string'),\r\n 'wrap': datasets.Value(dtype='string')},\r\n 'src': datasets.Value(dtype='string')}],\r\n 'thumbnail': datasets.Value(dtype='string'),\r\n 'access': datasets.Value(dtype='string'),\r\n 'license': datasets.Value(dtype='string'),\r\n 'functions': datasets.Sequence(feature=datasets.Sequence(feature=datasets.Value(dtype='int64'), length=-1), length=-1),\r\n 'test': datasets.Value(dtype='string')})\r\n\r\ndatasets.load_dataset('json', data_dir=data_dir, features=features)\r\n```",
"As pointed out by @hvaara, you can define explicit features so that you avoid the `datasets` library having to infer them (from the first few samples).\r\n\r\nNote that the feature inference is done from the first few samples of JSON-Lines on purpose, so that the entire data does not need to be parsed twice (it would be inefficient for very large datasets).",
"I understand this. But can there be a solution that doesn't require the end user to write this shema by hand(in my case there is some fields that contain a nested structure)? \r\n\r\nMaybe offer an option to infer the shema automatically before loading the dataset. Or perhaps - trigger such a method when this error arises? \r\n\r\nIs this \"first few files\" heuristics accessible via kwargs perhaps. Maybe an error that says \r\n`Cloud not cast some structure into feature shema, consider increasing shema_files to a large number or all\".\r\n\r\nThere might be efficient implementations to solve this problem for larger datasets. ",
"@Vipitis raised a good point on the HF Discord regarding the use of a [dataset script](https://huggingface.co/docs/datasets/en/dataset_script) to provide the schema during initialization. Using this approach requires setting `trust_remote_code=True`, which is not allowed in certain evaluation frameworks.\r\n\r\nFor cases where using a dataset script is acceptable, would it be helpful to add functionality to the library (not necessarily in `load_dataset`) that can automatically discover the feature definitions and output them, so you don't have to manually define them?\r\n\r\nAlternatively, for situations where features need to be known at load-time without using a dataset script, another option could be loading the dataset schema from a file format that doesn't require `trust_remote_code=True`."
] | "2024-08-06T17:42:55Z" | "2024-08-08T16:35:01Z" | null | NONE | null | ### Describe the bug
likely related to #6460
using `datasets.load_dataset("json", data_dir= ... )` with multiple `.jsonl` files will error if one of the files (maybe the first file?) contains a full column of empty data.
### Steps to reproduce the bug
real world example:
data is available in this [PR-branch](https://github.com/Vipitis/shadertoys-dataset/pull/3/commits/cb1e7157814f74acb09d5dc2f1be3c0a868a9933). Because my files are chunked by months, some months contain all empty data for some columns, just by chance - these are `[]`. Otherwise it's all the same structure.
```python
from datasets import load_dataset
ds = load_dataset("json", data_dir="./data/annotated/api")
```
You get a long error trace; somewhere in the middle it says something like:
```cs
TypeError: Couldn't cast array of type struct<id: int64, src: string, ctype: string, channel: int64, sampler: struct<filter: string, wrap: string, vflip: string, srgb: string, internal: string>, published: int64> to null
```
toy example: (on request)
### Expected behavior
Some suggestions
1. give a better error message to the user
2. consider all files before deciding on a data structure for a given column.
3. if you encounter a new structure and can't cast it to null, replace the null hypothesis (maybe something for PyArrow)
As a workaround, I have lazily implemented the following (essentially suggestion 2):
```python
import os
import jsonlines
import datasets
api_files = os.listdir("./data/annotated/api")
api_files = [f"./data/annotated/api/{f}" for f in api_files]
api_file_contents = []
for f in api_files:
with jsonlines.open(f) as reader:
for obj in reader:
api_file_contents.append(obj)
ds = datasets.Dataset.from_list(api_file_contents)
```
This works fine for my use case, but it is potentially slower and less memory-efficient for really large datasets (where this issue is unlikely to occur in the first place).
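A related sketch (not tested on this exact dataset): keep the standard `json` loader but feed it a schema inferred from *all* files via the `features` argument that `load_dataset` accepts:
```python
import os

import datasets
import jsonlines

data_dir = "./data/annotated/api"
files = [os.path.join(data_dir, f) for f in os.listdir(data_dir)]

# Build the schema from every record so no column gets inferred as null
rows = []
for path in files:
    with jsonlines.open(path) as reader:
        rows.extend(reader)
features = datasets.Dataset.from_list(rows).features

# Reuse the inferred schema with the regular json loader
ds = datasets.load_dataset("json", data_files={"train": files}, features=features)
```
This still parses the data twice, so it shares the memory/speed caveat above; it mainly keeps the resulting dataset cacheable through the normal loader.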
### Environment info
- `datasets` version: 2.20.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.4
- `huggingface_hub` version: 0.23.4
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7092/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7090/comments | https://api.github.com/repos/huggingface/datasets/issues/7090/events | https://github.com/huggingface/datasets/issues/7090 | 2,449,699,490 | I_kwDODunzps6SA3Ki | 7,090 | The test test_move_script_doesnt_change_hash fails because it runs the 'python' command while the python executable has a different name | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yurivict",
"id": 271906,
"login": "yurivict",
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"repos_url": "https://api.github.com/users/yurivict/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yurivict",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-06T00:35:05Z" | "2024-08-06T00:35:05Z" | null | NONE | null | ### Describe the bug
Tests should use the same Python path they are launched with, which in the case of FreeBSD is /usr/local/bin/python3.11
Failure:
```
if err_filename is not None:
> raise child_exception_type(errno_num, err_msg, err_filename)
E FileNotFoundError: [Errno 2] No such file or directory: 'python'
```
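A sketch of the usual fix for this class of failure (the actual helper code in `datasets` may differ): spawn the child process via `sys.executable` instead of the literal name `python`:
```python
import subprocess
import sys

# sys.executable is the absolute path of the running interpreter
# (e.g. /usr/local/bin/python3.11 on FreeBSD), so the subprocess uses
# the same Python the test suite was launched with.
result = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.executable)"],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout.strip())
```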
### Steps to reproduce the bug
A regular test run using pytest
### Expected behavior
n/a
### Environment info
FreeBSD 14.1 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7090/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7089/comments | https://api.github.com/repos/huggingface/datasets/issues/7089/events | https://github.com/huggingface/datasets/issues/7089 | 2,449,479,500 | I_kwDODunzps6SABdM | 7,089 | Missing pyspark dependency causes the testsuite to error out, instead of a few tests to be skipped | {
"avatar_url": "https://avatars.githubusercontent.com/u/271906?v=4",
"events_url": "https://api.github.com/users/yurivict/events{/privacy}",
"followers_url": "https://api.github.com/users/yurivict/followers",
"following_url": "https://api.github.com/users/yurivict/following{/other_user}",
"gists_url": "https://api.github.com/users/yurivict/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yurivict",
"id": 271906,
"login": "yurivict",
"node_id": "MDQ6VXNlcjI3MTkwNg==",
"organizations_url": "https://api.github.com/users/yurivict/orgs",
"received_events_url": "https://api.github.com/users/yurivict/received_events",
"repos_url": "https://api.github.com/users/yurivict/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yurivict/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yurivict/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yurivict",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-05T21:05:11Z" | "2024-08-05T21:05:11Z" | null | NONE | null | ### Describe the bug
see the subject
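For context, the usual pytest pattern for an optional dependency looks roughly like this (a sketch; the real test modules in `datasets` may organize it differently):
```python
import pytest

# Skips every test in this module (instead of erroring) when pyspark is absent
pytest.importorskip("pyspark")

def test_from_spark():
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.master("local").getOrCreate()
    df = spark.createDataFrame([("a", 1)], ["text", "label"])
    assert df.count() == 1
```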
### Steps to reproduce the bug
regular tests
### Expected behavior
n/a
### Environment info
version 2.20.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7089/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7088/comments | https://api.github.com/repos/huggingface/datasets/issues/7088/events | https://github.com/huggingface/datasets/issues/7088 | 2,447,383,940 | I_kwDODunzps6R4B2E | 7,088 | Disable warning when using with_format format on tensors | {
"avatar_url": "https://avatars.githubusercontent.com/u/42048782?v=4",
"events_url": "https://api.github.com/users/Haislich/events{/privacy}",
"followers_url": "https://api.github.com/users/Haislich/followers",
"following_url": "https://api.github.com/users/Haislich/following{/other_user}",
"gists_url": "https://api.github.com/users/Haislich/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Haislich",
"id": 42048782,
"login": "Haislich",
"node_id": "MDQ6VXNlcjQyMDQ4Nzgy",
"organizations_url": "https://api.github.com/users/Haislich/orgs",
"received_events_url": "https://api.github.com/users/Haislich/received_events",
"repos_url": "https://api.github.com/users/Haislich/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Haislich/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Haislich/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Haislich",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2024-08-05T00:45:50Z" | "2024-08-05T00:45:50Z" | null | NONE | null | ### Feature request
If we write this code:
```python
"""Get data and define datasets."""
from enum import StrEnum
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms
class Split(StrEnum):
"""Describes what type of split to use in the dataloader"""
TRAIN = "train"
TEST = "test"
VAL = "validation"
class ImageNetDataLoader(DataLoader):
"""Create an ImageNetDataloader"""
_preprocess_transform = transforms.Compose(
[
transforms.Resize(256),
transforms.CenterCrop(224),
]
)
def __init__(self, batch_size: int = 4, split: Split = Split.TRAIN):
dataset = (
load_dataset(
"imagenet-1k",
split=split,
trust_remote_code=True,
streaming=True,
)
.with_format("torch")
.map(self._preprocess)
)
super().__init__(dataset=dataset, batch_size=batch_size)
def _preprocess(self, data):
if data["image"].shape[0] < 3:
data["image"] = data["image"].repeat(3, 1, 1)
data["image"] = self._preprocess_transform(data["image"].float())
return data
if __name__ == "__main__":
dataloader = ImageNetDataLoader(batch_size=2)
for batch in dataloader:
print(batch["image"])
break
```
This will trigger a user warning:
```bash
datasets\formatting\torch_formatter.py:85: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
### Motivation
This happens because of the way the formatted tensor is returned in `TorchFormatter._tensorize`.
This function handles values of different types; according to some tests, the possible value types are `int`, `numpy.ndarray` and `torch.Tensor`.
In particular, this warning is triggered when the value type is `torch.Tensor`, because wrapping an existing tensor in `torch.tensor` is not the suggested PyTorch way of doing it:
- https://stackoverflow.com/questions/55266154/pytorch-preferred-way-to-copy-a-tensor
- https://discuss.pytorch.org/t/it-is-recommended-to-use-source-tensor-clone-detach-or-sourcetensor-clone-detach-requires-grad-true/101218#:~:text=The%20warning%20points%20to%20wrapping%20a%20tensor%20in%20torch.tensor%2C%20which%20is%20not%20recommended.%0AInstead%20of%20torch.tensor(outputs)%20use%20outputs.clone().detach()%20or%20the%20same%20with%20.requires_grad_(True)%2C%20if%20necessary.
### Your contribution
A solution that I found to be working is to change the current way of doing it:
```python
return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
```
To:
```python
if isinstance(value, torch.Tensor):
    # Copy an existing tensor without re-wrapping it in torch.tensor()
    tensor = value.clone().detach()
    if self.torch_tensor_kwargs.get('requires_grad', False):
        tensor.requires_grad_()
    return tensor
else:
    return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs})
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7088/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7087/comments | https://api.github.com/repos/huggingface/datasets/issues/7087/events | https://github.com/huggingface/datasets/issues/7087 | 2,447,158,643 | I_kwDODunzps6R3K1z | 7,087 | Unable to create dataset card for Lushootseed language | {
"avatar_url": "https://avatars.githubusercontent.com/u/134876525?v=4",
"events_url": "https://api.github.com/users/vaishnavsudarshan/events{/privacy}",
"followers_url": "https://api.github.com/users/vaishnavsudarshan/followers",
"following_url": "https://api.github.com/users/vaishnavsudarshan/following{/other_user}",
"gists_url": "https://api.github.com/users/vaishnavsudarshan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vaishnavsudarshan",
"id": 134876525,
"login": "vaishnavsudarshan",
"node_id": "U_kgDOCAoNbQ",
"organizations_url": "https://api.github.com/users/vaishnavsudarshan/orgs",
"received_events_url": "https://api.github.com/users/vaishnavsudarshan/received_events",
"repos_url": "https://api.github.com/users/vaishnavsudarshan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vaishnavsudarshan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vaishnavsudarshan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vaishnavsudarshan",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | [
"Thanks for reporting.\r\n\r\nIt is weird, because the language entry is in the list. See: https://github.com/huggingface/huggingface.js/blob/98e32f0ed4ee057a596f66a1dec738e5db9643d5/packages/languages/src/languages_iso_639_3.ts#L15186-L15189\r\n\r\nI have reported the issue:\r\n- https://github.com/huggingface/huggingface.js/issues/834\r\n\r\n",
"As explained in the reported issue above, the problem only appears in the autocomplete field: you can still enter the `lut` language directly in the markdown editor window."
] | "2024-08-04T14:27:04Z" | "2024-08-06T06:59:23Z" | "2024-08-06T06:59:22Z" | NONE | null | ### Feature request
While I was creating the dataset which contained all documents from the Lushootseed Wikipedia, the dataset card asked me to enter which language the dataset was in. Since Lushootseed is a critically endangered language, it was not available as one of the options. Is it possible to allow entering languages that aren't available in the options?
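For reference, the language can still be set programmatically through `huggingface_hub`'s dataset card helpers rather than the autocomplete widget (a sketch; the repo id and pretty name are placeholders):
```python
from huggingface_hub import DatasetCard, DatasetCardData

# `lut` is the ISO 639-3 code for Lushootseed
card_data = DatasetCardData(language=["lut"], pretty_name="Lushootseed Wikipedia")
card = DatasetCard.from_template(card_data)
card.push_to_hub("your-username/lushootseed-wikipedia")
```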
### Motivation
I'd like to add more information about my dataset in the dataset card, and the language is one of the most important pieces of information, since the entire dataset is primarily concerned with collecting Lushootseed documents.
### Your contribution
I can submit a pull request | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7087/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7086/comments | https://api.github.com/repos/huggingface/datasets/issues/7086/events | https://github.com/huggingface/datasets/issues/7086 | 2,445,516,829 | I_kwDODunzps6Rw6Ad | 7,086 | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | {
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tginart",
"id": 11379648,
"login": "tginart",
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"repos_url": "https://api.github.com/users/tginart/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tginart",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [] | "2024-08-02T18:12:23Z" | "2024-08-02T18:12:23Z" | null | NONE | null | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in hitting an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset is in .cache/huggingface/datasets
5. ???
### Expected behavior
We should not run into API rate limits if we have cached the dataset
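A sketch of a workaround, assuming the dataset is fully cached: force offline mode so `load_dataset` resolves everything from the local cache and never calls the Hub API:
```python
import os

# Must be set before the first `datasets` import/use
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["HF_HUB_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("TAUR-Lab/MuSR")
```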
### Environment info
datasets 2.16.0
python 3.10.4 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7086/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7085/comments | https://api.github.com/repos/huggingface/datasets/issues/7085/events | https://github.com/huggingface/datasets/issues/7085 | 2,440,008,618 | I_kwDODunzps6Rb5Oq | 7,085 | [Regression] IterableDataset is broken on 2.20.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5404177?v=4",
"events_url": "https://api.github.com/users/AjayP13/events{/privacy}",
"followers_url": "https://api.github.com/users/AjayP13/followers",
"following_url": "https://api.github.com/users/AjayP13/following{/other_user}",
"gists_url": "https://api.github.com/users/AjayP13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AjayP13",
"id": 5404177,
"login": "AjayP13",
"node_id": "MDQ6VXNlcjU0MDQxNzc=",
"organizations_url": "https://api.github.com/users/AjayP13/orgs",
"received_events_url": "https://api.github.com/users/AjayP13/received_events",
"repos_url": "https://api.github.com/users/AjayP13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AjayP13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AjayP13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AjayP13",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | [
"@lhoestq I detected this regression over on [DataDreamer](https://github.com/datadreamer-dev/DataDreamer)'s test suite. I put in these [monkey patches](https://github.com/datadreamer-dev/DataDreamer/blob/4cbaf9f39cf7bedde72bbaa68346e169788fbecb/src/_patches/datasets_reset_state_hack.py) in case that fixed it our tests failing in case it helps you figure out where this is coming from. I found it hard to reason through the resumable IterableDataset code though, so hopefully you have more intuition to implement a proper fix.",
"I believe these lines in `TypedExamplesIterable` are responsible for stopping the re-iteration of `IterableDataset`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ebec2691fb1e40145429f63375cef3f46d3011ab/src/datasets/iterable_dataset.py#L1616-L1619\r\n\r\nIn contrast to other `Iterable`s, there is no check on whether `self._state_dict` is None or not. This particular case stands out and seems less straightforward to comprehend why. @lhoestq could you please assist us with this? Your help is much appreciated.",
"Thanks for reporting for investigating - your assumption was correct @VeryLazyBoy !"
] | "2024-07-31T13:01:59Z" | "2024-08-22T14:49:37Z" | "2024-08-22T14:49:07Z" | NONE | null | ### Describe the bug
In the latest version of `datasets` there is a major regression: after creating an `IterableDataset` from a generator and applying a few operations (`map`, `select_columns`), you can no longer iterate through the dataset multiple times.
The issue seems to stem from the recent addition of "resumable IterableDatasets" (#6658) (@lhoestq). It seems like it's keeping state when it shouldn't.
### Steps to reproduce the bug
Minimal Reproducible Example (comparing `datasets==2.17.0` and `datasets==2.20.0`)
```
#!/bin/bash
# List of dataset versions to test
versions=("2.17.0" "2.20.0")
# Loop through each version
for version in "${versions[@]}"; do
# Install the specific version of the datasets library
pip3 install -q datasets=="$version" 2>/dev/null
# Run the Python script
python3 - <<EOF
from datasets import IterableDataset
from datasets.features.features import Features, Value
def test_gen():
yield from [{"foo": i} for i in range(10)]
features = Features([("foo", Value("int64"))])
d = IterableDataset.from_generator(test_gen, features=features)
mapped = d.map(lambda row: {"foo": row["foo"] * 2})
column = mapped.select_columns(["foo"])
print("Version $version - Iterate Once:", list(column))
print("Version $version - Iterate Twice:", list(column))
EOF
done
```
The output looks like this:
```
Version 2.17.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.17.0 - Iterate Twice: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Once: [{'foo': 0}, {'foo': 2}, {'foo': 4}, {'foo': 6}, {'foo': 8}, {'foo': 10}, {'foo': 12}, {'foo': 14}, {'foo': 16}, {'foo': 18}]
Version 2.20.0 - Iterate Twice: []
```
### Expected behavior
The expected behavior is that version 2.20.0 should behave the same as 2.17.0.
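Until the regression is fixed, one workaround (a sketch) is to rebuild the pipeline for each pass instead of re-iterating the same object:
```python
from datasets import IterableDataset
from datasets.features.features import Features, Value

def test_gen():
    yield from [{"foo": i} for i in range(10)]

def build_column():
    # A fresh pipeline carries no leftover resumable state
    features = Features([("foo", Value("int64"))])
    d = IterableDataset.from_generator(test_gen, features=features)
    return d.map(lambda row: {"foo": row["foo"] * 2}).select_columns(["foo"])

print(list(build_column()))  # first pass
print(list(build_column()))  # second pass works because the state is fresh
```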
### Environment info
`datasets==2.20.0` on any platform. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7085/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7084/comments | https://api.github.com/repos/huggingface/datasets/issues/7084/events | https://github.com/huggingface/datasets/issues/7084 | 2,439,519,534 | I_kwDODunzps6RaB0u | 7,084 | More easily support streaming local files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | "2024-07-31T09:03:15Z" | "2024-07-31T09:05:58Z" | null | CONTRIBUTOR | null | ### Feature request
Simplify downloading and streaming datasets locally. Specifically, perhaps add an option to `load_dataset(..., streaming="download_first")` or add better support for streaming symlinked or arrow files.
### Motivation
I have downloaded FineWeb-edu locally and am currently trying to stream the dataset from the local files. I have both the raw parquet files using `huggingface-cli download --repo-type dataset HuggingFaceFW/fineweb-edu` and the processed arrow files using `load_dataset("HuggingFaceFW/fineweb-edu")`.
Streaming the files locally does not work well for either file type, for two different reasons.
**Arrow files**
When running `load_dataset("arrow", data_files={"train": "~/.cache/huggingface/datasets/HuggingFaceFW___fineweb-edu/default/0.0.0/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/fineweb-edu-train-*.arrow"})` resolving the data files is fast, but because `arrow` is not included in the known [extensions file list](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/utils/file_utils.py#L738) , all files are opened and scanned to determine the compression type. Adding `arrow` to the known extension types resolves this issue.
**Parquet files**
When running `load_dataset("parquet", data_files={"train": "~/.cache/huggingface/hub/dataset-HuggingFaceFW___fineweb-edu/snapshots/5b89d1ea9319fe101b3cbdacd89a903aca1d6052/data/CC-MAIN-*/train-*.parquet"})`, the paths do not get resolved because the parquet files are symlinked from the blobs (which contain all files in case there are different versions). This occurs because the [pattern matching](https://github.com/huggingface/datasets/blob/ce4a0c573920607bc6c814605734091b06b860e7/src/datasets/data_files.py#L389) checks if the path is a file and does not check for symlinks. Symlinks (at least on my machine) are of type "other".
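In the meantime, a "download first, then stream" pattern already exists in the current API (a sketch; `num_shards` is an arbitrary choice):
```python
from datasets import load_dataset

# Non-streaming load downloads/prepares the arrow files locally once...
ds = load_dataset("HuggingFaceFW/fineweb-edu", split="train")

# ...then wraps them so iteration behaves like a streaming dataset
streamed = ds.to_iterable_dataset(num_shards=64)
for example in streamed:
    break
```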
### Your contribution
I have created a PR for fixing arrow file streaming and symlinks. However, I have not checked locally if the tests work or new tests need to be added.
IMO, the easiest option would be to add a `streaming=download_first` option, but I'm afraid that exceeds my current knowledge of how the datasets library works. https://github.com/huggingface/datasets/pull/7083 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7084/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7083/comments | https://api.github.com/repos/huggingface/datasets/issues/7083/events | https://github.com/huggingface/datasets/pull/7083 | 2,439,518,466 | PR_kwDODunzps5292hC | 7,083 | fix streaming from arrow files | {
"avatar_url": "https://avatars.githubusercontent.com/u/23191892?v=4",
"events_url": "https://api.github.com/users/fschlatt/events{/privacy}",
"followers_url": "https://api.github.com/users/fschlatt/followers",
"following_url": "https://api.github.com/users/fschlatt/following{/other_user}",
"gists_url": "https://api.github.com/users/fschlatt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fschlatt",
"id": 23191892,
"login": "fschlatt",
"node_id": "MDQ6VXNlcjIzMTkxODky",
"organizations_url": "https://api.github.com/users/fschlatt/orgs",
"received_events_url": "https://api.github.com/users/fschlatt/received_events",
"repos_url": "https://api.github.com/users/fschlatt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fschlatt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fschlatt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fschlatt",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [] | "2024-07-31T09:02:42Z" | "2024-08-30T15:17:03Z" | "2024-08-30T15:17:03Z" | CONTRIBUTOR | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7083/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7083",
"merged_at": "2024-08-30T15:17:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7083"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7082/comments | https://api.github.com/repos/huggingface/datasets/issues/7082/events | https://github.com/huggingface/datasets/pull/7082 | 2,437,354,975 | PR_kwDODunzps522dTJ | 7,082 | Support HTTP authentication in non-streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7082). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005280 / 0.011353 (-0.006073) | 0.003726 / 0.011008 (-0.007282) | 0.067028 / 0.038508 (0.028520) | 0.030833 / 0.023109 (0.007724) | 0.256888 / 0.275898 (-0.019010) | 0.271252 / 0.323480 (-0.052228) | 0.003149 / 0.007986 (-0.004836) | 0.004031 / 0.004328 (-0.000298) | 0.051178 / 0.004250 (0.046927) | 0.042751 / 0.037052 (0.005699) | 0.268385 / 0.258489 (0.009896) | 0.295547 / 0.293841 (0.001706) | 0.030218 / 0.128546 (-0.098328) | 0.012033 / 0.075646 (-0.063613) | 0.206389 / 0.419271 (-0.212882) | 0.036227 / 0.043533 (-0.007306) | 0.258778 / 0.255139 (0.003639) | 0.276027 / 0.283200 (-0.007172) | 0.020309 / 0.141683 (-0.121374) | 1.109689 / 1.452155 (-0.342466) | 1.139979 / 1.492716 (-0.352738) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093615 / 0.018006 (0.075609) | 0.301279 / 0.000490 (0.300789) | 0.000207 / 0.000200 (0.000007) | 0.000048 / 0.000054 (-0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018697 / 0.037411 (-0.018715) | 0.062627 / 0.014526 (0.048101) | 0.075119 / 0.176557 (-0.101438) | 0.119960 / 0.737135 (-0.617175) | 0.074606 / 0.296338 (-0.221732) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.281042 / 0.215209 (0.065833) | 2.746232 / 2.077655 (0.668578) | 1.422351 / 1.504120 (-0.081769) | 1.290087 / 1.541195 (-0.251108) | 1.321067 / 
1.468490 (-0.147423) | 0.727514 / 4.584777 (-3.857263) | 2.407086 / 3.745712 (-1.338626) | 2.914191 / 5.269862 (-2.355670) | 1.872206 / 4.565676 (-2.693471) | 0.079538 / 0.424275 (-0.344738) | 0.005250 / 0.007607 (-0.002357) | 0.335536 / 0.226044 (0.109491) | 3.324922 / 2.268929 (1.055994) | 1.790688 / 55.444624 (-53.653936) | 1.475738 / 6.876477 (-5.400739) | 1.492465 / 2.142072 (-0.649607) | 0.812342 / 4.805227 (-3.992885) | 0.135036 / 6.500664 (-6.365628) | 0.041484 / 0.075469 (-0.033985) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.948425 / 1.841788 (-0.893363) | 11.321564 / 8.074308 (3.247256) | 9.635661 / 10.191392 (-0.555731) | 0.142793 / 0.680424 (-0.537631) | 0.014988 / 0.534201 (-0.519213) | 0.300209 / 0.579283 (-0.279074) | 0.262303 / 0.434364 (-0.172061) | 0.337927 / 0.540337 (-0.202411) | 0.427962 / 1.386936 (-0.958975) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005664 / 0.011353 (-0.005689) | 0.003946 / 0.011008 (-0.007062) | 0.050034 / 0.038508 (0.011526) | 0.031652 / 0.023109 (0.008543) | 0.281139 / 0.275898 (0.005241) | 0.299203 / 0.323480 (-0.024277) | 0.004332 / 0.007986 (-0.003653) | 0.002769 / 0.004328 (-0.001560) | 0.048336 / 0.004250 (0.044086) | 0.039744 / 0.037052 (0.002692) | 0.289344 / 0.258489 (0.030855) | 0.320470 / 0.293841 (0.026629) | 0.032372 / 0.128546 (-0.096174) | 0.012090 / 0.075646 (-0.063557) | 0.060838 / 0.419271 (-0.358433) | 0.034227 / 0.043533 (-0.009306) | 0.275007 / 0.255139 (0.019868) | 0.293455 / 0.283200 (0.010256) | 0.017203 / 0.141683 (-0.124480) | 1.141577 / 1.452155 (-0.310578) | 1.176761 / 1.492716 (-0.315955) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093562 / 0.018006 (0.075556) | 0.302695 / 0.000490 (0.302205) | 0.000215 / 0.000200 (0.000015) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022638 / 0.037411 (-0.014774) | 0.078788 / 0.014526 (0.064262) | 0.088474 / 0.176557 (-0.088082) | 0.128421 / 0.737135 (-0.608714) | 0.089297 / 0.296338 (-0.207041) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.302669 / 0.215209 (0.087459) | 2.963855 / 2.077655 (0.886200) | 1.600053 / 1.504120 (0.095933) | 1.461456 / 1.541195 (-0.079739) | 1.469877 / 1.468490 (0.001387) | 0.725752 / 4.584777 (-3.859025) | 0.968970 / 3.745712 (-2.776742) | 2.910502 / 5.269862 (-2.359359) | 1.902762 / 4.565676 (-2.662914) | 0.079977 / 0.424275 (-0.344298) | 0.005582 / 0.007607 (-0.002025) | 0.351626 / 0.226044 (0.125581) | 3.520593 / 2.268929 (1.251664) | 1.968950 / 55.444624 (-53.475675) | 1.662190 / 6.876477 (-5.214286) | 1.677909 / 2.142072 (-0.464163) | 0.791541 / 4.805227 (-4.013687) | 0.134647 / 6.500664 (-6.366017) | 0.040687 / 0.075469 (-0.034782) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.028885 / 1.841788 (-0.812903) | 11.928358 / 8.074308 (3.854050) | 10.199165 / 10.191392 (0.007773) | 0.142930 / 0.680424 (-0.537493) | 0.016479 / 0.534201 (-0.517722) | 0.302993 / 0.579283 (-0.276290) | 0.128878 / 0.434364 (-0.305486) | 0.342591 / 0.540337 (-0.197747) | 0.456735 / 1.386936 (-0.930201) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d298f5549893228c03e9e3a42727327cb83f3dff \"CML watermark\")\n"
] | "2024-07-30T09:25:49Z" | "2024-08-08T08:29:55Z" | "2024-08-08T08:24:06Z" | MEMBER | null | Support HTTP authentication in non-streaming mode, by support passing HTTP storage_options in non-streaming mode.
- Note that currently, HTTP authentication is supported only in streaming mode.
For example, this is necessary if a remote HTTP host requires authentication to download the data. | {
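A hedged usage sketch: in streaming mode, HTTP auth is already passed via fsspec-style storage options, and this PR extends the same pattern to non-streaming mode. The URL and token below are placeholders:
```python
from datasets import load_dataset

# Placeholder URL and token; `client_kwargs` headers are forwarded to
# fsspec's HTTP filesystem when downloading the remote file.
storage_options = {"client_kwargs": {"headers": {"Authorization": "Bearer <YOUR_TOKEN>"}}}
ds = load_dataset(
    "csv",
    data_files="https://example.com/protected/data.csv",
    storage_options=storage_options,
)
```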
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7082/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7082",
"merged_at": "2024-08-08T08:24:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7082"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7081/comments | https://api.github.com/repos/huggingface/datasets/issues/7081/events | https://github.com/huggingface/datasets/pull/7081 | 2,437,059,657 | PR_kwDODunzps521cGm | 7,081 | Set load_from_disk path type as PathLike | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7081). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005665 / 0.011353 (-0.005688) | 0.004130 / 0.011008 (-0.006878) | 0.064231 / 0.038508 (0.025723) | 0.030738 / 0.023109 (0.007628) | 0.251896 / 0.275898 (-0.024002) | 0.275182 / 0.323480 (-0.048298) | 0.003364 / 0.007986 (-0.004621) | 0.003569 / 0.004328 (-0.000759) | 0.049407 / 0.004250 (0.045157) | 0.048177 / 0.037052 (0.011124) | 0.253739 / 0.258489 (-0.004751) | 0.304087 / 0.293841 (0.010246) | 0.030457 / 0.128546 (-0.098089) | 0.012762 / 0.075646 (-0.062885) | 0.214312 / 0.419271 (-0.204959) | 0.036673 / 0.043533 (-0.006860) | 0.251838 / 0.255139 (-0.003301) | 0.274049 / 0.283200 (-0.009151) | 0.021133 / 0.141683 (-0.120550) | 1.143743 / 1.452155 (-0.308412) | 1.203681 / 1.492716 (-0.289036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094668 / 0.018006 (0.076662) | 0.300323 / 0.000490 (0.299833) | 0.000222 / 0.000200 (0.000022) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018565 / 0.037411 (-0.018846) | 0.066096 / 0.014526 (0.051570) | 0.075700 / 0.176557 (-0.100857) | 0.122185 / 0.737135 (-0.614950) | 0.077688 / 0.296338 (-0.218651) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288804 / 0.215209 (0.073595) | 2.838336 / 2.077655 (0.760681) | 1.530575 / 1.504120 (0.026455) | 1.406716 / 1.541195 (-0.134478) | 1.438885 / 
1.468490 (-0.029605) | 0.744809 / 4.584777 (-3.839968) | 2.447992 / 3.745712 (-1.297721) | 3.126261 / 5.269862 (-2.143601) | 1.999687 / 4.565676 (-2.565990) | 0.081536 / 0.424275 (-0.342739) | 0.005827 / 0.007607 (-0.001780) | 0.346367 / 0.226044 (0.120323) | 3.373268 / 2.268929 (1.104339) | 1.890293 / 55.444624 (-53.554332) | 1.590384 / 6.876477 (-5.286093) | 1.652101 / 2.142072 (-0.489971) | 0.805888 / 4.805227 (-3.999339) | 0.137687 / 6.500664 (-6.362977) | 0.044536 / 0.075469 (-0.030933) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.998393 / 1.841788 (-0.843395) | 12.392241 / 8.074308 (4.317933) | 10.055638 / 10.191392 (-0.135754) | 0.132347 / 0.680424 (-0.548077) | 0.014635 / 0.534201 (-0.519566) | 0.301939 / 0.579283 (-0.277344) | 0.266756 / 0.434364 (-0.167608) | 0.342730 / 0.540337 (-0.197608) | 0.435463 / 1.386936 (-0.951473) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006421 / 0.011353 (-0.004932) | 0.004494 / 0.011008 (-0.006514) | 0.051315 / 0.038508 (0.012806) | 0.035570 / 0.023109 (0.012460) | 0.271635 / 0.275898 (-0.004263) | 0.297082 / 0.323480 (-0.026398) | 0.004572 / 0.007986 (-0.003414) | 0.002886 / 0.004328 (-0.001443) | 0.049152 / 0.004250 (0.044902) | 0.043000 / 0.037052 (0.005948) | 0.281921 / 0.258489 (0.023432) | 0.321097 / 0.293841 (0.027256) | 0.033488 / 0.128546 (-0.095058) | 0.012835 / 0.075646 (-0.062811) | 0.061831 / 0.419271 (-0.357441) | 0.034674 / 0.043533 (-0.008858) | 0.272885 / 0.255139 (0.017746) | 0.292726 / 0.283200 (0.009527) | 0.019906 / 0.141683 (-0.121777) | 1.132234 / 1.452155 (-0.319920) | 1.155359 / 1.492716 (-0.337357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096943 / 0.018006 (0.078937) | 0.308980 / 0.000490 (0.308490) | 0.000225 / 0.000200 (0.000025) | 0.000047 / 0.000054 (-0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023551 / 0.037411 (-0.013861) | 0.081682 / 0.014526 (0.067156) | 0.090987 / 0.176557 (-0.085569) | 0.132542 / 0.737135 (-0.604593) | 0.092844 / 0.296338 (-0.203494) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.304190 / 0.215209 (0.088981) | 2.958591 / 2.077655 (0.880936) | 1.610211 / 1.504120 (0.106091) | 1.488216 / 1.541195 (-0.052978) | 1.525429 / 1.468490 (0.056939) | 0.752811 / 4.584777 (-3.831966) | 0.967887 / 3.745712 (-2.777825) | 2.982760 / 5.269862 (-2.287102) | 1.996623 / 4.565676 (-2.569053) | 0.080783 / 0.424275 (-0.343492) | 0.005337 / 0.007607 (-0.002270) | 0.354996 / 0.226044 (0.128951) | 3.540788 / 2.268929 (1.271860) | 1.997445 / 55.444624 (-53.447179) | 1.682232 / 6.876477 (-5.194245) | 1.883198 / 2.142072 (-0.258875) | 0.814444 / 4.805227 (-3.990783) | 0.135798 / 6.500664 (-6.364867) | 0.041750 / 0.075469 (-0.033719) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.048688 / 1.841788 (-0.793099) | 13.122809 / 8.074308 (5.048501) | 10.893354 / 10.191392 (0.701962) | 0.133710 / 0.680424 (-0.546713) | 0.016357 / 0.534201 (-0.517844) | 0.304364 / 0.579283 (-0.274919) | 0.126457 / 0.434364 (-0.307907) | 0.345747 / 0.540337 (-0.194591) | 0.441620 / 1.386936 (-0.945316) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#27ea8e8ead3e76bb07aa645f882945495d238ef3 \"CML watermark\")\n"
] | "2024-07-30T07:00:38Z" | "2024-07-30T08:30:37Z" | "2024-07-30T08:21:50Z" | MEMBER | null | Set `load_from_disk` path type as `PathLike`. This way it is aligned with `save_to_disk`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7081/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7081/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7081.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7081",
"merged_at": "2024-07-30T08:21:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7081.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7081"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7080/comments | https://api.github.com/repos/huggingface/datasets/issues/7080/events | https://github.com/huggingface/datasets/issues/7080 | 2,434,275,664 | I_kwDODunzps6RGBlQ | 7,080 | Generating train split takes a long time | {
"avatar_url": "https://avatars.githubusercontent.com/u/35648800?v=4",
"events_url": "https://api.github.com/users/alexanderswerdlow/events{/privacy}",
"followers_url": "https://api.github.com/users/alexanderswerdlow/followers",
"following_url": "https://api.github.com/users/alexanderswerdlow/following{/other_user}",
"gists_url": "https://api.github.com/users/alexanderswerdlow/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alexanderswerdlow",
"id": 35648800,
"login": "alexanderswerdlow",
"node_id": "MDQ6VXNlcjM1NjQ4ODAw",
"organizations_url": "https://api.github.com/users/alexanderswerdlow/orgs",
"received_events_url": "https://api.github.com/users/alexanderswerdlow/received_events",
"repos_url": "https://api.github.com/users/alexanderswerdlow/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alexanderswerdlow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexanderswerdlow/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alexanderswerdlow",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"@alexanderswerdlow \r\nWhen no specific split is mentioned, the load_dataset library will load all available splits of the dataset. For example, if a dataset has \"train\" and \"test\" splits, the load_dataset function will load both into the DatasetDict object.\r\n\r\n![image](https://github.com/user-attachments/assets/379e6f57-7e1b-4cc3-bc36-dae3e878a51c)\r\n\r\n\r\nThe dataset PixArt-alpha/SAM-LLaVA-Captions10M may have been uploaded with different predefined splits (e.g., \"train\", \"test\", etc.), and by default, Hugging Face will load all splits unless you specifically request only one.\r\n\r\n### If you want to load only a specific split (e.g., only the \"train\" set), you can specify it in the split parameter like this:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\", split=\"train\")\r\n```\r\n\r\n### You can also load multiple splits if needed:\r\n```python\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\", split=[\"train\", \"test\"])\r\n```\r\n\r\n",
"@alexanderswerdlow, I will now work on this..\r\n\r\n## Idea:\r\nWhenever this code has ran:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"PixArt-alpha/SAM-LLaVA-Captions10M\")\r\n```\r\n\r\nIt should show all the splits of the datasets, and user has to choose which one should be loaded before generating a split like this,,\r\n\r\n![image](https://github.com/user-attachments/assets/8fbc604f-f0a5-4a59-a63e-aa4c26442c83)\r\n"
] | "2024-07-29T01:42:43Z" | "2024-10-02T15:31:22Z" | null | NONE | null | ### Describe the bug
Loading a simple webdataset takes ~45 minutes.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M")
```
### Expected behavior
The dataset should load immediately as it does when loaded through a normal indexed WebDataset loader. Generating splits should be optional and there should be a message showing how to disable it.
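One way to avoid the upfront split generation today is streaming mode, which iterates over the WebDataset shards lazily instead of materializing them to Arrow first:
```python
from datasets import load_dataset

dataset = load_dataset("PixArt-alpha/SAM-LLaVA-Captions10M", streaming=True)
for example in dataset["train"]:
    ...  # consume examples without waiting for "Generating train split"
    break
```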
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.14
- `huggingface_hub` version: 0.24.1
- PyArrow version: 16.1.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7080/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7079/comments | https://api.github.com/repos/huggingface/datasets/issues/7079/events | https://github.com/huggingface/datasets/issues/7079 | 2,433,363,298 | I_kwDODunzps6RCi1i | 7,079 | HfHubHTTPError: 500 Server Error: Internal Server Error for url: | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"same issue here. @albertvillanova @lhoestq ",
"Also impacted by this issue in many of my datasets (though not all) - in my case, this also seems to affect datasets that have been updated recently. Git cloning and the web interface still work:\r\n- https://huggingface.co/api/datasets/acmc/cheat_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_reuter_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_wp_reduced\r\n- https://huggingface.co/api/datasets/acmc/ghostbuster_essay_reduced\r\n\r\nOddly enough, the system status looks good: https://status.huggingface.co/",
"Hey how to download these datasets using git cloning?",
"Also reported here\r\nhttps://github.com/huggingface/huggingface_hub/issues/2425",
"I have been getting the same error for the past 8 hours as well",
"Same error since yesterday, fails on any new dataset created",
"Same here. I cannot download the HelpSteer2 dataset: https://huggingface.co/datasets/nvidia/HelpSteer2 which has been uploaded about a month ago",
"> Hey how to download these datasets using git cloning?\n\nYou'll find a guide [here](https://huggingface.co/docs/hub/en/datasets-downloading) 👍🏻",
"Same here for imdb dataset",
"It also happens with this dataset: https://huggingface.co/datasets/ylacombe/jenny-tts-6h-tagged",
"same here for all datsets in the sentence-tramsformers repo and related collections.\r\n\r\nsame issue with dataset that i recently uploaded on my repo.\r\nseems that the upload date of the datset is not relevat (getting this issue with both old datasets and newer ones)\r\n\r\nfor some reason, i was able to get the dataset by turning it private and accessing it with the id token (accessing it as public while providing the token doesn not work)..... but i can say if that is just a random coincidence.\r\n\r\nseems not much deterministic, for a specific dataset (sentence-transformer nq ) , that was \"down\" since some hours , worked for like 5-10 minutes, then stopped again\r\n\r\nnow even this dataset (that worked since some min ago, and that i'm in the middle of processing steps) stopped working: _https://huggingface.co/datasets/bobox/msmarco-bm25-EduScore/_\r\n\r\nas already pointed out, there are no updates on **_https://status.huggingface.co/_**\r\n\r\n\\n\r\n\\n\r\n\r\nan example of the whole error message:\r\n``` \r\nHfHubHTTPError \r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, trust_remote_code, **config_kwargs)\r\n 2592 \r\n 2593 # Create a dataset builder\r\n-> 2594 builder_instance = load_dataset_builder(\r\n 2595 path=path,\r\n 2596 name=name,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, trust_remote_code, _require_default_config_name, **config_kwargs)\r\n 2264 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 2265 download_config.storage_options.update(storage_options)\r\n-> 2266 dataset_module = dataset_module_factory(\r\n 2267 path,\r\n 2268 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1912 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1913 ) from None\r\n-> 1914 raise e1 from None\r\n 1915 else:\r\n 1916 raise FileNotFoundError(\r\n\r\n[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, cache_dir, trust_remote_code, _require_default_config_name, _require_custom_configs, **download_kwargs)\r\n 1832 hf_api = HfApi(config.HF_ENDPOINT)\r\n 1833 try:\r\n-> 1834 dataset_info = hf_api.dataset_info(\r\n 1835 repo_id=path,\r\n 1836 revision=revision,\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in _inner_fn(*args, **kwargs)\r\n 112 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)\r\n 113 \r\n--> 114 return fn(*args, **kwargs)\r\n 115 \r\n 116 return _inner_fn # type: 
ignore\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/hf_api.py](https://localhost:8080/#) in dataset_info(self, repo_id, revision, timeout, files_metadata, token)\r\n 2362 \r\n 2363 r = get_session().get(path, headers=headers, timeout=timeout, params=params)\r\n-> 2364 hf_raise_for_status(r)\r\n 2365 data = r.json()\r\n 2366 return DatasetInfo(**data)\r\n\r\n[/usr/local/lib/python3.10/dist-packages/huggingface_hub/utils/_errors.py](https://localhost:8080/#) in hf_raise_for_status(response, endpoint_name)\r\n 369 # Convert `HTTPError` into a `HfHubHTTPError` to display request information\r\n 370 # as well (request id and/or server error message)\r\n--> 371 raise HfHubHTTPError(str(e), response=response) from e\r\n 372 \r\n 373 \r\n\r\nHfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/bobox/xSum-processed (Request ID: Root=1-66a527f0-756cfbc35cc466f075382289;7d5dc06a-37e9-4c22-874d-92b0b1023276)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!\r\n``` ",
"we're working on a fix !",
"We fixed the issue, you can load datasets again, sorry for the inconvenience !",
"I can confirm, it's working now. I can load the dataset, yay. Thank you @lhoestq ",
"@lhoestq thank you so much! ",
"Hi I'm getting the same error with this [dataset](https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset) \r\nWorking on the course of stable diffusion , trying to run this [notebook](https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/unit1/01_introduction_to_diffusers.ipynb#scrollTo=-yX-MZhSsxwp) \r\nthis is the error: \r\n`HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/datasets/huggan/smithsonian_butterflies_subset/resolve/3cdedf844922ab40393d46d4c7f81c596e1c6d45/data/train-00000-of-00001.parquet (Request ID: Root=1-66ed3481-3393f4ab268b711440d31e02;c3ca2a7d-ae7b-4ba3-9947-9426711946a8)\r\n\r\nInternal Error - We're working hard to fix this as soon as possible!`\r\n\r\n",
"Thanks for reporting, we are investigating !"
] | "2024-07-27T08:21:03Z" | "2024-09-20T13:26:25Z" | "2024-07-27T19:52:30Z" | NONE | null | ### Describe the bug
Newly uploaded datasets (since yesterday) yield an error.
Old datasets work fine.
It seems like the datasets API server returns a 500.
I'm getting the same error when I invoke `load_dataset` with my dataset.
There is a long discussion about it here, but I'm not sure anyone from Hugging Face has seen it.
https://discuss.huggingface.co/t/hfhubhttperror-500-server-error-internal-server-error-for-url/99580/1
### Steps to reproduce the bug
this API URL:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
responds with:
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
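The failing call can be reproduced outside the `datasets` library with a plain HTTP request; a minimal sketch using `requests`:
```python
import requests

r = requests.get("https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3")
print(r.status_code)  # 500 while the incident was ongoing
print(r.text)
```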
### Expected behavior
The API should return no error with newer datasets.
With older datasets, I can load them fine.
### Environment info
# Browser
When I access the API in the browser:
https://huggingface.co/api/datasets/neoneye/simon-arc-shape-v4-rev3
```
{"error":"Internal Error - We're working hard to fix this as soon as possible!"}
```
### Request headers
```
Accept text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
Accept-Encoding gzip, deflate, br, zstd
Accept-Language en-US,en;q=0.5
Connection keep-alive
Host huggingface.co
Priority u=1
Sec-Fetch-Dest document
Sec-Fetch-Mode navigate
Sec-Fetch-Site cross-site
Upgrade-Insecure-Requests 1
User-Agent Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:127.0) Gecko/20100101 Firefox/127.0
```
### Response headers
```
X-Firefox-Spdy h2
access-control-allow-origin https://huggingface.co
access-control-expose-headers X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range
content-length 80
content-type application/json; charset=utf-8
cross-origin-opener-policy same-origin
date Fri, 26 Jul 2024 19:09:45 GMT
etag W/"50-9qrwU+BNI4SD0Fe32p/nofkmv0c"
referrer-policy strict-origin-when-cross-origin
vary Origin
via 1.1 1624c79cd07e6098196697a6a7907e4a.cloudfront.net (CloudFront)
x-amz-cf-id SP8E7n5qRaP6i9c9G83dNAiOzJBU4GXSrDRAcVNTomY895K35H0nJQ==
x-amz-cf-pop CPH50-C1
x-cache Error from cloudfront
x-error-message Internal Error - We're working hard to fix this as soon as possible!
x-powered-by huggingface-moon
x-request-id Root=1-66a3f479-026417465ef42f49349fdca1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/147971?v=4",
"events_url": "https://api.github.com/users/neoneye/events{/privacy}",
"followers_url": "https://api.github.com/users/neoneye/followers",
"following_url": "https://api.github.com/users/neoneye/following{/other_user}",
"gists_url": "https://api.github.com/users/neoneye/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neoneye",
"id": 147971,
"login": "neoneye",
"node_id": "MDQ6VXNlcjE0Nzk3MQ==",
"organizations_url": "https://api.github.com/users/neoneye/orgs",
"received_events_url": "https://api.github.com/users/neoneye/received_events",
"repos_url": "https://api.github.com/users/neoneye/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neoneye/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neoneye/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neoneye",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7079/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7078/comments | https://api.github.com/repos/huggingface/datasets/issues/7078/events | https://github.com/huggingface/datasets/pull/7078 | 2,433,270,271 | PR_kwDODunzps52oq4n | 7,078 | Fix CI test_convert_to_parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7078). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005262 / 0.011353 (-0.006090) | 0.003733 / 0.011008 (-0.007275) | 0.062619 / 0.038508 (0.024111) | 0.029491 / 0.023109 (0.006382) | 0.248947 / 0.275898 (-0.026951) | 0.278741 / 0.323480 (-0.044739) | 0.003173 / 0.007986 (-0.004813) | 0.002777 / 0.004328 (-0.001551) | 0.049344 / 0.004250 (0.045094) | 0.043103 / 0.037052 (0.006051) | 0.252402 / 0.258489 (-0.006087) | 0.288030 / 0.293841 (-0.005811) | 0.029425 / 0.128546 (-0.099121) | 0.012058 / 0.075646 (-0.063589) | 0.204509 / 0.419271 (-0.214762) | 0.035721 / 0.043533 (-0.007812) | 0.249121 / 0.255139 (-0.006018) | 0.272171 / 0.283200 (-0.011029) | 0.019515 / 0.141683 (-0.122168) | 1.130088 / 1.452155 (-0.322067) | 1.148856 / 1.492716 (-0.343860) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093613 / 0.018006 (0.075607) | 0.300830 / 0.000490 (0.300340) | 0.000219 / 0.000200 (0.000019) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018381 / 0.037411 (-0.019030) | 0.061515 / 0.014526 (0.046989) | 0.074370 / 0.176557 (-0.102186) | 0.120751 / 0.737135 (-0.616384) | 0.074971 / 0.296338 (-0.221367) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.280499 / 0.215209 (0.065290) | 2.763114 / 2.077655 (0.685459) | 1.458696 / 1.504120 (-0.045424) | 1.331214 / 1.541195 (-0.209981) | 1.343157 / 
1.468490 (-0.125333) | 0.732775 / 4.584777 (-3.852002) | 2.381485 / 3.745712 (-1.364227) | 2.930117 / 5.269862 (-2.339745) | 1.887617 / 4.565676 (-2.678059) | 0.080543 / 0.424275 (-0.343732) | 0.005136 / 0.007607 (-0.002471) | 0.336924 / 0.226044 (0.110879) | 3.343071 / 2.268929 (1.074142) | 1.823677 / 55.444624 (-53.620948) | 1.572300 / 6.876477 (-5.304176) | 1.564040 / 2.142072 (-0.578032) | 0.802369 / 4.805227 (-4.002858) | 0.135198 / 6.500664 (-6.365466) | 0.041499 / 0.075469 (-0.033970) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.961202 / 1.841788 (-0.880585) | 11.275695 / 8.074308 (3.201387) | 9.508052 / 10.191392 (-0.683340) | 0.136921 / 0.680424 (-0.543503) | 0.014055 / 0.534201 (-0.520146) | 0.300076 / 0.579283 (-0.279208) | 0.263403 / 0.434364 (-0.170961) | 0.340871 / 0.540337 (-0.199466) | 0.433452 / 1.386936 (-0.953484) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005683 / 0.011353 (-0.005670) | 0.003596 / 0.011008 (-0.007412) | 0.049913 / 0.038508 (0.011405) | 0.033275 / 0.023109 (0.010166) | 0.266011 / 0.275898 (-0.009887) | 0.295182 / 0.323480 (-0.028298) | 0.004336 / 0.007986 (-0.003649) | 0.002787 / 0.004328 (-0.001541) | 0.049035 / 0.004250 (0.044784) | 0.039833 / 0.037052 (0.002781) | 0.283520 / 0.258489 (0.025031) | 0.317437 / 0.293841 (0.023596) | 0.032578 / 0.128546 (-0.095968) | 0.011744 / 0.075646 (-0.063902) | 0.060174 / 0.419271 (-0.359097) | 0.034182 / 0.043533 (-0.009351) | 0.271821 / 0.255139 (0.016682) | 0.292189 / 0.283200 (0.008989) | 0.017045 / 0.141683 (-0.124638) | 1.127742 / 1.452155 (-0.324413) | 1.180621 / 1.492716 (-0.312095) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.093798 / 0.018006 (0.075792) | 0.310715 / 0.000490 (0.310226) | 0.000213 / 0.000200 (0.000013) | 0.000046 / 0.000054 (-0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022379 / 0.037411 (-0.015032) | 0.076823 / 0.014526 (0.062298) | 0.088086 / 0.176557 (-0.088471) | 0.128926 / 0.737135 (-0.608210) | 0.089187 / 0.296338 (-0.207151) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.293982 / 0.215209 (0.078773) | 2.930932 / 2.077655 (0.853277) | 1.576425 / 1.504120 (0.072305) | 1.445163 / 1.541195 (-0.096031) | 1.462118 / 1.468490 (-0.006372) | 0.725816 / 4.584777 (-3.858961) | 0.949767 / 3.745712 (-2.795945) | 2.832821 / 5.269862 (-2.437041) | 1.897064 / 4.565676 (-2.668612) | 0.079853 / 0.424275 (-0.344423) | 0.005352 / 0.007607 (-0.002255) | 0.344551 / 0.226044 (0.118507) | 3.442506 / 2.268929 (1.173578) | 1.938700 / 55.444624 (-53.505925) | 1.662205 / 6.876477 (-5.214272) | 1.769061 / 2.142072 (-0.373011) | 0.818089 / 4.805227 (-3.987139) | 0.134612 / 6.500664 (-6.366052) | 0.040419 / 0.075469 (-0.035050) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.032267 / 1.841788 (-0.809521) | 11.902598 / 8.074308 (3.828290) | 10.342229 / 10.191392 (0.150837) | 0.140509 / 0.680424 (-0.539915) | 0.015593 / 0.534201 (-0.518608) | 0.303326 / 0.579283 (-0.275957) | 0.127391 / 0.434364 (-0.306973) | 0.342095 / 0.540337 (-0.198243) | 0.438978 / 1.386936 (-0.947958) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#30000fb6ca53126917ee17e1b1987f94f07a1569 \"CML watermark\")\n"
] | "2024-07-27T05:32:40Z" | "2024-07-27T05:50:57Z" | "2024-07-27T05:44:32Z" | MEMBER | null | Fix `test_convert_to_parquet` by patching `HfApi.preupload_lfs_files` and revert temporary fix:
- #7074 | {
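A hedged sketch of the patching approach; only `HfApi.preupload_lfs_files` comes from the PR description, while the test body and fixture name are assumptions:
```python
from unittest.mock import patch

from huggingface_hub import HfApi

def test_convert_to_parquet(temporary_repo):  # fixture name is hypothetical
    with patch.object(HfApi, "preupload_lfs_files") as mock_preupload:
        ...  # run convert_to_parquet against the temporary repo
    mock_preupload.assert_called()
```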
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7078/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7078",
"merged_at": "2024-07-27T05:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7078"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7077/comments | https://api.github.com/repos/huggingface/datasets/issues/7077/events | https://github.com/huggingface/datasets/issues/7077 | 2,432,345,489 | I_kwDODunzps6Q-qWR | 7,077 | column_names ignored by load_dataset() when loading CSV file | {
"avatar_url": "https://avatars.githubusercontent.com/u/9130265?v=4",
"events_url": "https://api.github.com/users/luismsgomes/events{/privacy}",
"followers_url": "https://api.github.com/users/luismsgomes/followers",
"following_url": "https://api.github.com/users/luismsgomes/following{/other_user}",
"gists_url": "https://api.github.com/users/luismsgomes/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/luismsgomes",
"id": 9130265,
"login": "luismsgomes",
"node_id": "MDQ6VXNlcjkxMzAyNjU=",
"organizations_url": "https://api.github.com/users/luismsgomes/orgs",
"received_events_url": "https://api.github.com/users/luismsgomes/received_events",
"repos_url": "https://api.github.com/users/luismsgomes/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/luismsgomes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luismsgomes/subscriptions",
"type": "User",
"url": "https://api.github.com/users/luismsgomes",
"user_view_type": "public"
} | [] | open | false | null | [] | null | [
"I confirm that `column_names` values are not copied to `names` variable because in this case `CsvConfig.__post_init__` is not called: `CsvConfig` is instantiated with default values and afterwards the `config_kwargs` are used to overwrite its attributes.\r\n\r\n@luismsgomes in the meantime, you can avoid the bug if you pass `names` instead of `column_names`."
] | "2024-07-26T14:18:04Z" | "2024-07-30T07:52:26Z" | null | NONE | null | ### Describe the bug
load_dataset() ignores the column_names kwarg when loading a CSV file. Instead, it takes the column names from the first line of the file, treating it as a header.
### Steps to reproduce the bug
Call `load_dataset` to load data from a CSV file and specify the `column_names` kwarg.
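
A minimal sketch of the reproduction (the file name, contents, and column names below are illustrative assumptions, not taken from the original report):

```python
# Hypothetical repro: write a CSV whose first line is meant to be data,
# then load it with an explicit column_names kwarg.
import csv

from datasets import load_dataset

with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["apple", "1"])   # intended as a data row, not a header
    writer.writerow(["banana", "2"])

ds = load_dataset("csv", data_files="data.csv", column_names=["fruit", "count"])

# Expected: columns ["fruit", "count"] and 2 rows.
# Observed per this report: the first line is still parsed as the header,
# so the columns become ["apple", "1"] and only 1 data row remains.
print(ds["train"].column_names)
print(ds["train"].num_rows)
```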
### Expected behavior
The resulting dataset should have the specified column names **and** the first line of the file should be treated as data values.
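
In the meantime, the maintainer comment above suggests passing `names` instead of `column_names`. A hedged sketch of that workaround (the `header=None` kwarg is an extra assumption to make it explicit that the first line is data):

```python
# Workaround sketch: use the pandas-style `names` kwarg, which takes effect
# even when the config is built from config_kwargs.
ds = load_dataset(
    "csv",
    data_files="data.csv",
    names=["fruit", "count"],
    header=None,  # assumption: state explicitly that the file has no header row
)
```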
### Environment info
- `datasets` version: 2.20.0
- Platform: Linux-5.10.0-30-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- `huggingface_hub` version: 0.24.2
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.5.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7077/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7076/comments | https://api.github.com/repos/huggingface/datasets/issues/7076/events | https://github.com/huggingface/datasets/pull/7076 | 2,432,275,393 | PR_kwDODunzps52lTDe | 7,076 | 🧪 Do not mock create_commit | {
"avatar_url": "https://avatars.githubusercontent.com/u/342922?v=4",
"events_url": "https://api.github.com/users/coyotte508/events{/privacy}",
"followers_url": "https://api.github.com/users/coyotte508/followers",
"following_url": "https://api.github.com/users/coyotte508/following{/other_user}",
"gists_url": "https://api.github.com/users/coyotte508/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/coyotte508",
"id": 342922,
"login": "coyotte508",
"node_id": "MDQ6VXNlcjM0MjkyMg==",
"organizations_url": "https://api.github.com/users/coyotte508/orgs",
"received_events_url": "https://api.github.com/users/coyotte508/received_events",
"repos_url": "https://api.github.com/users/coyotte508/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/coyotte508/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/coyotte508/subscriptions",
"type": "User",
"url": "https://api.github.com/users/coyotte508",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7076). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update."
] | "2024-07-26T13:44:42Z" | "2024-07-27T05:48:17Z" | "2024-07-27T05:48:17Z" | MEMBER | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7076/timeline | null | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7076",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/7076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7076"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/7075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7075/comments | https://api.github.com/repos/huggingface/datasets/issues/7075/events | https://github.com/huggingface/datasets/pull/7075 | 2,432,027,412 | PR_kwDODunzps52kciD | 7,075 | Update required soxr version from pre-release to release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_7075). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005717 / 0.011353 (-0.005636) | 0.004102 / 0.011008 (-0.006906) | 0.064343 / 0.038508 (0.025835) | 0.031510 / 0.023109 (0.008400) | 0.254534 / 0.275898 (-0.021364) | 0.275080 / 0.323480 (-0.048400) | 0.004243 / 0.007986 (-0.003742) | 0.002782 / 0.004328 (-0.001546) | 0.049554 / 0.004250 (0.045303) | 0.045291 / 0.037052 (0.008239) | 0.264118 / 0.258489 (0.005629) | 0.296476 / 0.293841 (0.002635) | 0.030298 / 0.128546 (-0.098248) | 0.012646 / 0.075646 (-0.063000) | 0.208403 / 0.419271 (-0.210869) | 0.036365 / 0.043533 (-0.007168) | 0.250294 / 0.255139 (-0.004845) | 0.276057 / 0.283200 (-0.007143) | 0.018687 / 0.141683 (-0.122996) | 1.128970 / 1.452155 (-0.323184) | 1.170923 / 1.492716 (-0.321793) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.134953 / 0.018006 (0.116947) | 0.301722 / 0.000490 (0.301232) | 0.000242 / 0.000200 (0.000042) | 0.000050 / 0.000054 (-0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019650 / 0.037411 (-0.017761) | 0.063404 / 0.014526 (0.048878) | 0.074883 / 0.176557 (-0.101674) | 0.122846 / 0.737135 (-0.614289) | 0.077410 / 0.296338 (-0.218928) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.287710 / 0.215209 (0.072501) | 2.813834 / 2.077655 (0.736179) | 1.454710 / 1.504120 (-0.049410) | 1.327303 / 1.541195 (-0.213891) | 1.375064 / 
1.468490 (-0.093426) | 0.746831 / 4.584777 (-3.837946) | 2.361008 / 3.745712 (-1.384705) | 3.080869 / 5.269862 (-2.188993) | 1.969927 / 4.565676 (-2.595749) | 0.081045 / 0.424275 (-0.343230) | 0.005168 / 0.007607 (-0.002440) | 0.342657 / 0.226044 (0.116613) | 3.404883 / 2.268929 (1.135955) | 1.840761 / 55.444624 (-53.603863) | 1.535400 / 6.876477 (-5.341076) | 1.584613 / 2.142072 (-0.557460) | 0.828003 / 4.805227 (-3.977224) | 0.135564 / 6.500664 (-6.365100) | 0.042717 / 0.075469 (-0.032752) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.985301 / 1.841788 (-0.856487) | 11.945913 / 8.074308 (3.871605) | 9.887577 / 10.191392 (-0.303815) | 0.141261 / 0.680424 (-0.539163) | 0.014961 / 0.534201 (-0.519240) | 0.304134 / 0.579283 (-0.275150) | 0.264733 / 0.434364 (-0.169631) | 0.349993 / 0.540337 (-0.190345) | 0.440390 / 1.386936 (-0.946546) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006145 / 0.011353 (-0.005207) | 0.004259 / 0.011008 (-0.006749) | 0.051245 / 0.038508 (0.012737) | 0.034873 / 0.023109 (0.011764) | 0.274149 / 0.275898 (-0.001749) | 0.299761 / 0.323480 (-0.023719) | 0.004457 / 0.007986 (-0.003529) | 0.002938 / 0.004328 (-0.001390) | 0.049547 / 0.004250 (0.045297) | 0.042441 / 0.037052 (0.005389) | 0.284961 / 0.258489 (0.026472) | 0.322197 / 0.293841 (0.028356) | 0.033850 / 0.128546 (-0.094696) | 0.012615 / 0.075646 (-0.063031) | 0.061967 / 0.419271 (-0.357304) | 0.035229 / 0.043533 (-0.008304) | 0.273941 / 0.255139 (0.018802) | 0.293395 / 0.283200 (0.010195) | 0.020566 / 0.141683 (-0.121117) | 1.173423 / 1.452155 (-0.278732) | 1.219948 / 1.492716 (-0.272768) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096131 / 0.018006 (0.078125) | 0.305548 / 0.000490 (0.305059) | 0.000217 / 0.000200 (0.000017) | 0.000044 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023847 / 0.037411 (-0.013564) | 0.079536 / 0.014526 (0.065010) | 0.088889 / 0.176557 (-0.087667) | 0.129181 / 0.737135 (-0.607954) | 0.090879 / 0.296338 (-0.205460) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299315 / 0.215209 (0.084106) | 2.952656 / 2.077655 (0.875001) | 1.587354 / 1.504120 (0.083234) | 1.453420 / 1.541195 (-0.087774) | 1.501784 / 1.468490 (0.033294) | 0.711481 / 4.584777 (-3.873296) | 0.971790 / 3.745712 (-2.773922) | 2.897636 / 5.269862 (-2.372226) | 1.947086 / 4.565676 (-2.618591) | 0.079700 / 0.424275 (-0.344575) | 0.005395 / 0.007607 (-0.002212) | 0.351340 / 0.226044 (0.125296) | 3.416472 / 2.268929 (1.147543) | 2.007559 / 55.444624 (-53.437066) | 1.660401 / 6.876477 (-5.216076) | 1.837049 / 2.142072 (-0.305024) | 0.817306 / 4.805227 (-3.987921) | 0.135176 / 6.500664 (-6.365488) | 0.041477 / 0.075469 (-0.033992) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.030033 / 1.841788 (-0.811755) | 12.528661 / 8.074308 (4.454353) | 10.603212 / 10.191392 (0.411820) | 0.142434 / 0.680424 (-0.537989) | 0.015603 / 0.534201 (-0.518598) | 0.304516 / 0.579283 (-0.274767) | 0.125324 / 0.434364 (-0.309040) | 0.343092 / 0.540337 (-0.197245) | 0.443359 / 1.386936 (-0.943577) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#45c5b3daacd7af212cae4c848a56e14d3cac291f \"CML watermark\")\n"
] | "2024-07-26T11:24:35Z" | "2024-07-26T11:46:52Z" | "2024-07-26T11:40:49Z" | MEMBER | null | Update required `soxr` version from pre-release to release 0.4.0: https://github.com/dofuuz/python-soxr/releases/tag/v0.4.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7075/timeline | null | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/7075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7075",
"merged_at": "2024-07-26T11:40:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7075"
} | true |