url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.68B
1.88B
| node_id
stringlengths 18
19
| number
int64 5.79k
6.2k
| title
stringlengths 1
280
| user
dict | labels
list | state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
list | milestone
null | comments
int64 0
44
| created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 3
values | active_lock_reason
null | body
stringlengths 3
17.6k
⌀ | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | draft
bool 2
classes | pull_request
dict | is_pull_request
bool 2
classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5893/comments | https://api.github.com/repos/huggingface/datasets/issues/5893/events | https://github.com/huggingface/datasets/pull/5893 | 1,722,519,056 | PR_kwDODunzps5RK40K | 5,893 | Load cached dataset as iterable | {
"login": "mariusz-jachimowicz-83",
"id": 10278877,
"node_id": "MDQ6VXNlcjEwMjc4ODc3",
"avatar_url": "https://avatars.githubusercontent.com/u/10278877?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariusz-jachimowicz-83",
"html_url": "https://github.com/mariusz-jachimowicz-83",
"followers_url": "https://api.github.com/users/mariusz-jachimowicz-83/followers",
"following_url": "https://api.github.com/users/mariusz-jachimowicz-83/following{/other_user}",
"gists_url": "https://api.github.com/users/mariusz-jachimowicz-83/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariusz-jachimowicz-83/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariusz-jachimowicz-83/subscriptions",
"organizations_url": "https://api.github.com/users/mariusz-jachimowicz-83/orgs",
"repos_url": "https://api.github.com/users/mariusz-jachimowicz-83/repos",
"events_url": "https://api.github.com/users/mariusz-jachimowicz-83/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariusz-jachimowicz-83/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 8 | 2023-05-23T17:40:35 | 2023-06-01T11:58:24 | 2023-06-01T11:51:29 | CONTRIBUTOR | null | To be used to train models it allows to load an IterableDataset from the cached Arrow file.
See https://github.com/huggingface/datasets/issues/5481 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5893/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5893",
"html_url": "https://github.com/huggingface/datasets/pull/5893",
"diff_url": "https://github.com/huggingface/datasets/pull/5893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5893.patch",
"merged_at": "2023-06-01T11:51:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5892/comments | https://api.github.com/repos/huggingface/datasets/issues/5892/events | https://github.com/huggingface/datasets/issues/5892 | 1,722,503,824 | I_kwDODunzps5mq1KQ | 5,892 | User access requests with manual review do not notify the dataset owner | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-23T17:27:46 | 2023-07-21T13:55:37 | 2023-07-21T13:55:36 | CONTRIBUTOR | null | ### Describe the bug
When a user access requests are enabled, and new requests are set to Manual Review, the dataset owner should be notified of the pending requests. However, instead, currently nothing happens, and so the dataset request can go unanswered for quite some time until the owner happens to check that particular dataset's Settings pane.
### Steps to reproduce the bug
1. Enable a dataset's user access requests
2. Set to Manual Review
3. Ask another HF user to request access to the dataset
4. Dataset owner is not notified
### Expected behavior
The dataset owner should receive some kind of notification, perhaps in their HF site inbox, or by email, when a dataset access request is made and manual review is enabled.
### Environment info
n/a | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5892/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5891/comments | https://api.github.com/repos/huggingface/datasets/issues/5891/events | https://github.com/huggingface/datasets/pull/5891 | 1,722,384,135 | PR_kwDODunzps5RKchn | 5,891 | Make split slicing consisten with list slicing | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-05-23T16:04:33 | 2023-05-23T16:11:12 | null | CONTRIBUTOR | null | Fix #1774, fix #5875
TODO: a test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5891/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5891",
"html_url": "https://github.com/huggingface/datasets/pull/5891",
"diff_url": "https://github.com/huggingface/datasets/pull/5891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5891.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5889/comments | https://api.github.com/repos/huggingface/datasets/issues/5889/events | https://github.com/huggingface/datasets/issues/5889 | 1,722,373,618 | I_kwDODunzps5mqVXy | 5,889 | Token Alignment for input and output data over train and test batch/dataset. | {
"login": "akesh1235",
"id": 125154243,
"node_id": "U_kgDOB3Wzww",
"avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akesh1235",
"html_url": "https://github.com/akesh1235",
"followers_url": "https://api.github.com/users/akesh1235/followers",
"following_url": "https://api.github.com/users/akesh1235/following{/other_user}",
"gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions",
"organizations_url": "https://api.github.com/users/akesh1235/orgs",
"repos_url": "https://api.github.com/users/akesh1235/repos",
"events_url": "https://api.github.com/users/akesh1235/events{/privacy}",
"received_events_url": "https://api.github.com/users/akesh1235/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-05-23T15:58:55 | 2023-05-23T15:58:55 | null | NONE | null | `data`
> DatasetDict({
train: Dataset({
features: ['input', 'output'],
num_rows: 4500
})
test: Dataset({
features: ['input', 'output'],
num_rows: 500
})
})
**# input (in-correct sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'
**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'
**I Want to align the output tokens with input**
```
`# tokenize both inputs and targets
def tokenize_fn(batch):
# tokenize the input sequence first
# this populates input_ids, attention_mask, etc.
tokenized_inputs = tokenizer(
batch['input']
)
labels_batch = tokenizer.tokenize(batch['output']) # original targets
aligned_labels_batch = []
for i, labels in enumerate(labels_batch):
word_ids = tokenized_inputs[i].word_ids()
aligned_labels_batch.append(align_targets(labels, word_ids)) # align_targets is another user defined function which is been called here
# recall: the 'target' must be stored in key called 'labels'
tokenized_inputs['labels'] = aligned_labels_batch
return tokenized_inputs`
```
```
data.map(
tokenize_fn,
batched=True,
remove_columns=data['train'].column_names,
)
```
When this user defined function is mapped to every records of train and test batch am getting following error:
**1.** **raise DatasetTransformationNotAllowedError(
3457 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it."**
**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]** | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5889/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5887/comments | https://api.github.com/repos/huggingface/datasets/issues/5887/events | https://github.com/huggingface/datasets/issues/5887 | 1,722,166,382 | I_kwDODunzps5mpixu | 5,887 | HuggingsFace dataset example give error | {
"login": "donhuvy",
"id": 1328316,
"node_id": "MDQ6VXNlcjEzMjgzMTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1328316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donhuvy",
"html_url": "https://github.com/donhuvy",
"followers_url": "https://api.github.com/users/donhuvy/followers",
"following_url": "https://api.github.com/users/donhuvy/following{/other_user}",
"gists_url": "https://api.github.com/users/donhuvy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donhuvy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donhuvy/subscriptions",
"organizations_url": "https://api.github.com/users/donhuvy/orgs",
"repos_url": "https://api.github.com/users/donhuvy/repos",
"events_url": "https://api.github.com/users/donhuvy/events{/privacy}",
"received_events_url": "https://api.github.com/users/donhuvy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] | null | 4 | 2023-05-23T14:09:05 | 2023-07-25T14:01:01 | 2023-07-25T14:01:00 | NONE | null | ### Describe the bug
![image](https://github.com/huggingface/datasets/assets/1328316/1f4f0086-3db9-4c79-906b-05a375357cce)
![image](https://github.com/huggingface/datasets/assets/1328316/733ebd3d-89b9-4ece-b80a-00ab5b0a4122)
### Steps to reproduce the bug
Use link as reference document written https://colab.research.google.com/github/huggingface/datasets/blob/main/notebooks/Overview.ipynb#scrollTo=biqDH9vpvSVz
```python
# Now let's train our model
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.train().to(device)
for i, batch in enumerate(dataloader):
batch.to(device)
outputs = model(**batch)
loss = outputs.loss
loss.backward()
optimizer.step()
model.zero_grad()
print(f'Step {i} - loss: {loss:.3}')
if i > 5:
break
```
Error
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-44-7040b885f382>](https://localhost:8080/#) in <cell line: 5>()
5 for i, batch in enumerate(dataloader):
6 batch.to(device)
----> 7 outputs = model(**batch)
8 loss = outputs.loss
9 loss.backward()
[/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py](https://localhost:8080/#) in _call_impl(self, *args, **kwargs)
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []
TypeError: DistilBertForQuestionAnswering.forward() got an unexpected keyword argument 'token_type_ids'
```
https://github.com/huggingface/datasets/assets/1328316/5d8b1d61-9337-4d59-8423-4f37f834c156
### Expected behavior
Run success on Google Colab (free)
### Environment info
Windows 11 x64, Google Colab free (my Google Drive just empty about 200 MB, but I don't think it cause problem) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5887/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5886/comments | https://api.github.com/repos/huggingface/datasets/issues/5886/events | https://github.com/huggingface/datasets/issues/5886 | 1,721,070,225 | I_kwDODunzps5mlXKR | 5,886 | Use work-stealing algorithm when parallel computing | {
"login": "1014661165",
"id": 46060451,
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1014661165",
"html_url": "https://github.com/1014661165",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"repos_url": "https://api.github.com/users/1014661165/repos",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-05-23T03:08:44 | 2023-05-24T15:30:09 | null | NONE | null | ### Feature request
when i used Dataset.map api to process data concurrently, i found that
it gets slower and slower as it gets closer to completion. Then i read the source code of arrow_dataset.py and found that it shard the dataset and use multiprocessing pool to execute each shard.It may cause the slowest task to drag out the entire program's execution time,especially when processing huge dataset.
### Motivation
using work-stealing algorithm instead of sharding and parallel computing to optimize performance.
### Your contribution
just an idea. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5886/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5885/comments | https://api.github.com/repos/huggingface/datasets/issues/5885/events | https://github.com/huggingface/datasets/pull/5885 | 1,720,954,440 | PR_kwDODunzps5RFjTL | 5,885 | Modify `is_remote_filesystem` to return True for FUSE-mounted paths | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 5 | 2023-05-23T01:04:54 | 2023-05-25T08:50:48 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5885/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5885",
"html_url": "https://github.com/huggingface/datasets/pull/5885",
"diff_url": "https://github.com/huggingface/datasets/pull/5885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5885.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5888/comments | https://api.github.com/repos/huggingface/datasets/issues/5888/events | https://github.com/huggingface/datasets/issues/5888 | 1,722,290,363 | I_kwDODunzps5mqBC7 | 5,888 | A way to upload and visualize .mp4 files (millions of them) as part of a dataset | {
"login": "AntreasAntoniou",
"id": 10792502,
"node_id": "MDQ6VXNlcjEwNzkyNTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AntreasAntoniou",
"html_url": "https://github.com/AntreasAntoniou",
"followers_url": "https://api.github.com/users/AntreasAntoniou/followers",
"following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}",
"gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions",
"organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs",
"repos_url": "https://api.github.com/users/AntreasAntoniou/repos",
"events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}",
"received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 9 | 2023-05-22T18:05:26 | 2023-06-23T03:37:16 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I recently chose to use huggingface hub as the home for a large multi modal dataset I've been building. https://huggingface.co/datasets/Antreas/TALI
It combines images, text, audio and video. Now, I could very easily upload a dataset made via datasets.Dataset.from_generator, as long as it did not include video files. I found that including .mp4 files in the entries would not auto-upload those files.
Hence I tried to upload them myself. I quickly found out that uploading many small files is a very bad way to use git lfs, and that it would take ages, so, I resorted to using 7z to pack them all up. But then I had a new problem.
My dataset had a size of 1.9TB. Trying to upload such a large file with the default huggingface_hub API always resulted in time outs etc. So I decided to split the large files into chunks of 5GB each and reupload.
So, eventually it all worked out. But now the dataset can't be properly and natively used by the datasets API because of all the needed preprocessing -- and furthermore the hub is unable to visualize things.
**Describe the solution you'd like**
A native way to upload large datasets that include .mp4 or other video types.
**Describe alternatives you've considered**
Already explained earlier
**Additional context**
https://huggingface.co/datasets/Antreas/TALI
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5888/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5884/comments | https://api.github.com/repos/huggingface/datasets/issues/5884/events | https://github.com/huggingface/datasets/issues/5884 | 1,719,548,172 | I_kwDODunzps5mfjkM | 5,884 | `Dataset.to_tf_dataset` fails when strings cannot be encoded as `np.bytes_` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
}
] | null | 2 | 2023-05-22T12:03:06 | 2023-06-09T16:04:56 | 2023-06-09T16:04:55 | CONTRIBUTOR | null | ### Describe the bug
When loading any dataset that contains a column with strings that are not ASCII-compatible, looping over those records raises the following exception e.g. for `é` character `UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)`.
### Steps to reproduce the bug
Running the following script will eventually fail, when reaching to the batch that contains non-ASCII compatible strings.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
>>> UnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 0: ordinal not in range(128)
```
### Expected behavior
The following script to run properly, making sure that the strings are either `numpy.unicode_` or `numpy.string` instead of `numpy.bytes_` since some characters are not ASCII compatible and that would lead to an issue when applying the `map`.
```python
from datasets import load_dataset
ds = load_dataset("imdb", split="train")
tfds = ds.to_tf_dataset(batch_size=16)
for batch in tfds:
print(batch)
```
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5884/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5883/comments | https://api.github.com/repos/huggingface/datasets/issues/5883/events | https://github.com/huggingface/datasets/pull/5883 | 1,719,527,597 | PR_kwDODunzps5RAkYi | 5,883 | Fix string-encoding, make `batch_size` optional, and minor improvements in `Dataset.to_tf_dataset` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 29 | 2023-05-22T11:51:07 | 2023-06-08T11:09:03 | 2023-06-06T16:49:15 | CONTRIBUTOR | null | ## What's in this PR?
This PR addresses some minor fixes and general improvements in the `to_tf_dataset` method of `datasets.Dataset`, to convert a 🤗HuggingFace Dataset as a TensorFlow Dataset.
The main bug solved in this PR comes with the string-encoding, since for safety purposes the internal conversion of `numpy.arrays` when `dtype` is unicode/string, is to convert it into `numpy.bytes`, more information in the docstring of https://github.com/tensorflow/tensorflow/blob/388d952114e59a1aeda440ed4737b29f8b7c6e8a/tensorflow/python/ops/script_ops.py#L210. That's triggered when using `tensorflow.numpy_function` as it's applying another type cast besides the one that `datasets` does, so the casting is applied at least twice per entry/batch. So this means that the definition of the `numpy.unicode_` dtype when the data in the batch is a string, is ignored, and replaced by `numpy.bytes_`.
Besides that, some other minor things have been fixed:
* Made `batch_size` an optional parameter in `to_tf_dataset`
* Map the `tensorflow` output dtypes just once, and not in every `tf.function` call during `map`
* Keep `numpy` formatting in the `datasets.Dataset` if already formatted like it, no need to format it again as `numpy`
* Docstring indentation in `dataset_to_tf` and `multiprocess_dataset_to_tf`
## What's missing in this PR?
I can include some integration tests if needed, to validate that `batch_size` is optional, and that the tensors in the TF-Dataset can be looped over with no issues as before. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5883/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5883",
"html_url": "https://github.com/huggingface/datasets/pull/5883",
"diff_url": "https://github.com/huggingface/datasets/pull/5883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5883.patch",
"merged_at": "2023-06-06T16:49:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5881/comments | https://api.github.com/repos/huggingface/datasets/issues/5881/events | https://github.com/huggingface/datasets/issues/5881 | 1,719,402,643 | I_kwDODunzps5mfACT | 5,881 | Split dataset by node: index error when sharding iterable dataset | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-05-22T10:36:13 | 2023-05-23T08:32:14 | null | CONTRIBUTOR | null | ### Describe the bug
Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers
When we iterate over it for 5 steps, we don't get an error
When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers
### Steps to reproduce the bug
Here, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https://huggingface.co/datasets/distil-whisper/librispeech_asr/blob/c6a1e805cbfeed5057400ac5937327d7e30281b8/librispeech_asr.py#L310
<details>
<summary> Code to reproduce </summary>
```python
from datasets import load_dataset
import jax
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
from tqdm import tqdm
# load an example dataset (https://huggingface.co/datasets/distil-whisper/librispeech_asr)
dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True)
# just keep the text column -> no need to define a collator
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
# define some constants
batch_size = 256
num_examples = 5 # works for 5 examples, doesn't for 8
num_workers = dataset_text.n_shards
# try with multiple workers
dataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Multiple workers"):
if i == num_examples:
break
# try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())
# remove the text column again
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
dataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers // 2, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Split by node"):
if i == num_examples:
break
# too many workers
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
if i == num_examples:
break
```
</details>
<details>
<summary> With 5 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.33s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.76s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary t
o have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more
files than 7.
Too many workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:15<00:00, 3.03s/it]
```
</details>
<details>
<summary> With 7 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 8/8 [00:13<00:00, 1.71s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00, 1.38s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 88%|██████████████████████████████████████████████████████████▋ | 7/8 [00:13<00:01, 1.89s/it]
Traceback (most recent call last):
File "distil-whisper/test_librispeech.py", line 36, in <module>
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
return self._process_data(data)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 644, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 7.
Original Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 986, in __iter__
yield from self._iter_pytorch(ex_iterable)
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 920, in _iter_pytorch
for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 540, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 796, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 126, in shard_data_sources
requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])
File "/home/sanchitgandhi/datasets/src/datasets/utils/sharding.py", line 76, in _merge_gen_kwargs
for key in gen_kwargs_list[0]
IndexError: list index out of range
```
</details>
### Expected behavior
Should pass for both 5 and 7 examples
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5881/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5880/comments | https://api.github.com/repos/huggingface/datasets/issues/5880/events | https://github.com/huggingface/datasets/issues/5880 | 1,719,090,101 | I_kwDODunzps5mdzu1 | 5,880 | load_dataset from s3 file system through streaming can't not iterate data | {
"login": "janineguo",
"id": 59083384,
"node_id": "MDQ6VXNlcjU5MDgzMzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/59083384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/janineguo",
"html_url": "https://github.com/janineguo",
"followers_url": "https://api.github.com/users/janineguo/followers",
"following_url": "https://api.github.com/users/janineguo/following{/other_user}",
"gists_url": "https://api.github.com/users/janineguo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/janineguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/janineguo/subscriptions",
"organizations_url": "https://api.github.com/users/janineguo/orgs",
"repos_url": "https://api.github.com/users/janineguo/repos",
"events_url": "https://api.github.com/users/janineguo/events{/privacy}",
"received_events_url": "https://api.github.com/users/janineguo/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-05-22T07:40:27 | 2023-05-26T12:52:08 | null | CONTRIBUTOR | null | ### Describe the bug
I have a JSON file in my s3 file system(minio), I can use load_dataset to get the file link, but I can't iterate it
<img width="816" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/cc0778d3-36f3-45b5-ac68-4e7c664c2ed0">
<img width="1144" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/76872af3-8b3c-42ff-9f55-528c920a7af1">
we can change 4 lines to fix this bug, you can check whether it is ok for us.
<img width="941" alt="image" src="https://github.com/huggingface/datasets/assets/59083384/5a22155a-ece7-496c-8506-047e5c235cd3">
### Steps to reproduce the bug
1. storage a file in you s3 file system
2. use load_dataset to read it through streaming
3. iterate it
### Expected behavior
can iterate it successfully
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5880/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5880/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5878/comments | https://api.github.com/repos/huggingface/datasets/issues/5878/events | https://github.com/huggingface/datasets/issues/5878 | 1,718,203,843 | I_kwDODunzps5mabXD | 5,878 | Prefetching for IterableDataset | {
"login": "vyeevani",
"id": 30946190,
"node_id": "MDQ6VXNlcjMwOTQ2MTkw",
"avatar_url": "https://avatars.githubusercontent.com/u/30946190?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vyeevani",
"html_url": "https://github.com/vyeevani",
"followers_url": "https://api.github.com/users/vyeevani/followers",
"following_url": "https://api.github.com/users/vyeevani/following{/other_user}",
"gists_url": "https://api.github.com/users/vyeevani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vyeevani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vyeevani/subscriptions",
"organizations_url": "https://api.github.com/users/vyeevani/orgs",
"repos_url": "https://api.github.com/users/vyeevani/repos",
"events_url": "https://api.github.com/users/vyeevani/events{/privacy}",
"received_events_url": "https://api.github.com/users/vyeevani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2023-05-20T15:25:40 | 2023-06-01T17:40:00 | null | NONE | null | ### Feature request
Add support for prefetching the next n batches through iterabledataset to reduce batch loading bottleneck in training loop.
### Motivation
The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low ram or low disk space setting as well as quick iteration where you're iterating though different accelerator environments (e.x changing ec2 instances quickly to figure out batch/sec for a particular architecture).
Currently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset lazy loading/transform/mapping.
I've considered two alternatives:
PyTorch dataloader that handles this. However, I'm using jax, and I believe this is a piece of functionality that should live in the stream class.
Replicating the "num_workers" part of the PyTorch DataLoader to eagerly load batches and apply the transform so Arrow caching will automatically cache results and make them accessible.
### Your contribution
I may or may not have time to do this. Currently, I've written the basic multiprocessor approach to handle the eager DataLoader for my own use case with code that's not integrated to datasets. I'd definitely see this as being the default over the regular Dataset for most people given that they wouldn't have to wait on the datasets while also not worrying about performance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5878/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5878/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5877/comments | https://api.github.com/repos/huggingface/datasets/issues/5877/events | https://github.com/huggingface/datasets/issues/5877 | 1,717,983,961 | I_kwDODunzps5mZlrZ | 5,877 | Request for text deduplication feature | {
"login": "SupreethRao99",
"id": 55043035,
"node_id": "MDQ6VXNlcjU1MDQzMDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/55043035?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SupreethRao99",
"html_url": "https://github.com/SupreethRao99",
"followers_url": "https://api.github.com/users/SupreethRao99/followers",
"following_url": "https://api.github.com/users/SupreethRao99/following{/other_user}",
"gists_url": "https://api.github.com/users/SupreethRao99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SupreethRao99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SupreethRao99/subscriptions",
"organizations_url": "https://api.github.com/users/SupreethRao99/orgs",
"repos_url": "https://api.github.com/users/SupreethRao99/repos",
"events_url": "https://api.github.com/users/SupreethRao99/events{/privacy}",
"received_events_url": "https://api.github.com/users/SupreethRao99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2023-05-20T01:56:00 | 2023-07-26T21:42:14 | null | NONE | null | ### Feature request
It would be great if there would be support for high performance, highly scalable text deduplication algorithms as part of the datasets library.
### Motivation
Motivated by this blog post https://huggingface.co/blog/dedup and this library https://github.com/google-research/deduplicate-text-datasets, but slightly frustrated by how its not very easy to work with these tools I am proposing this feature.
### Your contribution
I would be happy to contribute to the development effort of this feature. would love to collaborate with others in the development effort. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5877/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5876/comments | https://api.github.com/repos/huggingface/datasets/issues/5876/events | https://github.com/huggingface/datasets/issues/5876 | 1,717,978,985 | I_kwDODunzps5mZkdp | 5,876 | Incompatibility with DataLab | {
"login": "helpmefindaname",
"id": 26192135,
"node_id": "MDQ6VXNlcjI2MTkyMTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/26192135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/helpmefindaname",
"html_url": "https://github.com/helpmefindaname",
"followers_url": "https://api.github.com/users/helpmefindaname/followers",
"following_url": "https://api.github.com/users/helpmefindaname/following{/other_user}",
"gists_url": "https://api.github.com/users/helpmefindaname/gists{/gist_id}",
"starred_url": "https://api.github.com/users/helpmefindaname/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/helpmefindaname/subscriptions",
"organizations_url": "https://api.github.com/users/helpmefindaname/orgs",
"repos_url": "https://api.github.com/users/helpmefindaname/repos",
"events_url": "https://api.github.com/users/helpmefindaname/events{/privacy}",
"received_events_url": "https://api.github.com/users/helpmefindaname/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | 2 | 2023-05-20T01:39:11 | 2023-05-25T06:42:34 | 2023-05-25T06:42:34 | NONE | null | ### Describe the bug
Hello,
I am currently working on a project where both [DataLab](https://github.com/ExpressAI/DataLab) and [datasets](https://github.com/huggingface/datasets) are subdependencies.
I noticed that I cannot import both libraries, as they both register FileSystems in `fsspec`, expecting the FileSystems not being registered before.
When running the code below, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\__init__.py", line 28, in <module>
from datalabs.arrow_dataset import concatenate_datasets, Dataset
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_dataset.py", line 60, in <module>
from datalabs.arrow_writer import ArrowWriter, OptimizedTypedSequence
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\arrow_writer.py", line 28, in <module>
from datalabs.features import (
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\__init__.py", line 2, in <module>
from datalabs.features.audio import Audio
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\features\audio.py", line 21, in <module>
from datalabs.utils.streaming_download_manager import xopen
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\utils\streaming_download_manager.py", line 16, in <module>
from datalabs.filesystems import COMPRESSION_FILESYSTEMS
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\datalabs\filesystems\__init__.py", line 37, in <module>
fsspec.register_implementation(fs_class.protocol, fs_class)
File "C:\Users\Bened\anaconda3\envs\ner-eval-dashboard2\lib\site-packages\fsspec\registry.py", line 51, in register_implementation
raise ValueError(
ValueError: Name (bz2) already in the registry and clobber is False
```
I think as simple solution would be to just set `clobber=True` in https://github.com/huggingface/datasets/blob/main/src/datasets/filesystems/__init__.py#L28. This allows the register to discard previous registrations. This should work, as the datalabs FileSystems are copies of the datasets FileSystems. However, I don't know if it is guaranteed to be compatible with other libraries that might use the same protocols.
I am linking the symmetric issue on [DataLab](https://github.com/ExpressAI/DataLab/issues/425) as ideally the issue is solved in both libraries the same way. Otherwise, it could lead to different behaviors depending on which library gets imported first.
### Steps to reproduce the bug
1. Run `pip install datalabs==0.4.15 datasets==2.12.0`
2. Run the following python code:
```
import datalabs
import datasets
```
### Expected behavior
It should be possible to import both libraries without getting a Value Error
### Environment info
datalabs==0.4.15
datasets==2.12.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5876/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5875/comments | https://api.github.com/repos/huggingface/datasets/issues/5875/events | https://github.com/huggingface/datasets/issues/5875 | 1,716,770,394 | I_kwDODunzps5mU9Za | 5,875 | Why split slicing doesn't behave like list slicing ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | open | false | null | [] | null | 1 | 2023-05-19T07:21:10 | 2023-05-23T16:02:14 | null | NONE | null | ### Describe the bug
If I want to get the first 10 samples of my dataset, I can do :
```
ds = datasets.load_dataset('mnist', split='train[:10]')
```
But if I exceed the number of samples in the dataset, an exception is raised :
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
> ValueError: Requested slice [:999999999] incompatible with 60000 examples.
### Steps to reproduce the bug
```
ds = datasets.load_dataset('mnist', split='train[:999999999]')
```
### Expected behavior
I would expect it to behave like python lists (no exception raised, the whole list is kept) :
```
d = list(range(1000))[:999999]
print(len(d)) # > 1000
```
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5875/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5874/comments | https://api.github.com/repos/huggingface/datasets/issues/5874/events | https://github.com/huggingface/datasets/issues/5874 | 1,715,708,930 | I_kwDODunzps5mQ6QC | 5,874 | Using as_dataset on a "parquet" builder | {
"login": "rems75",
"id": 9039058,
"node_id": "MDQ6VXNlcjkwMzkwNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9039058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rems75",
"html_url": "https://github.com/rems75",
"followers_url": "https://api.github.com/users/rems75/followers",
"following_url": "https://api.github.com/users/rems75/following{/other_user}",
"gists_url": "https://api.github.com/users/rems75/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rems75/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rems75/subscriptions",
"organizations_url": "https://api.github.com/users/rems75/orgs",
"repos_url": "https://api.github.com/users/rems75/repos",
"events_url": "https://api.github.com/users/rems75/events{/privacy}",
"received_events_url": "https://api.github.com/users/rems75/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-18T14:09:03 | 2023-05-31T13:23:55 | 2023-05-31T13:23:55 | NONE | null | ### Describe the bug
I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder("rotten_tomatoes")
>>> ds = builder.download_and_prepare("./output_dir", file_format="parquet")
```
The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it returns:
`
FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail:
[errno 2] No such file or directory.
`
### Steps to reproduce the bug
1. Create a custom builder of some sort: `builder = CustomBuilder()`.
2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`.
3. Run `dataset = builder.as_dataset()`.
### Expected behavior
I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part).
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5874/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5873/comments | https://api.github.com/repos/huggingface/datasets/issues/5873/events | https://github.com/huggingface/datasets/issues/5873 | 1,713,269,724 | I_kwDODunzps5mHmvc | 5,873 | Allow setting the environment variable for the lock file path | {
"login": "xin3he",
"id": 83260933,
"node_id": "MDQ6VXNlcjgzMjYwOTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/83260933?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xin3he",
"html_url": "https://github.com/xin3he",
"followers_url": "https://api.github.com/users/xin3he/followers",
"following_url": "https://api.github.com/users/xin3he/following{/other_user}",
"gists_url": "https://api.github.com/users/xin3he/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xin3he/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xin3he/subscriptions",
"organizations_url": "https://api.github.com/users/xin3he/orgs",
"repos_url": "https://api.github.com/users/xin3he/repos",
"events_url": "https://api.github.com/users/xin3he/events{/privacy}",
"received_events_url": "https://api.github.com/users/xin3he/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-05-17T07:10:02 | 2023-05-17T07:11:05 | null | NONE | null | ### Feature request
Add an environment variable to replace the default lock file path.
### Motivation
Usually, dataset path is a read-only path while the lock file needs to be modified each time. It would be convenient if the path can be reset individually.
### Your contribution
```/src/datasets/utils/filelock.py
class UnixFileLock(BaseFileLock):
def __init__(self, lock_file, timeout=-1, max_filename_length=None):
#-------------------
if os.getenv('DS_TMP_PATH'):
file_name = str(lock_file).split('/')[-1]
dataset_tmp_path = os.getenv('DS_TMP_PATH')
lock_file = os.path.join(dataset_tmp_path, file_name)
#-------------------
max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax
super().__init__(lock_file, timeout=timeout, max_filename_length=max_filename_length)
```
A simple demo is as upper. Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5873/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5872/comments | https://api.github.com/repos/huggingface/datasets/issues/5872/events | https://github.com/huggingface/datasets/pull/5872 | 1,713,174,662 | PR_kwDODunzps5QrQ5o | 5,872 | Fix infer module for uppercase extensions | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-17T05:56:45 | 2023-05-17T14:26:59 | 2023-05-17T14:19:18 | MEMBER | null | Fix the `infer_module_for_data_files` and `infer_module_for_data_files_in_archives` functions when passed a data file name with uppercase extension, e.g. `filename.TXT`.
Before, `None` module was returned. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5872/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5872",
"html_url": "https://github.com/huggingface/datasets/pull/5872",
"diff_url": "https://github.com/huggingface/datasets/pull/5872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5872.patch",
"merged_at": "2023-05-17T14:19:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5871/comments | https://api.github.com/repos/huggingface/datasets/issues/5871/events | https://github.com/huggingface/datasets/issues/5871 | 1,712,573,073 | I_kwDODunzps5mE8qR | 5,871 | data configuration hash suffix depends on uncanonicalized data_dir | {
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "kylrth",
"id": 5044802,
"node_id": "MDQ6VXNlcjUwNDQ4MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5044802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kylrth",
"html_url": "https://github.com/kylrth",
"followers_url": "https://api.github.com/users/kylrth/followers",
"following_url": "https://api.github.com/users/kylrth/following{/other_user}",
"gists_url": "https://api.github.com/users/kylrth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kylrth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kylrth/subscriptions",
"organizations_url": "https://api.github.com/users/kylrth/orgs",
"repos_url": "https://api.github.com/users/kylrth/repos",
"events_url": "https://api.github.com/users/kylrth/events{/privacy}",
"received_events_url": "https://api.github.com/users/kylrth/received_events",
"type": "User",
"site_admin": false
}
] | null | 3 | 2023-05-16T18:56:04 | 2023-06-02T15:52:05 | 2023-06-02T15:52:05 | CONTRIBUTOR | null | ### Describe the bug
I am working with the `recipe_nlg` dataset, which requires manual download. Once it's downloaded, I've noticed that the hash in the custom data configuration is different if I add a trailing `/` to my `data_dir`. It took me a while to notice that the hashes were different, and to understand that that was the cause of my dataset being processed anew instead of the cached version being used.
### Steps to reproduce the bug
1. Follow the steps to manually download the `recipe_nlg` dataset to `/data/recipenlg`.
2. Load it using `load_dataset`, once without a trailing slash and once with one:
```python
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg")
Using custom data configuration default-082278caeea85765
Downloading and preparing dataset recipe_nlg/default to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Dataset recipe_nlg downloaded and prepared to /home/kyle/.cache/huggingface/datasets/recipe_nlg/default-082278caeea85765/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████| 1/1 [00:01<00:00, 1.10s/it]
DatasetDict({
train: Dataset({
features: ['id', 'title', 'ingredients', 'directions', 'link', 'source', 'ner'],
num_rows: 2231142
})
})
>>> ds = load_dataset("recipe_nlg", data_dir="/data/recipenlg/")
Using custom data configuration default-83e87680785d0493
Downloading and preparing dataset recipe_nlg/default to /home/user/.cache/huggingface/datasets/recipe_nlg/default-83e87680785d0493/1.0.0/aa4f120223637bedf7360cecb70a9bd108acfd64e38207ca90c9f385d21e5e74...
Generating train split: 1%| | 12701/2231142 [00:04<13:15, 2790.25 examples/s
^C
```
3. Observe that the hash suffix in the custom data configuration changes due to the altered string.
### Expected behavior
I think I would expect the hash to remain constant if it actually points to the same location on disk. I would expect the use of `os.path.normpath` to canonicalize the paths.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.4.0-147-generic-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5871/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5870/comments | https://api.github.com/repos/huggingface/datasets/issues/5870/events | https://github.com/huggingface/datasets/issues/5870 | 1,712,156,282 | I_kwDODunzps5mDW56 | 5,870 | Behaviour difference between datasets.map and IterableDatasets.map | {
"login": "llStringll",
"id": 30209072,
"node_id": "MDQ6VXNlcjMwMjA5MDcy",
"avatar_url": "https://avatars.githubusercontent.com/u/30209072?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/llStringll",
"html_url": "https://github.com/llStringll",
"followers_url": "https://api.github.com/users/llStringll/followers",
"following_url": "https://api.github.com/users/llStringll/following{/other_user}",
"gists_url": "https://api.github.com/users/llStringll/gists{/gist_id}",
"starred_url": "https://api.github.com/users/llStringll/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/llStringll/subscriptions",
"organizations_url": "https://api.github.com/users/llStringll/orgs",
"repos_url": "https://api.github.com/users/llStringll/repos",
"events_url": "https://api.github.com/users/llStringll/events{/privacy}",
"received_events_url": "https://api.github.com/users/llStringll/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-05-16T14:32:57 | 2023-05-16T14:36:05 | null | NONE | null | ### Describe the bug
All the examples in all the docs mentioned throughout huggingface datasets correspond to datasets object, and not IterableDatasets object. At one point of time, they might have been in sync, but the code for datasets version >=2.9.0 is very different as compared to the docs.
I basically need to .map() a transform on images in an iterable dataset, which was made using a custom databuilder config.
This works very good in map-styles datasets, but the .map() fails in IterableDatasets, show behvaiour as such:
"pixel_values" key not found, KeyError in examples object/dict passed into transform function for map, which works fine with map style, even as batch.
In iterable style, the object/dict passed into map() paramter callable function is completely different as what is mentioned in all examples.
Please look into this. Thank you
My databuilder class is inherited as such:
def _info(self):
print ("Config: ",self.config.__dict__.keys())
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"labels": datasets.Sequence(datasets.Value("uint16")),
# "labels_name": datasets.Value("string"),
# "pixel_values": datasets.Array3D(shape=(3, 1280, 960), dtype="float32"),
"pixel_values": datasets.Array3D(shape=(1280, 960, 3), dtype="uint8"),
"image_s3_path": datasets.Value("string"),
}
),
supervised_keys=None,
homepage="none",
citation="",
)
def _split_generators(self, dl_manager):
records_train = list(db.mini_set.find({'split':'train'},{'image_s3_path':1, 'ocwen_template_name':1}))[:10000]
records_val = list(db.mini_set.find({'split':'val'},{'image_s3_path':1, 'ocwen_template_name':1}))[:1000]
# print (len(records),self.config.num_shards)
# shard_size_train = len(records_train)//self.config.num_shards
# sharded_records_train = [records_train[i:i+shard_size_train] for i in range(0,len(records_train),shard_size_train)]
# shard_size_val = len(records_val)//self.config.num_shards
# sharded_records_val = [records_val[i:i+shard_size_val] for i in range(0,len(records_val),shard_size_val)]
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN, gen_kwargs={"records":records_train} # passing list of records, for sharding to take over
),
datasets.SplitGenerator(
name=datasets.Split.VALIDATION, gen_kwargs={"records":records_val} # passing list of records, for sharding to take over
),
]
def _generate_examples(self, records):
# print ("Generating examples for [{}] shards".format(len(shards)))
# initiate_db_connection()
# records = list(db.mini_set.find({'split':split},{'image_s3_path':1, 'ocwen_template_name':1}))[:10]
id_ = 0
# for records in shards:
for i,rec in enumerate(records):
img_local_path = fetch_file(rec['image_s3_path'],self.config.buffer_dir)
# t = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.squeeze()
# print (t.shape, type(t),type(t[0][0][0]))
# sys.exit()
pvs = np.array(Image.open(img_local_path).resize((1280,960))) # image object is wxh, so resize as per that, numpy array of it is hxwxc, transposing to cxwxh
# pvs = self.config.processor(Image.open(img_local_path), random_padding=True, return_tensors="np").pixel_values.astype(np.float16).squeeze()
# print (type(pvs[0][0][0]))
lblids = self.config.processor.tokenizer('<s_class>'+rec['ocwen_template_name']+'</s_class>'+'</s>', add_special_tokens=False, padding=False, truncation=False, return_tensors="np")["input_ids"].squeeze(0) # take padding later, as per batch collating
# print (len(lblids),type(lblids[0]))
# print (type(pvs),pvs.shape,type(pvs[0][0][0]), type(lblids))
yield id_, {"labels":lblids,"pixel_values":pvs,"image_s3_path":rec['image_s3_path']}
id_+=1
os.remove(img_local_path)
and I load it inside my trainer script as such
`ds = load_dataset("/tmp/DonutDS/dataset/", split="train", streaming=True) # iterable dataset, where .map() falls`
or also as
`ds = load_from_disk('/tmp/DonutDS/dataset/') #map style dataset`
Thank you to the team for having such a great library, and for this bug fix in advance!
### Steps to reproduce the bug
Above config can allow one to reproduce the said bug
### Expected behavior
.map() should show some consistency b/w map-style and iterable-style datasets, or atleast the docs should address iterable-style datasets behaviour and examples. I honestly do not figure the use of such docs.
### Environment info
datasets==2.9.0
transformers==4.26.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5870/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5869/comments | https://api.github.com/repos/huggingface/datasets/issues/5869/events | https://github.com/huggingface/datasets/issues/5869 | 1,711,990,003 | I_kwDODunzps5mCuTz | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | {
"login": "PhilippeMoussalli",
"id": 47530815,
"node_id": "MDQ6VXNlcjQ3NTMwODE1",
"avatar_url": "https://avatars.githubusercontent.com/u/47530815?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PhilippeMoussalli",
"html_url": "https://github.com/PhilippeMoussalli",
"followers_url": "https://api.github.com/users/PhilippeMoussalli/followers",
"following_url": "https://api.github.com/users/PhilippeMoussalli/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilippeMoussalli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PhilippeMoussalli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilippeMoussalli/subscriptions",
"organizations_url": "https://api.github.com/users/PhilippeMoussalli/orgs",
"repos_url": "https://api.github.com/users/PhilippeMoussalli/repos",
"events_url": "https://api.github.com/users/PhilippeMoussalli/events{/privacy}",
"received_events_url": "https://api.github.com/users/PhilippeMoussalli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 16 | 2023-05-16T09:42:58 | 2023-06-16T12:48:38 | 2023-06-16T09:30:48 | NONE | null | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet:
```
import dask.dataframe as dd
df = dd.read_parquet("hf://datasets/lambdalabs/pokemon-blip-captions",index=False)
```
In this dataset, the "image" column is represented as a dictionary/struct with the format:
```
df = df.compute()
df["image"].iloc[0].keys()
-> dict_keys(['bytes', 'path'])
```
I think this is the format encoded by the [`Image`](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.Image) feature extractor from datasets to format suitable for Arrow.
The next step was to push the dataset to a repository that I created:
```
dd.to_parquet(dask_df, path = "hf://datasets/philippemo/dummy_dataset/data")
```
However, after pushing the dataset using Dask, the "image" column is now represented as the encoded dictionary `(['bytes', 'path'])`, and the images are not properly visualized. You can find the dataset here: [Link to the problematic dataset](https://huggingface.co/datasets/philippemo/dummy_dataset).
It's worth noting that both the original dataset and the one submitted with Dask have the same schema with minor alterations related to metadata:
**[ Schema of original dummy example.](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions/blob/main/data/train-00000-of-00001-566cc9b19d7203f8.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
**[ Schema of pushed dataset with dask](https://huggingface.co/datasets/philippemo/dummy_dataset/blob/main/data/part.0.parquet)**
```
image: struct<bytes: binary, path: null>
child 0, bytes: binary
child 1, path: null
text: string
```
This issue seems to be related to an encoding type that occurs when pushing a model to the hub. Normally, models should be represented as an HF dataset before pushing, but we are working with an example where we need to push large datasets using Dask.
Could you please provide clarification on how to resolve this issue?
Thank you!
### Reproduction
To get the schema I downloaded the parquet files and used pyarrow.parquet to read the schema
```
import pyarrow.parquet
pyarrow.parquet.read_schema(<path_to_parquet>, memory_map=True)
```
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.14.1
- Platform: Linux-5.19.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: /home/philippe/.cache/huggingface/token
- Has saved token ?: True
- Who am I ?: philippemo
- Configured git credential helpers: cache
- FastAI: N/A
- Tensorflow: N/A
- Torch: N/A
- Jinja2: 3.1.2
- Graphviz: N/A
- Pydot: N/A
- Pillow: 9.4.0
- hf_transfer: N/A
- gradio: N/A
- ENDPOINT: https://huggingface.co
- HUGGINGFACE_HUB_CACHE: /home/philippe/.cache/huggingface/hub
- HUGGINGFACE_ASSETS_CACHE: /home/philippe/.cache/huggingface/assets
- HF_TOKEN_PATH: /home/philippe/.cache/huggingface/token
- HF_HUB_OFFLINE: False
- HF_HUB_DISABLE_TELEMETRY: False
- HF_HUB_DISABLE_PROGRESS_BARS: None
- HF_HUB_DISABLE_SYMLINKS_WARNING: False
- HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False
- HF_HUB_DISABLE_IMPLICIT_TOKEN: False
- HF_HUB_ENABLE_HF_TRANSFER: False
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5869/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5868/comments | https://api.github.com/repos/huggingface/datasets/issues/5868/events | https://github.com/huggingface/datasets/issues/5868 | 1,711,173,098 | I_kwDODunzps5l_m3q | 5,868 | Is it possible to change a cached file and 're-cache' it instead of re-generating? | {
"login": "zyh3826",
"id": 31238754,
"node_id": "MDQ6VXNlcjMxMjM4NzU0",
"avatar_url": "https://avatars.githubusercontent.com/u/31238754?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zyh3826",
"html_url": "https://github.com/zyh3826",
"followers_url": "https://api.github.com/users/zyh3826/followers",
"following_url": "https://api.github.com/users/zyh3826/following{/other_user}",
"gists_url": "https://api.github.com/users/zyh3826/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zyh3826/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zyh3826/subscriptions",
"organizations_url": "https://api.github.com/users/zyh3826/orgs",
"repos_url": "https://api.github.com/users/zyh3826/repos",
"events_url": "https://api.github.com/users/zyh3826/events{/privacy}",
"received_events_url": "https://api.github.com/users/zyh3826/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-05-16T03:45:42 | 2023-05-17T11:21:36 | 2023-05-17T11:21:36 | NONE | null | ### Feature request
Hi,
I have a huge cached file using `map`(over 500GB), and I want to change an attribution of each element, is there possible to do it using some method instead of re-generating, because `map` takes over 24 hours
### Motivation
For large datasets, I think it is very important because we always face the problem which is changing something in the original cache without re-generating it.
### Your contribution
For now, I can't help, sorry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5868/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5867/comments | https://api.github.com/repos/huggingface/datasets/issues/5867/events | https://github.com/huggingface/datasets/pull/5867 | 1,710,656,067 | PR_kwDODunzps5QizOn | 5,867 | Add logic for hashing modules/functions optimized with `torch.compile` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-05-15T19:03:35 | 2023-05-17T13:41:48 | null | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/5839
PS: The `Pickler.save` method is becoming a bit messy, so I plan to refactor the pickler a bit at some point. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5867/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5867",
"html_url": "https://github.com/huggingface/datasets/pull/5867",
"diff_url": "https://github.com/huggingface/datasets/pull/5867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5867.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5866/comments | https://api.github.com/repos/huggingface/datasets/issues/5866/events | https://github.com/huggingface/datasets/issues/5866 | 1,710,496,993 | I_kwDODunzps5l9Bzh | 5,866 | Issue with Sequence features | {
"login": "alialamiidrissi",
"id": 14365168,
"node_id": "MDQ6VXNlcjE0MzY1MTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alialamiidrissi",
"html_url": "https://github.com/alialamiidrissi",
"followers_url": "https://api.github.com/users/alialamiidrissi/followers",
"following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}",
"gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions",
"organizations_url": "https://api.github.com/users/alialamiidrissi/orgs",
"repos_url": "https://api.github.com/users/alialamiidrissi/repos",
"events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}",
"received_events_url": "https://api.github.com/users/alialamiidrissi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-15T17:13:29 | 2023-05-26T11:57:17 | 2023-05-26T11:57:17 | NONE | null | ### Describe the bug
Sequences features sometimes causes errors when the specified length is not -1
### Steps to reproduce the bug
```python
import numpy as np
from datasets import Features, ClassLabel, Sequence, Value, Dataset
feats = Features(**{'target': ClassLabel(names=[0, 1]),'x': Sequence(feature=Value(dtype='float64',id=None), length=2, id=None)})
Dataset.from_dict({"target": np.ones(2000).astype(int), "x": np.random.rand(2000,2)},features = feats).flatten_indices()
```
Throws:
```
TypeError: Couldn't cast array of type
fixed_size_list<item: double>[2]
to
Sequence(feature=Value(dtype='float64', id=None), length=2, id=None)
```
The same code works without any issues when `length = -1`
EDIT: The error seems to happen only when the length of the dataset is bigger than 1000 for some reason
### Expected behavior
No exception
### Environment info
- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5866/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5865/comments | https://api.github.com/repos/huggingface/datasets/issues/5865/events | https://github.com/huggingface/datasets/pull/5865 | 1,710,455,738 | PR_kwDODunzps5QiHnw | 5,865 | Deprecate task api | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-05-15T16:48:24 | 2023-07-10T12:33:59 | 2023-07-10T12:24:01 | CONTRIBUTOR | null | The task API is not well adopted in the ecosystem, so this PR deprecates it. The `train_eval_index` is a newer, more flexible solution that should be used instead (I think?).
These are the projects that still use the task API :
* the image classification example in Transformers: [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/pytorch/image-classification/run_image_classification_no_trainer.py#L262) and [here](https://github.com/huggingface/transformers/blob/8f76dc8e5aaad58f2df7748b6d6970376f315a9a/examples/tensorflow/image-classification/run_image_classification.py#L277)
* autotrain: [here](https://github.com/huggingface/autotrain-backend/blob/455e274004b56f9377d64db4ab03671508fcc4cd/zeus/zeus/run/utils.py#L666)
* api-inference-community: [here](https://github.com/huggingface/api-inference-community/blob/fb8fb29d577a5bf01c82944db745489a6d6ed3d4/manage.py#L64) (but the rest of the code does not call the `resolve_dataset` function)
So we need to update these files after the merge.
cc @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5865/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5865",
"html_url": "https://github.com/huggingface/datasets/pull/5865",
"diff_url": "https://github.com/huggingface/datasets/pull/5865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5865.patch",
"merged_at": "2023-07-10T12:24:01"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5864/comments | https://api.github.com/repos/huggingface/datasets/issues/5864/events | https://github.com/huggingface/datasets/issues/5864 | 1,710,450,047 | I_kwDODunzps5l82V_ | 5,864 | Slow iteration over Torch tensors | {
"login": "crisostomi",
"id": 51738205,
"node_id": "MDQ6VXNlcjUxNzM4MjA1",
"avatar_url": "https://avatars.githubusercontent.com/u/51738205?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/crisostomi",
"html_url": "https://github.com/crisostomi",
"followers_url": "https://api.github.com/users/crisostomi/followers",
"following_url": "https://api.github.com/users/crisostomi/following{/other_user}",
"gists_url": "https://api.github.com/users/crisostomi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/crisostomi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/crisostomi/subscriptions",
"organizations_url": "https://api.github.com/users/crisostomi/orgs",
"repos_url": "https://api.github.com/users/crisostomi/repos",
"events_url": "https://api.github.com/users/crisostomi/events{/privacy}",
"received_events_url": "https://api.github.com/users/crisostomi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-05-15T16:43:58 | 2023-05-16T03:27:38 | null | NONE | null | ### Describe the bug
I have a problem related to this [issue](https://github.com/huggingface/datasets/issues/5841): I get a way slower iteration when using a Torch dataloader if I use vanilla Numpy tensors or if I first apply a ToTensor transform to the input. In particular, it takes 5 seconds to iterate over the vanilla input and ~30s after the transformation.
### Steps to reproduce the bug
Here is the minimum code to reproduce the problem
```python
import numpy as np
from datasets import Dataset, DatasetDict, load_dataset, Array3D, Image, Features
from torch.utils.data import DataLoader
from tqdm import tqdm
import torchvision
from torchvision.transforms import ToTensor, Normalize
#################################
# Without transform
#################################
train_dataset = load_dataset(
'cifar100',
split='train',
use_auth_token=True,
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data, no transform"):
pass
#################################
# With transform
#################################
transform_func = torchvision.transforms.Compose([
ToTensor(),
Normalize(mean=[0.485, 0.456, 0.406], std= [0.229, 0.224, 0.225]),]
)
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"img": transform_func(x["img"])},
)
train_dataset.set_format(type="numpy", columns=["img", "fine_label"])
train_loader= DataLoader(
train_dataset,
batch_size=100,
pin_memory=False,
shuffle=True,
num_workers=8,
)
for batch in tqdm(train_loader, desc="Loading data after transform"):
pass
```
I have also tried converting the Image column to an Array3D
```python
img_shape = train_dataset[0]["img"].shape
features = train_dataset.features.copy()
features["x"] = Array3D(shape=img_shape, dtype="float32")
train_dataset = train_dataset.map(
desc=f"Preprocessing samples",
function=lambda x: {"x": np.array(x["img"], dtype=np.uint8)},
features=features,
)
train_dataset.cast_column("x", Array3D(shape=img_shape, dtype="float32"))
train_dataset.set_format(type="numpy", columns=["x", "fine_label"])
```
but to no avail. Any clue?
### Expected behavior
The iteration should take approximately the same time with or without the transformation, as it doesn't change the shape of the input. What may be the issue here?
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5864/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5863/comments | https://api.github.com/repos/huggingface/datasets/issues/5863/events | https://github.com/huggingface/datasets/pull/5863 | 1,710,335,905 | PR_kwDODunzps5QhtlM | 5,863 | Use a new low-memory approach for tf dataset index shuffling | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 36 | 2023-05-15T15:28:34 | 2023-06-08T16:40:18 | 2023-06-08T16:32:51 | MEMBER | null | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5863/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5863",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"merged_at": "2023-06-08T16:32:50"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5862/comments | https://api.github.com/repos/huggingface/datasets/issues/5862/events | https://github.com/huggingface/datasets/issues/5862 | 1,710,140,646 | I_kwDODunzps5l7qzm | 5,862 | IndexError: list index out of range with data hosted on Zenodo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-05-15T13:47:19 | 2023-06-16T14:54:02 | null | MEMBER | null | The dataset viewer sometimes raises an `IndexError`:
```
IndexError: list index out of range
```
See:
- huggingface/datasets-server#1151
- https://huggingface.co/datasets/reddit/discussions/5
- huggingface/datasets-server#1118
- https://huggingface.co/datasets/krr-oxford/OntoLAMA/discussions/1
- https://huggingface.co/datasets/hyperpartisan_news_detection/discussions/3
- https://huggingface.co/datasets/um005/discussions/2
- https://huggingface.co/datasets/tapaco/discussions/2
- https://huggingface.co/datasets/common_language/discussions/3
- https://huggingface.co/datasets/pass/discussions/1
After investigation:
- This happens with data files hosted on Zenodo
- Indeed, there is an underlying 429 HTTP error: Too Many Requests
Note that some time ago, it also happened with data files hosted on Google Drive. See:
- #4581
- #4580
The reason then was that there was a 403 HTTP error: Forbidden
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5862/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-05-15T10:36:24 | 2023-05-23T10:40:13 | 2023-05-23T10:32:58 | MEMBER | null | close https://github.com/huggingface/datasets/issues/5851 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"merged_at": "2023-05-23T10:32:58"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5860/comments | https://api.github.com/repos/huggingface/datasets/issues/5860/events | https://github.com/huggingface/datasets/pull/5860 | 1,709,727,460 | PR_kwDODunzps5QfojD | 5,860 | Minor tqdm optim | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-15T09:49:37 | 2023-05-17T18:46:46 | 2023-05-17T18:39:35 | MEMBER | null | Don't create a tqdm progress bar when `disable_tqdm` is passed to `map_nested`.
On my side it sped up some iterable datasets by ~30% when `map_nested` is used extensively to recursively tensorize python dicts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5860/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5860",
"html_url": "https://github.com/huggingface/datasets/pull/5860",
"diff_url": "https://github.com/huggingface/datasets/pull/5860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5860.patch",
"merged_at": "2023-05-17T18:39:35"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5859/comments | https://api.github.com/repos/huggingface/datasets/issues/5859/events | https://github.com/huggingface/datasets/pull/5859 | 1,709,554,829 | PR_kwDODunzps5QfDLC | 5,859 | Raise TypeError when indexing a dataset with bool | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 7 | 2023-05-15T08:08:42 | 2023-05-25T16:31:24 | 2023-05-25T16:23:17 | MEMBER | null | Fix #5858. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5859/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5859",
"html_url": "https://github.com/huggingface/datasets/pull/5859",
"diff_url": "https://github.com/huggingface/datasets/pull/5859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5859.patch",
"merged_at": "2023-05-25T16:23:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5858/comments | https://api.github.com/repos/huggingface/datasets/issues/5858/events | https://github.com/huggingface/datasets/issues/5858 | 1,709,332,632 | I_kwDODunzps5l4liY | 5,858 | Throw an error when dataset improperly indexed | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarahwie/followers",
"following_url": "https://api.github.com/users/sarahwie/following{/other_user}",
"gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions",
"organizations_url": "https://api.github.com/users/sarahwie/orgs",
"repos_url": "https://api.github.com/users/sarahwie/repos",
"events_url": "https://api.github.com/users/sarahwie/events{/privacy}",
"received_events_url": "https://api.github.com/users/sarahwie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-05-15T05:15:53 | 2023-05-25T16:23:19 | 2023-05-25T16:23:19 | NONE | null | ### Describe the bug
Pandas-style subset indexing on dataset does not throw an error, when maybe it should. Instead returns the first instance of the dataset regardless of index condition.
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. `squad = datasets.load_dataset("squad_v2", split="validation")`
2. `item = squad[squad['question'] == "Who was the Norse leader?"]`
or `it = squad[squad['id'] == '56ddde6b9a695914005b962b']`
3. returns the first item in the dataset, which does not satisfy the above conditions:
`{'id': '56ddde6b9a695914005b9628', 'title': 'Normans', 'context': 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.', 'question': 'In what country is Normandy located?', 'answers': {'text': ['France', 'France', 'France', 'France'], 'answer_start': [159, 159, 159, 159]}}`
### Expected behavior
Should either throw an error message, or return the dataset item that satisfies the condition.
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5858/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5857/comments | https://api.github.com/repos/huggingface/datasets/issues/5857/events | https://github.com/huggingface/datasets/issues/5857 | 1,709,326,622 | I_kwDODunzps5l4kEe | 5,857 | Adding chemistry dataset/models in huggingface | {
"login": "knc6",
"id": 16902896,
"node_id": "MDQ6VXNlcjE2OTAyODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/16902896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/knc6",
"html_url": "https://github.com/knc6",
"followers_url": "https://api.github.com/users/knc6/followers",
"following_url": "https://api.github.com/users/knc6/following{/other_user}",
"gists_url": "https://api.github.com/users/knc6/gists{/gist_id}",
"starred_url": "https://api.github.com/users/knc6/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/knc6/subscriptions",
"organizations_url": "https://api.github.com/users/knc6/orgs",
"repos_url": "https://api.github.com/users/knc6/repos",
"events_url": "https://api.github.com/users/knc6/events{/privacy}",
"received_events_url": "https://api.github.com/users/knc6/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-05-15T05:09:49 | 2023-07-21T13:45:40 | 2023-07-21T13:45:40 | NONE | null | ### Feature request
Huggingface is really amazing platform for open science.
In addition to computer vision, video and NLP, would it be of interest to add chemistry/materials science dataset/models in Huggingface? Or, if its already done, can you provide some pointers.
We have been working on a comprehensive benchmark on this topic: [JARVIS-Leaderboard](https://pages.nist.gov/jarvis_leaderboard/) and I am wondering if we could contribute/integrate this project as a part of huggingface.
### Motivation
Similar to the main stream AI field, there is need of large scale benchmarks/models/infrastructure for chemistry/materials data.
### Your contribution
We can start adding datasets as our [benchmarks](https://github.com/usnistgov/jarvis_leaderboard/tree/main/jarvis_leaderboard/benchmarks) should be easily convertible to the dataset format. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5857/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5856/comments | https://api.github.com/repos/huggingface/datasets/issues/5856/events | https://github.com/huggingface/datasets/issues/5856 | 1,709,218,242 | I_kwDODunzps5l4JnC | 5,856 | Error loading natural_questions | {
"login": "Crownor",
"id": 19185508,
"node_id": "MDQ6VXNlcjE5MTg1NTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/19185508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crownor",
"html_url": "https://github.com/Crownor",
"followers_url": "https://api.github.com/users/Crownor/followers",
"following_url": "https://api.github.com/users/Crownor/following{/other_user}",
"gists_url": "https://api.github.com/users/Crownor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crownor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crownor/subscriptions",
"organizations_url": "https://api.github.com/users/Crownor/orgs",
"repos_url": "https://api.github.com/users/Crownor/repos",
"events_url": "https://api.github.com/users/Crownor/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crownor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-15T02:46:04 | 2023-06-05T09:11:19 | 2023-06-05T09:11:18 | NONE | null | ### Describe the bug
When try to load natural_questions through datasets == 2.12.0 with python == 3.8.9:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
It failed with following info:
`pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs`
### Steps to reproduce the bug
In python console:
```python
import datasets
datasets.load_dataset('natural_questions',beam_runner='DirectRunner')
```
Then the trace is:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/builder.py", line 2019, in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 694, in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
File "/home/nlp/.cache/pypoetry/virtualenvs/drg-W3LF4Ol9-py3.8/lib/python3.8/site-packages/datasets/arrow_writer.py", line 737, in parquet_to_arrow
for record_batch in parquet_file.iter_batches():
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
### Expected behavior
load natural_question questions
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.9
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5856/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5855/comments | https://api.github.com/repos/huggingface/datasets/issues/5855/events | https://github.com/huggingface/datasets/issues/5855 | 1,708,784,943 | I_kwDODunzps5l2f0v | 5,855 | `to_tf_dataset` consumes too much memory | {
"login": "massquantity",
"id": 28751760,
"node_id": "MDQ6VXNlcjI4NzUxNzYw",
"avatar_url": "https://avatars.githubusercontent.com/u/28751760?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/massquantity",
"html_url": "https://github.com/massquantity",
"followers_url": "https://api.github.com/users/massquantity/followers",
"following_url": "https://api.github.com/users/massquantity/following{/other_user}",
"gists_url": "https://api.github.com/users/massquantity/gists{/gist_id}",
"starred_url": "https://api.github.com/users/massquantity/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/massquantity/subscriptions",
"organizations_url": "https://api.github.com/users/massquantity/orgs",
"repos_url": "https://api.github.com/users/massquantity/repos",
"events_url": "https://api.github.com/users/massquantity/events{/privacy}",
"received_events_url": "https://api.github.com/users/massquantity/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-05-14T01:22:29 | 2023-06-08T16:32:52 | 2023-06-08T16:32:52 | NONE | null | ### Describe the bug
Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.
After some digging, i believe the reason lies in the shuffle behavior. The [source code](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L185) uses `len(dataset)` as the `buffer_size`, which may load all the data into the memory, and the [tf.data doc](https://www.tensorflow.org/guide/data#randomly_shuffling_input_data) also states that "While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill".
### Steps to reproduce the bug
```python
from datasets import Dataset
def gen(): # some large data
for i in range(50000000):
yield {"data": i}
ds = Dataset.from_generator(gen, cache_dir="./huggingface")
tf_ds = ds.to_tf_dataset(
batch_size=64,
shuffle=False, # no shuffle
drop_remainder=False,
prefetch=True,
)
# fast and memory friendly 🤗
for batch in tf_ds:
...
tf_ds_shuffle = ds.to_tf_dataset(
batch_size=64,
shuffle=True,
drop_remainder=False,
prefetch=True,
)
# slow and memory hungry for simple iteration 😱
for batch in tf_ds_shuffle:
...
```
### Expected behavior
Shuffling should not load all the data into the memory. Would adding a `buffer_size` parameter in the `to_tf_dataset` API alleviate the problem?
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.17.1-051701-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5855/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5854/comments | https://api.github.com/repos/huggingface/datasets/issues/5854/events | https://github.com/huggingface/datasets/issues/5854 | 1,708,779,300 | I_kwDODunzps5l2eck | 5,854 | Can not load audiofolder dataset on kaggle | {
"login": "ILG2021",
"id": 93691919,
"node_id": "U_kgDOBZWgDw",
"avatar_url": "https://avatars.githubusercontent.com/u/93691919?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ILG2021",
"html_url": "https://github.com/ILG2021",
"followers_url": "https://api.github.com/users/ILG2021/followers",
"following_url": "https://api.github.com/users/ILG2021/following{/other_user}",
"gists_url": "https://api.github.com/users/ILG2021/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ILG2021/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ILG2021/subscriptions",
"organizations_url": "https://api.github.com/users/ILG2021/orgs",
"repos_url": "https://api.github.com/users/ILG2021/repos",
"events_url": "https://api.github.com/users/ILG2021/events{/privacy}",
"received_events_url": "https://api.github.com/users/ILG2021/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-14T00:50:47 | 2023-08-16T13:35:36 | 2023-07-21T13:53:45 | NONE | null | ### Describe the bug
It's crash log:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/audiofolder/audiofolder.py or any data file in the same directory. Couldn't find 'audiofolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/audiofolder/audiofolder.py
### Steps to reproduce the bug
![image](https://github.com/huggingface/datasets/assets/93691919/a2829d27-d15c-4acc-86fb-d1987c760468)
common_voice = load_dataset("audiofolder", data_dir="/kaggle/working/data")
### Expected behavior
load dataset without error. It works ok on colab, but on kaggle it happends.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.31
- Python version: 3.10.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5854/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5853/comments | https://api.github.com/repos/huggingface/datasets/issues/5853/events | https://github.com/huggingface/datasets/pull/5853 | 1,708,092,786 | PR_kwDODunzps5QaZLP | 5,853 | [docs] Redirects, migrated from nginx | {
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-12T19:19:27 | 2023-05-15T10:37:19 | 2023-05-15T10:30:14 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5853/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5853",
"html_url": "https://github.com/huggingface/datasets/pull/5853",
"diff_url": "https://github.com/huggingface/datasets/pull/5853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5853.patch",
"merged_at": "2023-05-15T10:30:14"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5852/comments | https://api.github.com/repos/huggingface/datasets/issues/5852/events | https://github.com/huggingface/datasets/pull/5852 | 1,707,927,165 | PR_kwDODunzps5QZ1lj | 5,852 | Iterable torch formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 11 | 2023-05-12T16:48:49 | 2023-06-13T16:04:05 | 2023-06-13T15:57:05 | MEMBER | null | Used the TorchFormatter to get torch tensors in iterable dataset with format set to "torch".
It uses the data from Arrow if possible, otherwise applies recursive_tensorize.
When set back to format_type=None, cast_to_python_objects is used.
requires https://github.com/huggingface/datasets/pull/5821
close https://github.com/huggingface/datasets/issues/5793 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5852/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5852",
"html_url": "https://github.com/huggingface/datasets/pull/5852",
"diff_url": "https://github.com/huggingface/datasets/pull/5852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5852.patch",
"merged_at": "2023-06-13T15:57:05"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5850/comments | https://api.github.com/repos/huggingface/datasets/issues/5850/events | https://github.com/huggingface/datasets/pull/5850 | 1,707,678,911 | PR_kwDODunzps5QZALv | 5,850 | Make packaged builders skip non-supported file formats | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 10 | 2023-05-12T13:52:34 | 2023-06-07T12:26:38 | null | MEMBER | null | This PR makes packaged builders skip non-supported file formats:
- Csv builder skips non-CSV files
- Analogously for the other builders
Fix #5849. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5850/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5850",
"html_url": "https://github.com/huggingface/datasets/pull/5850",
"diff_url": "https://github.com/huggingface/datasets/pull/5850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5850.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5849/comments | https://api.github.com/repos/huggingface/datasets/issues/5849/events | https://github.com/huggingface/datasets/issues/5849 | 1,707,551,511 | I_kwDODunzps5lxysX | 5,849 | CSV datasets should only read the CSV data files in the repo | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-05-12T12:29:53 | 2023-06-22T14:16:27 | 2023-06-22T14:16:27 | MEMBER | null | When a no-script dataset has many CSV files and a JPG file, the library infers to use the Csv builder, but tries to read as CSV all files in the repo, also the JPG file.
I think the Csv builder should filter out non-CSV files when reading.
An analogue solution should be implemented for other packaged builders.
Related to:
- https://huggingface.co/datasets/abidlabs/img2text/discussions/1
- https://github.com/gradio-app/gradio/pull/3973#issuecomment-1545409061
CC: @abidlabs @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5849/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5849/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5848/comments | https://api.github.com/repos/huggingface/datasets/issues/5848/events | https://github.com/huggingface/datasets/pull/5848 | 1,707,506,734 | PR_kwDODunzps5QYa1B | 5,848 | Add `accelerate` as metric's test dependency to fix CI error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-12T12:01:01 | 2023-05-12T13:48:47 | 2023-05-12T13:39:06 | CONTRIBUTOR | null | The `frugalscore` metric uses Transformers' Trainer, which requires `accelerate` (as of recently).
Fixes the following [CI error](https://github.com/huggingface/datasets/actions/runs/4950900048/jobs/8855148703?pr=5845). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5848/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5848",
"html_url": "https://github.com/huggingface/datasets/pull/5848",
"diff_url": "https://github.com/huggingface/datasets/pull/5848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5848.patch",
"merged_at": "2023-05-12T13:39:06"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5847/comments | https://api.github.com/repos/huggingface/datasets/issues/5847/events | https://github.com/huggingface/datasets/issues/5847 | 1,706,616,634 | I_kwDODunzps5luOc6 | 5,847 | Streaming IterableDataset not working with translation pipeline | {
"login": "jlquinn",
"id": 826841,
"node_id": "MDQ6VXNlcjgyNjg0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/826841?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jlquinn",
"html_url": "https://github.com/jlquinn",
"followers_url": "https://api.github.com/users/jlquinn/followers",
"following_url": "https://api.github.com/users/jlquinn/following{/other_user}",
"gists_url": "https://api.github.com/users/jlquinn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jlquinn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlquinn/subscriptions",
"organizations_url": "https://api.github.com/users/jlquinn/orgs",
"repos_url": "https://api.github.com/users/jlquinn/repos",
"events_url": "https://api.github.com/users/jlquinn/events{/privacy}",
"received_events_url": "https://api.github.com/users/jlquinn/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 8 | 2023-05-11T21:52:38 | 2023-05-16T15:59:55 | null | NONE | null | ### Describe the bug
I'm trying to use a streaming dataset for translation inference to avoid downloading the training data.
I'm using a pipeline and a dataset, and following the guidance in the tutorial.
Instead I get an exception that IterableDataset has no len().
### Steps to reproduce the bug
CODE:
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
from datasets import load_dataset
ds = load_dataset(path="wmt14", name="fr-en", split="test", streaming=True)
bs=1
mt = pipeline("translation_en_to_fr", model="t5-base", batch_size=bs)
#print(mt("hello")) THIS WORKS
ks = KeyDataset(ds, "translation")
print(f"{ks}")
xx= mt(ks)
for x in xx:
print(x)
```
RUN:
```
(watnlp) [jlquinn@bertdev01 hf]$ python ende.t5.pipe.py
2023-05-11 16:48:08.817572: I tensorflow/core/util/util.cc:169] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-05-11 16:48:08.821388: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2023-05-11 16:48:08.821407: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
<transformers.pipelines.pt_utils.KeyDataset object at 0x7f61ed5da9d0>
Traceback (most recent call last):
File "/home/jlquinn/models/hf/ende.t5.pipe.py", line 11, in <module>
for x in xx:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 111, in __next__
item = next(self.iterator)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 720, in _next_data
index = self._next_index() # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 671, in _next_index
return next(self._sampler_iter) # may raise StopIteration
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 247, in __iter__
for idx in self.sampler:
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/torch/utils/data/sampler.py", line 76, in __iter__
return iter(range(len(self.data_source)))
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 13, in __len__
return len(self.dataset)
File "/home/jlquinn/miniconda3/envs/watnlp/lib/python3.9/site-packages/transformers/pipelines/pt_utils.py", line 289, in __len__
return len(self.dataset)
TypeError: object of type 'IterableDataset' has no len()
```
### Expected behavior
I'm expecting french translations of the english test set to be printed.
### Environment info
Run on CPU with no GPU.
RHEL 8.7 x86_64
python 3.9.0
transformers 4.17.0
datasets 2.0.0
tokenizers 0.12.1
```
(watnlp) [jlquinn@bertdev01 hf]$ datasets-cli env
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-372.19.1.el8_6.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.0
- PyArrow version: 8.0.0
- Pandas version: 1.4.4
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5847/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5851/comments | https://api.github.com/repos/huggingface/datasets/issues/5851/events | https://github.com/huggingface/datasets/issues/5851 | 1,707,907,048 | I_kwDODunzps5lzJfo | 5,851 | Error message not clear in interleaving datasets | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-05-11T20:52:13 | 2023-05-23T10:32:59 | 2023-05-23T10:32:59 | NONE | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I'm trying to interleave 'sciq', 'wiki' and the 'pile-enron' dataset. I think the error I made was that I loaded the train split of one, but for the other but the error is not too helpful-
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[/home/suryahari/Vornoi/save_model_ops.py](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/Vornoi/save_model_ops.py) in line 3
[41](file:///home/suryahari/Vornoi/save_model_ops.py?line=40) # %%
----> [43](file:///home/suryahari/Vornoi/save_model_ops.py?line=42) dataset = interleave_datasets(datasets, stopping_strategy="all_exhausted")
File [~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124](https://vscode-remote+ssh-002dremote-002bthomsonlab-002d2-002ejamesgornet-002ecom.vscode-resource.vscode-cdn.net/home/suryahari/~/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py:124), in interleave_datasets(datasets, probabilities, seed, info, split, stopping_strategy)
[122](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=121) for dataset in datasets[1:]:
[123](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=122) if (map_style and not isinstance(dataset, Dataset)) or (iterable and not isinstance(dataset, IterableDataset)):
--> [124](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=123) raise ValueError(
[125](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=124) f"Unable to interleave a {type(datasets[0])} with a {type(dataset)}. Expected a list of Dataset objects or a list of IterableDataset objects."
[126](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=125) )
[127](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=126) if stopping_strategy not in ["first_exhausted", "all_exhausted"]:
[128](file:///home/suryahari/miniconda3/envs/vornoi/lib/python3.10/site-packages/datasets/combine.py?line=127) raise ValueError(f"{stopping_strategy} is not supported. Please enter a valid stopping_strategy.")
ValueError: Unable to interleave a with a . Expected a list of Dataset objects or a list of IterableDataset objects.
```
### Expected behavior
the error message should hopefully be more clear | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5851/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5846/comments | https://api.github.com/repos/huggingface/datasets/issues/5846/events | https://github.com/huggingface/datasets/issues/5846 | 1,706,289,290 | I_kwDODunzps5ls-iK | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | {
"login": "tbenthompson",
"id": 4241811,
"node_id": "MDQ6VXNlcjQyNDE4MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4241811?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tbenthompson",
"html_url": "https://github.com/tbenthompson",
"followers_url": "https://api.github.com/users/tbenthompson/followers",
"following_url": "https://api.github.com/users/tbenthompson/following{/other_user}",
"gists_url": "https://api.github.com/users/tbenthompson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tbenthompson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tbenthompson/subscriptions",
"organizations_url": "https://api.github.com/users/tbenthompson/orgs",
"repos_url": "https://api.github.com/users/tbenthompson/repos",
"events_url": "https://api.github.com/users/tbenthompson/events{/privacy}",
"received_events_url": "https://api.github.com/users/tbenthompson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | 4 | 2023-05-11T17:58:57 | 2023-05-16T03:23:46 | null | NONE | null | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
### Environment info
- `datasets` version: 2.11.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5846/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-05-11T17:29:57 | 2023-05-15T07:39:13 | 2023-05-12T15:14:48 | CONTRIBUTOR | null | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"merged_at": "2023-05-12T15:14:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5844/comments | https://api.github.com/repos/huggingface/datasets/issues/5844/events | https://github.com/huggingface/datasets/issues/5844 | 1,705,907,812 | I_kwDODunzps5lrhZk | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | {
"login": "chen-coding",
"id": 54010030,
"node_id": "MDQ6VXNlcjU0MDEwMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/54010030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chen-coding",
"html_url": "https://github.com/chen-coding",
"followers_url": "https://api.github.com/users/chen-coding/followers",
"following_url": "https://api.github.com/users/chen-coding/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-coding/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chen-coding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-coding/subscriptions",
"organizations_url": "https://api.github.com/users/chen-coding/orgs",
"repos_url": "https://api.github.com/users/chen-coding/repos",
"events_url": "https://api.github.com/users/chen-coding/events{/privacy}",
"received_events_url": "https://api.github.com/users/chen-coding/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 0 | 2023-05-11T14:15:01 | 2023-05-11T14:15:01 | null | NONE | null | ### Describe the bug
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
When I use _load_dataset()_ I get the error
`from datasets import load_dataset
datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
`
Detailed error information is as follows:
Traceback (most recent call last):
File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module>
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset
builder_instance.download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split
writer.write_table(table)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table
pa_table = table_cast(pa_table, self._schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast
return cast_table_to_schema(table, schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
It is successful when I load the data separately
`raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")`
### Steps to reproduce the bug
1.from datasets import load_dataset
2.datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
3.raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
### Expected behavior
Successfully load dataset
### Environment info
datasets == 2.6.1
pyarrow == 8.0.0
python == 3.8
platform:windows11 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5844/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Abusurdly slow on iteration | {
"login": "fecet",
"id": 41792945,
"node_id": "MDQ6VXNlcjQxNzkyOTQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/41792945?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fecet",
"html_url": "https://github.com/fecet",
"followers_url": "https://api.github.com/users/fecet/followers",
"following_url": "https://api.github.com/users/fecet/following{/other_user}",
"gists_url": "https://api.github.com/users/fecet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fecet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fecet/subscriptions",
"organizations_url": "https://api.github.com/users/fecet/orgs",
"repos_url": "https://api.github.com/users/fecet/repos",
"events_url": "https://api.github.com/users/fecet/events{/privacy}",
"received_events_url": "https://api.github.com/users/fecet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-11T08:04:09 | 2023-05-15T15:38:13 | 2023-05-15T15:38:13 | NONE | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a=torch.randn(3,224,224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
### Steps to reproduce the bug
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
### Expected behavior
iteration faster
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5840/comments | https://api.github.com/repos/huggingface/datasets/issues/5840/events | https://github.com/huggingface/datasets/issues/5840 | 1,705,212,085 | I_kwDODunzps5lo3i1 | 5,840 | load model error. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-11T07:12:38 | 2023-05-12T13:44:07 | 2023-05-12T13:44:06 | NONE | null | ### Describe the bug
I had trained one model use deepspeed, when I load the final load I get the follow error:
OSError: Can't load tokenizer for '/XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/home/fm001/hzl/Project/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor' is the correct path to a directory containing all relevant files for a BloomTokenizerFast tokenizer.
my load code is : python chat.py --path /XXX/DeepSpeedExamples/applications/DeepSpeed-Chat/output/step3-models/1.3b/actor/
### Steps to reproduce the bug
。。。
### Expected behavior
。。。
### Environment info
。。。 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5840/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5842/comments | https://api.github.com/repos/huggingface/datasets/issues/5842/events | https://github.com/huggingface/datasets/issues/5842 | 1,705,510,602 | I_kwDODunzps5lqAbK | 5,842 | Remove columns in interable dataset | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-11T03:48:46 | 2023-06-21T16:36:42 | 2023-06-21T16:36:41 | NONE | null | ### Feature request
Right now, remove_columns() produces a NotImplementedError for iterable style datasets
### Motivation
It would be great to have the same functionality irrespective of whether one is using an iterable or a map-style dataset
### Your contribution
hope and courage. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5842/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5843/comments | https://api.github.com/repos/huggingface/datasets/issues/5843/events | https://github.com/huggingface/datasets/issues/5843 | 1,705,514,551 | I_kwDODunzps5lqBY3 | 5,843 | Can't add iterable datasets to a Dataset Dict. | {
"login": "surya-narayanan",
"id": 17240858,
"node_id": "MDQ6VXNlcjE3MjQwODU4",
"avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surya-narayanan",
"html_url": "https://github.com/surya-narayanan",
"followers_url": "https://api.github.com/users/surya-narayanan/followers",
"following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}",
"gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions",
"organizations_url": "https://api.github.com/users/surya-narayanan/orgs",
"repos_url": "https://api.github.com/users/surya-narayanan/repos",
"events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}",
"received_events_url": "https://api.github.com/users/surya-narayanan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-11T02:09:29 | 2023-05-25T04:51:59 | 2023-05-25T04:51:59 | NONE | null | ### System Info
standard env
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Get the following error:
TypeError: Values in `DatasetDict` should be of type `Dataset` but got type '<class 'datasets.iterable_dataset.IterableDataset'>'
### Expected behavior
should be able to add iterable datasets to a dataset dict | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5843/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5839/comments | https://api.github.com/repos/huggingface/datasets/issues/5839/events | https://github.com/huggingface/datasets/issues/5839 | 1,704,554,718 | I_kwDODunzps5lmXDe | 5,839 | Make models/functions optimized with `torch.compile` hashable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-05-10T20:02:08 | 2023-05-10T20:02:08 | null | CONTRIBUTOR | null | As reported in https://github.com/huggingface/datasets/issues/5819, hashing functions/transforms that reference a model, or a function, optimized with `torch.compile` currently fails due to them not being picklable (the concrete error can be found in the linked issue).
The solutions to consider:
1. hashing/pickling the original, uncompiled version of a compiled model/function (attributes `_orig_mod`/`_torchdynamo_orig_callable`) (less precise than the 2nd option as it ignores the other params of `torch.compute`)
2. wait for https://github.com/pytorch/pytorch/issues/101107 to be resolved
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5839/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5838/comments | https://api.github.com/repos/huggingface/datasets/issues/5838/events | https://github.com/huggingface/datasets/issues/5838 | 1,703,210,848 | I_kwDODunzps5lhO9g | 5,838 | Streaming support for `load_from_disk` | {
"login": "Nilabhra",
"id": 5437792,
"node_id": "MDQ6VXNlcjU0Mzc3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5437792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Nilabhra",
"html_url": "https://github.com/Nilabhra",
"followers_url": "https://api.github.com/users/Nilabhra/followers",
"following_url": "https://api.github.com/users/Nilabhra/following{/other_user}",
"gists_url": "https://api.github.com/users/Nilabhra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Nilabhra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Nilabhra/subscriptions",
"organizations_url": "https://api.github.com/users/Nilabhra/orgs",
"repos_url": "https://api.github.com/users/Nilabhra/repos",
"events_url": "https://api.github.com/users/Nilabhra/events{/privacy}",
"received_events_url": "https://api.github.com/users/Nilabhra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 10 | 2023-05-10T06:25:22 | 2023-05-12T09:37:45 | 2023-05-12T09:37:45 | NONE | null | ### Feature request
Support for streaming datasets stored in object stores in `load_from_disk`.
### Motivation
The `load_from_disk` function supports fetching datasets stored in object stores such as `s3`. In many cases, the datasets that are stored in object stores are very large and being able to stream the data from the buckets becomes essential.
### Your contribution
I'd be happy to contribute this feature if I could get the guidance on how to do so. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5838/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5837/comments | https://api.github.com/repos/huggingface/datasets/issues/5837/events | https://github.com/huggingface/datasets/issues/5837 | 1,703,019,816 | I_kwDODunzps5lggUo | 5,837 | Use DeepSpeed load myself " .csv " dataset. | {
"login": "LanShanPi",
"id": 58167546,
"node_id": "MDQ6VXNlcjU4MTY3NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/58167546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LanShanPi",
"html_url": "https://github.com/LanShanPi",
"followers_url": "https://api.github.com/users/LanShanPi/followers",
"following_url": "https://api.github.com/users/LanShanPi/following{/other_user}",
"gists_url": "https://api.github.com/users/LanShanPi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LanShanPi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LanShanPi/subscriptions",
"organizations_url": "https://api.github.com/users/LanShanPi/orgs",
"repos_url": "https://api.github.com/users/LanShanPi/repos",
"events_url": "https://api.github.com/users/LanShanPi/events{/privacy}",
"received_events_url": "https://api.github.com/users/LanShanPi/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-05-10T02:39:28 | 2023-05-15T03:51:36 | null | NONE | null | ### Describe the bug
When I use DeepSpeed train a model with my own " XXX.csv" dataset I got the follow question:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1767, in load_dataset
builder_instance = load_dataset_builder(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1498, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/fm001/.conda/envs/hzl/lib/python3.8/site-packages/datasets/load.py", line 1217, in dataset_module_factory
raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /home/fm001/hzl/Data/qa.csv/qa.csv.py or any data file in the same directory.
### Steps to reproduce the bug
my code is :
from datasets import load_dataset
mydata = load_dataset("/home/fm001/hzl/Data/qa.csv")
### Expected behavior
。。。
### Environment info
。。。 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5837/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5836/comments | https://api.github.com/repos/huggingface/datasets/issues/5836/events | https://github.com/huggingface/datasets/pull/5836 | 1,702,773,316 | PR_kwDODunzps5QIgzu | 5,836 | [docs] Custom decoding transforms | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-09T21:21:41 | 2023-05-15T07:36:12 | 2023-05-10T20:23:03 | MEMBER | null | Adds custom decoding transform solution to the docs to fix #5782. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5836/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5836/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5836",
"html_url": "https://github.com/huggingface/datasets/pull/5836",
"diff_url": "https://github.com/huggingface/datasets/pull/5836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5836.patch",
"merged_at": "2023-05-10T20:23:03"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5835/comments | https://api.github.com/repos/huggingface/datasets/issues/5835/events | https://github.com/huggingface/datasets/pull/5835 | 1,702,522,620 | PR_kwDODunzps5QHquR | 5,835 | Always set nullable fields in the writer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-09T18:16:59 | 2023-05-23T16:10:29 | 2023-05-19T13:04:30 | MEMBER | null | This fixes loading of e.g. parquet data with non-nullable fields.
Indeed `datasets.Features` doesn't support non-nullable fields, which can lead to data not concatenable due to arrow schema mismatch. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5835/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5835",
"html_url": "https://github.com/huggingface/datasets/pull/5835",
"diff_url": "https://github.com/huggingface/datasets/pull/5835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5835.patch",
"merged_at": "2023-05-19T13:04:30"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5834/comments | https://api.github.com/repos/huggingface/datasets/issues/5834/events | https://github.com/huggingface/datasets/issues/5834 | 1,702,448,892 | I_kwDODunzps5leU78 | 5,834 | Is uint8 supported? | {
"login": "ryokan0123",
"id": 17979572,
"node_id": "MDQ6VXNlcjE3OTc5NTcy",
"avatar_url": "https://avatars.githubusercontent.com/u/17979572?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ryokan0123",
"html_url": "https://github.com/ryokan0123",
"followers_url": "https://api.github.com/users/ryokan0123/followers",
"following_url": "https://api.github.com/users/ryokan0123/following{/other_user}",
"gists_url": "https://api.github.com/users/ryokan0123/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ryokan0123/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ryokan0123/subscriptions",
"organizations_url": "https://api.github.com/users/ryokan0123/orgs",
"repos_url": "https://api.github.com/users/ryokan0123/repos",
"events_url": "https://api.github.com/users/ryokan0123/events{/privacy}",
"received_events_url": "https://api.github.com/users/ryokan0123/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-05-09T17:31:13 | 2023-05-13T05:04:21 | 2023-05-13T05:04:21 | NONE | null | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5834/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5833/comments | https://api.github.com/repos/huggingface/datasets/issues/5833/events | https://github.com/huggingface/datasets/issues/5833 | 1,702,280,682 | I_kwDODunzps5ldr3q | 5,833 | Unable to push dataset - `create_pr` problem | {
"login": "agombert",
"id": 17645711,
"node_id": "MDQ6VXNlcjE3NjQ1NzEx",
"avatar_url": "https://avatars.githubusercontent.com/u/17645711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/agombert",
"html_url": "https://github.com/agombert",
"followers_url": "https://api.github.com/users/agombert/followers",
"following_url": "https://api.github.com/users/agombert/following{/other_user}",
"gists_url": "https://api.github.com/users/agombert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/agombert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/agombert/subscriptions",
"organizations_url": "https://api.github.com/users/agombert/orgs",
"repos_url": "https://api.github.com/users/agombert/repos",
"events_url": "https://api.github.com/users/agombert/events{/privacy}",
"received_events_url": "https://api.github.com/users/agombert/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 8 | 2023-05-09T15:32:55 | 2023-07-20T17:17:00 | null | NONE | null | ### Describe the bug
I can't upload to the hub the dataset I manually created locally (Image dataset). I have a problem when using the method `.push_to_hub` which asks for a `create_pr` attribute which is not compatible.
### Steps to reproduce the bug
here what I have:
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
```
Output:
```python
Pushing split train to the Hub.
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:00<?, ?it/s]
Creating parquet from Arrow format: 0%| | 0/3 [00:00<?, ?ba/s]
Creating parquet from Arrow format: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 12.70ba/s]
Pushing dataset shards to the dataset hub: 0%| | 0/2 [00:01<?, ?it/s]
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:259, in hf_raise_for_status(response, endpoint_name)
258 try:
--> 259 response.raise_for_status()
260 except HTTPError as e:
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/requests/models.py:1021, in Response.raise_for_status(self)
1020 if http_error_msg:
-> 1021 raise HTTPError(http_error_msg, response=self)
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main
The above exception was the direct cause of the following exception:
HfHubHTTPError Traceback (most recent call last)
Cell In[7], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/dataset_dict.py:1583, in DatasetDict.push_to_hub(self, repo_id, private, token, branch, max_shard_size, num_shards, embed_external_files)
1581 logger.warning(f"Pushing split {split} to the Hub.")
1582 # The split=key needs to be removed before merging
-> 1583 repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub(
1584 repo_id,
1585 split=split,
1586 private=private,
1587 token=token,
1588 branch=branch,
1589 max_shard_size=max_shard_size,
1590 num_shards=num_shards.get(split),
1591 embed_external_files=embed_external_files,
1592 )
1593 total_uploaded_size += uploaded_size
1594 total_dataset_nbytes += dataset_nbytes
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/arrow_dataset.py:5275, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, num_shards, embed_external_files)
5273 shard.to_parquet(buffer)
5274 uploaded_size += buffer.tell()
-> 5275 _retry(
5276 api.upload_file,
5277 func_kwargs={
5278 "path_or_fileobj": buffer.getvalue(),
5279 "path_in_repo": shard_path_in_repo,
5280 "repo_id": repo_id,
5281 "token": token,
5282 "repo_type": "dataset",
5283 "revision": branch,
5284 },
5285 exceptions=HTTPError,
5286 status_codes=[504],
5287 base_wait_time=2.0,
5288 max_retries=5,
5289 max_wait_time=20.0,
5290 )
5291 shards_path_in_repo.append(shard_path_in_repo)
5293 # Cleanup to remove unused files
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:285, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
--> 285 raise err
286 else:
287 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/datasets/utils/file_utils.py:282, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time)
280 while True:
281 try:
--> 282 return func(*func_args, **func_kwargs)
283 except exceptions as err:
284 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes):
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2998, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, commit_message, commit_description, create_pr, parent_commit)
2990 commit_message = (
2991 commit_message if commit_message is not None else f"Upload {path_in_repo} with huggingface_hub"
2992 )
2993 operation = CommitOperationAdd(
2994 path_or_fileobj=path_or_fileobj,
2995 path_in_repo=path_in_repo,
2996 )
-> 2998 commit_info = self.create_commit(
2999 repo_id=repo_id,
3000 repo_type=repo_type,
3001 operations=[operation],
3002 commit_message=commit_message,
3003 commit_description=commit_description,
3004 token=token,
3005 revision=revision,
3006 create_pr=create_pr,
3007 parent_commit=parent_commit,
3008 )
3010 if commit_info.pr_url is not None:
3011 revision = quote(_parse_revision_from_pr_url(commit_info.pr_url), safe="")
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py:120, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs)
117 if check_use_auth_token:
118 kwargs = smoothly_deprecate_use_auth_token(fn_name=fn.__name__, has_token=has_token, kwargs=kwargs)
--> 120 return fn(*args, **kwargs)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/hf_api.py:2548, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads, parent_commit)
2546 try:
2547 commit_resp = get_session().post(url=commit_url, headers=headers, data=data, params=params)
-> 2548 hf_raise_for_status(commit_resp, endpoint_name="commit")
2549 except RepositoryNotFoundError as e:
2550 e.append_to_message(_CREATE_COMMIT_NO_REPO_ERROR_MESSAGE)
File ~/miniconda3/envs/hwocr/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py:301, in hf_raise_for_status(response, endpoint_name)
297 raise BadRequestError(message, response=response) from e
299 # Convert `HTTPError` into a `HfHubHTTPError` to display request information
300 # as well (request id and/or server error message)
--> 301 raise HfHubHTTPError(str(e), response=response) from e
HfHubHTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/agomberto/FrenchCensus-handwritten-texts/commit/main (Request ID: Root=1-645a66bf-255ad91602a6404e6cb70fba)
Forbidden: pass `create_pr=1` as a query parameter to create a Pull Request
```
And then when I do
```python
dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
```
I get
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[8], line 1
----> 1 dataset.push_to_hub("agomberto/FrenchCensus-handwritten-texts", create_pr=1)
TypeError: push_to_hub() got an unexpected keyword argument 'create_pr'
```
### Expected behavior
I would like to have the dataset updloaded [here](https://huggingface.co/datasets/agomberto/FrenchCensus-handwritten-texts).
### Environment info
```bash
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.8.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 1.5.3
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5833/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5832/comments | https://api.github.com/repos/huggingface/datasets/issues/5832/events | https://github.com/huggingface/datasets/issues/5832 | 1,702,135,336 | I_kwDODunzps5ldIYo | 5,832 | 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased | {
"login": "varungupta31",
"id": 51288316,
"node_id": "MDQ6VXNlcjUxMjg4MzE2",
"avatar_url": "https://avatars.githubusercontent.com/u/51288316?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/varungupta31",
"html_url": "https://github.com/varungupta31",
"followers_url": "https://api.github.com/users/varungupta31/followers",
"following_url": "https://api.github.com/users/varungupta31/following{/other_user}",
"gists_url": "https://api.github.com/users/varungupta31/gists{/gist_id}",
"starred_url": "https://api.github.com/users/varungupta31/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/varungupta31/subscriptions",
"organizations_url": "https://api.github.com/users/varungupta31/orgs",
"repos_url": "https://api.github.com/users/varungupta31/repos",
"events_url": "https://api.github.com/users/varungupta31/events{/privacy}",
"received_events_url": "https://api.github.com/users/varungupta31/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-05-09T14:14:59 | 2023-05-09T14:25:59 | 2023-05-09T14:25:59 | NONE | null | ### Describe the bug
Running [Bert-Large-Cased](https://huggingface.co/bert-large-cased) model causes `HTTPError`, with the following traceback-
```
HTTPError Traceback (most recent call last)
<ipython-input-6-5c580443a1ad> in <module>
----> 1 tokenizer = BertTokenizer.from_pretrained('bert-large-cased')
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1646 # At this point pretrained_model_name_or_path is either a directory or a model identifier name
1647 fast_tokenizer_file = get_fast_tokenizer_file(
-> 1648 pretrained_model_name_or_path, revision=revision, use_auth_token=use_auth_token
1649 )
1650 additional_files_names = {
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in get_fast_tokenizer_file(path_or_repo, revision, use_auth_token)
3406 """
3407 # Inspect all files from the repo/folder.
-> 3408 all_files = get_list_of_files(path_or_repo, revision=revision, use_auth_token=use_auth_token)
3409 tokenizer_files_map = {}
3410 for file_name in all_files:
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/transformers/file_utils.py in get_list_of_files(path_or_repo, revision, use_auth_token)
1685 token = None
1686 model_info = HfApi(endpoint=HUGGINGFACE_CO_RESOLVE_ENDPOINT).model_info(
-> 1687 path_or_repo, revision=revision, token=token
1688 )
1689 return [f.rfilename for f in model_info.siblings]
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/huggingface_hub/hf_api.py in model_info(self, repo_id, revision, token)
246 )
247 r = requests.get(path, headers=headers)
--> 248 r.raise_for_status()
249 d = r.json()
250 return ModelInfo(**d)
~/miniconda3/envs/cmd-chall/lib/python3.7/site-packages/requests/models.py in raise_for_status(self)
951
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
955 def close(self):
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/models/bert-large-cased
```
I have also tried running in offline mode, as [discussed here](https://huggingface.co/docs/transformers/installation#offline-mode)
```
HF_DATASETS_OFFLINE=1
TRANSFORMERS_OFFLINE=1
```
### Steps to reproduce the bug
1. `from transformers import BertTokenizer, BertModel`
2. `tokenizer = BertTokenizer.from_pretrained('bert-large-cased')`
### Expected behavior
Run without the HTTP error.
### Environment info
| # Name | Version | Build | Channel | |
|--------------------|------------|-----------------------------|---------|---|
| _libgcc_mutex | 0.1 | main | | |
| _openmp_mutex | 4.5 | 1_gnu | | |
| _pytorch_select | 0.1 | cpu_0 | | |
| appdirs | 1.4.4 | pypi_0 | pypi | |
| backcall | 0.2.0 | pypi_0 | pypi | |
| blas | 1.0 | mkl | | |
| bzip2 | 1.0.8 | h7b6447c_0 | | |
| ca-certificates | 2021.7.5 | h06a4308_1 | | |
| certifi | 2021.5.30 | py37h06a4308_0 | | |
| cffi | 1.14.6 | py37h400218f_0 | | |
| charset-normalizer | 2.0.3 | pypi_0 | pypi | |
| click | 8.0.1 | pypi_0 | pypi | |
| colorama | 0.4.4 | pypi_0 | pypi | |
| cudatoolkit | 11.1.74 | h6bb024c_0 | nvidia | |
| cycler | 0.11.0 | pypi_0 | pypi | |
| decorator | 5.0.9 | pypi_0 | pypi | |
| docker-pycreds | 0.4.0 | pypi_0 | pypi | |
| docopt | 0.6.2 | pypi_0 | pypi | |
| dominate | 2.6.0 | pypi_0 | pypi | |
| ffmpeg | 4.3 | hf484d3e_0 | pytorch | |
| filelock | 3.0.12 | pypi_0 | pypi | |
| fonttools | 4.38.0 | pypi_0 | pypi | |
| freetype | 2.10.4 | h5ab3b9f_0 | | |
| gitdb | 4.0.7 | pypi_0 | pypi | |
| gitpython | 3.1.18 | pypi_0 | pypi | |
| gmp | 6.2.1 | h2531618_2 | | |
| gnutls | 3.6.15 | he1e5248_0 | | |
| huggingface-hub | 0.0.12 | pypi_0 | pypi | |
| humanize | 3.10.0 | pypi_0 | pypi | |
| idna | 3.2 | pypi_0 | pypi | |
| importlib-metadata | 4.6.1 | pypi_0 | pypi | |
| intel-openmp | 2019.4 | 243 | | |
| ipdb | 0.13.9 | pypi_0 | pypi | |
| ipython | 7.25.0 | pypi_0 | pypi | |
| ipython-genutils | 0.2.0 | pypi_0 | pypi | |
| jedi | 0.18.0 | pypi_0 | pypi | |
| joblib | 1.0.1 | pypi_0 | pypi | |
| jpeg | 9b | h024ee3a_2 | | |
| jsonpickle | 1.5.2 | pypi_0 | pypi | |
| kiwisolver | 1.4.4 | pypi_0 | pypi | |
| lame | 3.100 | h7b6447c_0 | | |
| lcms2 | 2.12 | h3be6417_0 | | |
| ld_impl_linux-64 | 2.35.1 | h7274673_9 | | |
| libffi | 3.3 | he6710b0_2 | | |
| libgcc-ng | 9.3.0 | h5101ec6_17 | | |
| libgomp | 9.3.0 | h5101ec6_17 | | |
| libiconv | 1.15 | h63c8f33_5 | | |
| libidn2 | 2.3.2 | h7f8727e_0 | | |
| libmklml | 2019.0.5 | 0 | | |
| libpng | 1.6.37 | hbc83047_0 | | |
| libstdcxx-ng | 9.3.0 | hd4cf53a_17 | | |
| libtasn1 | 4.16.0 | h27cfd23_0 | | |
| libtiff | 4.2.0 | h85742a9_0 | | |
| libunistring | 0.9.10 | h27cfd23_0 | | |
| libuv | 1.40.0 | h7b6447c_0 | | |
| libwebp-base | 1.2.0 | h27cfd23_0 | | |
| lz4-c | 1.9.3 | h2531618_0 | | |
| matplotlib | 3.5.3 | pypi_0 | pypi | |
| matplotlib-inline | 0.1.2 | pypi_0 | pypi | |
| mergedeep | 1.3.4 | pypi_0 | pypi | |
| mkl | 2020.2 | 256 | | |
| mkl-service | 2.3.0 | py37he8ac12f_0 | | |
| mkl_fft | 1.3.0 | py37h54f3939_0 | | |
| mkl_random | 1.1.1 | py37h0573a6f_0 | | |
| msgpack | 1.0.2 | pypi_0 | pypi | |
| munch | 2.5.0 | pypi_0 | pypi | |
| ncurses | 6.2 | he6710b0_1 | | |
| nettle | 3.7.3 | hbbd107a_1 | | |
| ninja | 1.10.2 | hff7bd54_1 | | |
| nltk | 3.8.1 | pypi_0 | pypi | |
| numpy | 1.19.2 | py37h54aff64_0 | | |
| numpy-base | 1.19.2 | py37hfa32c7d_0 | | |
| olefile | 0.46 | py37_0 | | |
| openh264 | 2.1.0 | hd408876_0 | | |
| openjpeg | 2.3.0 | h05c96fa_1 | | |
| openssl | 1.1.1k | h27cfd23_0 | | |
| packaging | 21.0 | pypi_0 | pypi | |
| pandas | 1.3.1 | pypi_0 | pypi | |
| parso | 0.8.2 | pypi_0 | pypi | |
| pathtools | 0.1.2 | pypi_0 | pypi | |
| pexpect | 4.8.0 | pypi_0 | pypi | |
| pickleshare | 0.7.5 | pypi_0 | pypi | |
| pillow | 8.3.1 | py37h2c7a002_0 | | |
| pip | 21.1.3 | py37h06a4308_0 | | |
| prompt-toolkit | 3.0.19 | pypi_0 | pypi | |
| protobuf | 4.21.12 | pypi_0 | pypi | |
| psutil | 5.8.0 | pypi_0 | pypi | |
| ptyprocess | 0.7.0 | pypi_0 | pypi | |
| py-cpuinfo | 8.0.0 | pypi_0 | pypi | |
| pycparser | 2.20 | py_2 | | |
| pygments | 2.9.0 | pypi_0 | pypi | |
| pyparsing | 2.4.7 | pypi_0 | pypi | |
| python | 3.7.10 | h12debd9_4 | | |
| python-dateutil | 2.8.2 | pypi_0 | pypi | |
| pytorch | 1.9.0 | py3.7_cuda11.1_cudnn8.0.5_0 | pytorch | |
| pytz | 2021.1 | pypi_0 | pypi | |
| pyyaml | 5.4.1 | pypi_0 | pypi | |
| readline | 8.1 | h27cfd23_0 | | |
| regex | 2022.10.31 | pypi_0 | pypi | |
| requests | 2.26.0 | pypi_0 | pypi | |
| sacred | 0.8.2 | pypi_0 | pypi | |
| sacremoses | 0.0.45 | pypi_0 | pypi | |
| scikit-learn | 0.24.2 | pypi_0 | pypi | |
| scipy | 1.7.0 | pypi_0 | pypi | |
| sentry-sdk | 1.15.0 | pypi_0 | pypi | |
| setproctitle | 1.3.2 | pypi_0 | pypi | |
| setuptools | 52.0.0 | py37h06a4308_0 | | |
| six | 1.16.0 | pyhd3eb1b0_0 | | |
| smmap | 4.0.0 | pypi_0 | pypi | |
| sqlite | 3.36.0 | hc218d9a_0 | | |
| threadpoolctl | 2.2.0 | pypi_0 | pypi | |
| tk | 8.6.10 | hbc83047_0 | | |
| tokenizers | 0.10.3 | pypi_0 | pypi | |
| toml | 0.10.2 | pypi_0 | pypi | |
| torchaudio | 0.9.0 | py37 | pytorch | |
| torchvision | 0.10.0 | py37_cu111 | pytorch | |
| tqdm | 4.61.2 | pypi_0 | pypi | |
| traitlets | 5.0.5 | pypi_0 | pypi | |
| transformers | 4.9.1 | pypi_0 | pypi | |
| typing-extensions | 3.10.0.0 | hd3eb1b0_0 | | |
| typing_extensions | 3.10.0.0 | pyh06a4308_0 | | |
| urllib3 | 1.26.14 | pypi_0 | pypi | |
| wandb | 0.13.10 | pypi_0 | pypi | |
| wcwidth | 0.2.5 | pypi_0 | pypi | |
| wheel | 0.36.2 | pyhd3eb1b0_0 | | |
| wrapt | 1.12.1 | pypi_0 | pypi | |
| xz | 5.2.5 | h7b6447c_0 | | |
| zipp | 3.5.0 | pypi_0 | pypi | |
| zlib | 1.2.11 | h7b6447c_3 | | |
| zstd | 1.4.9 | haebb681_0 | | | | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5832/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5831/comments | https://api.github.com/repos/huggingface/datasets/issues/5831/events | https://github.com/huggingface/datasets/issues/5831 | 1,701,813,835 | I_kwDODunzps5lb55L | 5,831 | [Bug]504 Server Error when loading dataset which was already cached | {
"login": "SingL3",
"id": 20473466,
"node_id": "MDQ6VXNlcjIwNDczNDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/20473466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SingL3",
"html_url": "https://github.com/SingL3",
"followers_url": "https://api.github.com/users/SingL3/followers",
"following_url": "https://api.github.com/users/SingL3/following{/other_user}",
"gists_url": "https://api.github.com/users/SingL3/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SingL3/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SingL3/subscriptions",
"organizations_url": "https://api.github.com/users/SingL3/orgs",
"repos_url": "https://api.github.com/users/SingL3/repos",
"events_url": "https://api.github.com/users/SingL3/events{/privacy}",
"received_events_url": "https://api.github.com/users/SingL3/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 6 | 2023-05-09T10:31:07 | 2023-05-10T01:48:20 | null | NONE | null | ### Describe the bug
I have already cached the dataset using:
```
dataset = load_dataset("databricks/databricks-dolly-15k",
cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k")
```
After that, I tried to load it again using the same machine, I got this error:
```
Traceback (most recent call last):
File "/mnt/home/llm/pythia/train.py", line 16, in <module>
dataset = load_dataset("databricks/databricks-dolly-15k",
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1773, in load_dataset
builder_instance = load_dataset_builder(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1502, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1186, in dataset_module_factory
raise e
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1160, in dataset_module_factory
dataset_info = hf_api.dataset_info(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1667, in dataset_info
hf_raise_for_status(r)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 301, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/databricks/databricks-dolly-15k
```
### Steps to reproduce the bug
1. cache the databrick-dolly-15k dataset using load_dataset, setting a cache_dir
2. use load_dataset again, setting the same cache_dir
### Expected behavior
Dataset loaded succuessfully.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-372.16.1.el8_6.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5831/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5831/timeline | null | reopened | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5830/comments | https://api.github.com/repos/huggingface/datasets/issues/5830/events | https://github.com/huggingface/datasets/pull/5830 | 1,701,451,399 | PR_kwDODunzps5QEFEi | 5,830 | Debug windows #2 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-09T06:40:34 | 2023-05-09T06:40:47 | 2023-05-09T06:40:47 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5830/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5830",
"html_url": "https://github.com/huggingface/datasets/pull/5830",
"diff_url": "https://github.com/huggingface/datasets/pull/5830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5830.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5829/comments | https://api.github.com/repos/huggingface/datasets/issues/5829/events | https://github.com/huggingface/datasets/issues/5829 | 1,699,958,189 | I_kwDODunzps5lU02t | 5,829 | (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64')) | {
"login": "elcolie",
"id": 18206728,
"node_id": "MDQ6VXNlcjE4MjA2NzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/18206728?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elcolie",
"html_url": "https://github.com/elcolie",
"followers_url": "https://api.github.com/users/elcolie/followers",
"following_url": "https://api.github.com/users/elcolie/following{/other_user}",
"gists_url": "https://api.github.com/users/elcolie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elcolie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elcolie/subscriptions",
"organizations_url": "https://api.github.com/users/elcolie/orgs",
"repos_url": "https://api.github.com/users/elcolie/repos",
"events_url": "https://api.github.com/users/elcolie/events{/privacy}",
"received_events_url": "https://api.github.com/users/elcolie/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-08T10:07:14 | 2023-06-30T11:39:14 | 2023-05-09T00:46:42 | NONE | null | ### Describe the bug
M2 MBP can't run
```python
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Steps to reproduce the bug
1. Use M2 MBP
2. Python 3.10.10 from pyenv
3. Run
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
### Expected behavior
Be able to run normally
### Environment info
```
from datasets import load_dataset
jazzy = load_dataset("nomic-ai/gpt4all-j-prompt-generations", revision='v1.2-jazzy')
```
OSX: 13.2
CPU: M2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5829/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5829/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5828/comments | https://api.github.com/repos/huggingface/datasets/issues/5828/events | https://github.com/huggingface/datasets/issues/5828 | 1,699,235,739 | I_kwDODunzps5lSEeb | 5,828 | Stream data concatenation issue | {
"login": "krishnapriya-18",
"id": 48817796,
"node_id": "MDQ6VXNlcjQ4ODE3Nzk2",
"avatar_url": "https://avatars.githubusercontent.com/u/48817796?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krishnapriya-18",
"html_url": "https://github.com/krishnapriya-18",
"followers_url": "https://api.github.com/users/krishnapriya-18/followers",
"following_url": "https://api.github.com/users/krishnapriya-18/following{/other_user}",
"gists_url": "https://api.github.com/users/krishnapriya-18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/krishnapriya-18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/krishnapriya-18/subscriptions",
"organizations_url": "https://api.github.com/users/krishnapriya-18/orgs",
"repos_url": "https://api.github.com/users/krishnapriya-18/repos",
"events_url": "https://api.github.com/users/krishnapriya-18/events{/privacy}",
"received_events_url": "https://api.github.com/users/krishnapriya-18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-07T21:02:54 | 2023-06-29T20:07:56 | 2023-05-10T05:05:47 | NONE | null | ### Describe the bug
I am not able to concatenate the augmentation of the stream data. I am using the latest version of dataset.
ValueError: The features can't be aligned because the key audio of features {'audio_id': Value(dtype='string',
id=None), 'audio': {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None), 'path':
Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)}, 'transcript': Value(dtype='string',
id=None)} has unexpected type - {'array': Sequence(feature=Value(dtype='float32', id=None), length=-1, id=None),
'path': Value(dtype='null', id=None), 'sampling_rate': Value(dtype='int64', id=None)} (expected either
Audio(sampling_rate=16000, mono=True, decode=True, id=None) or Value("null").
### Steps to reproduce the bug
dataset = load_dataset("tobiolatunji/afrispeech-200", "all", streaming=True).shuffle(seed=42)
dataset_cln = dataset.remove_columns(['speaker_id', 'path', 'age_group', 'gender', 'accent', 'domain', 'country', 'duration'])
dataset_cln = dataset_cln.cast_column("audio", Audio(sampling_rate=16000))
from audiomentations import AddGaussianNoise,Compose,Gain,OneOf,PitchShift,PolarityInversion,TimeStretch
augmentation = Compose([
AddGaussianNoise(min_amplitude=0.005, max_amplitude=0.015, p=0.2)
])
def augment_dataset(batch):
audio = batch["audio"]
audio["array"] = augmentation(audio["array"], sample_rate=audio["sampling_rate"])
return batch
augmented_dataset_cln = dataset_cln['train'].map(augment_dataset)
dataset_cln['train'] = interleave_datasets([dataset_cln['train'], augmented_dataset_cln])
dataset_cln['train'] = dataset_cln['train'].shuffle(seed=42)
### Expected behavior
I should be able to merge as sampling rate is same.
### Environment info
import datasets
import transformers
import accelerate
print(datasets.__version__)
print(transformers.__version__)
print(torch.__version__)
print(evaluate.__version__)
print(accelerate.__version__)
2.12.0
4.28.1
2.0.0
0.4.0
0.18.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5828/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5827/comments | https://api.github.com/repos/huggingface/datasets/issues/5827/events | https://github.com/huggingface/datasets/issues/5827 | 1,698,891,246 | I_kwDODunzps5lQwXu | 5,827 | load json dataset interrupt when dtype cast problem occured | {
"login": "1014661165",
"id": 46060451,
"node_id": "MDQ6VXNlcjQ2MDYwNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/46060451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/1014661165",
"html_url": "https://github.com/1014661165",
"followers_url": "https://api.github.com/users/1014661165/followers",
"following_url": "https://api.github.com/users/1014661165/following{/other_user}",
"gists_url": "https://api.github.com/users/1014661165/gists{/gist_id}",
"starred_url": "https://api.github.com/users/1014661165/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/1014661165/subscriptions",
"organizations_url": "https://api.github.com/users/1014661165/orgs",
"repos_url": "https://api.github.com/users/1014661165/repos",
"events_url": "https://api.github.com/users/1014661165/events{/privacy}",
"received_events_url": "https://api.github.com/users/1014661165/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-05-07T04:52:09 | 2023-05-10T12:32:28 | null | NONE | null | ### Describe the bug
i have a json like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3},
....
]
,which have several problematic rows data like row 2, then i load it with datasets.load_dataset('json', data_files=['xx.json'], split='train'), it will report like this:
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file 'C:\Users\gawinjunwu\Downloads\test\data\a.json' with error <class 'pyarrow.lib.ArrowInvalid'>: Could not convert '2' with type str: tried to convert to int64
Traceback (most recent call last):
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "D:\Python3.9\lib\site-packages\datasets\packaged_modules\json\json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at C:\Users\gawinjunwu\Downloads\test\data\a.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\gawinjunwu\Downloads\test\scripts\a.py", line 4, in <module>
ds = load_dataset('json', data_dir='data', split='train')
File "D:\Python3.9\lib\site-packages\datasets\load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset.
Could datasets skip those problematic data row?
### Steps to reproduce the bug
prepare a json file like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3}
]
then use datasets.load_dataset('json', dir_files=['xxx.json']) to load the json file
### Expected behavior
skip the problematic data row and load row1 and row3
### Environment info
python3.9 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5827/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5826/comments | https://api.github.com/repos/huggingface/datasets/issues/5826/events | https://github.com/huggingface/datasets/pull/5826 | 1,698,155,751 | PR_kwDODunzps5P5FYZ | 5,826 | Support working_dir in from_spark | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-05-05T20:22:40 | 2023-05-25T17:45:54 | 2023-05-25T08:46:15 | CONTRIBUTOR | null | Accept `working_dir` as an argument to `Dataset.from_spark`. Setting a non-NFS working directory for Spark workers to materialize to will improve write performance. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5826/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5826",
"html_url": "https://github.com/huggingface/datasets/pull/5826",
"diff_url": "https://github.com/huggingface/datasets/pull/5826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5826.patch",
"merged_at": "2023-05-25T08:46:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5825/comments | https://api.github.com/repos/huggingface/datasets/issues/5825/events | https://github.com/huggingface/datasets/issues/5825 | 1,697,327,483 | I_kwDODunzps5lKyl7 | 5,825 | FileNotFound even though exists | {
"login": "Muennighoff",
"id": 62820084,
"node_id": "MDQ6VXNlcjYyODIwMDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muennighoff",
"html_url": "https://github.com/Muennighoff",
"followers_url": "https://api.github.com/users/Muennighoff/followers",
"following_url": "https://api.github.com/users/Muennighoff/following{/other_user}",
"gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions",
"organizations_url": "https://api.github.com/users/Muennighoff/orgs",
"repos_url": "https://api.github.com/users/Muennighoff/repos",
"events_url": "https://api.github.com/users/Muennighoff/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muennighoff/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-05-05T09:49:55 | 2023-08-16T10:02:01 | 2023-08-16T10:02:01 | CONTRIBUTOR | null | ### Describe the bug
I'm trying to download https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl which works fine in my webbrowser, but somehow not with datasets. Am I doing sth wrong?
```
Downloading builder script: 100%
2.82k/2.82k [00:00<00:00, 64.2kB/s]
Downloading readme: 100%
12.6k/12.6k [00:00<00:00, 585kB/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-2-4b45446a91d5>](https://localhost:8080/#) in <cell line: 4>()
2 lang = "ur"
3 fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
----> 4 dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
6 frames
[/usr/local/lib/python3.10/dist-packages/datasets/data_files.py](https://localhost:8080/#) in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions)
291 if allowed_extensions is not None:
292 error_msg += f" with any supported extension {list(allowed_extensions)}"
--> 293 raise FileNotFoundError(error_msg)
294 return sorted(out)
295
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/bigscience/xP3/resolve/main/ur/xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl' at /content/https:/huggingface.co/datasets/bigscience/xP3/resolve/main
```
### Steps to reproduce the bug
```
!pip install -q datasets
from datasets import load_dataset
lang = "ur"
fname = "xp3_facebook_flores_spa_Latn-urd_Arab_devtest_ab-spa_Latn-urd_Arab.jsonl"
dataset = load_dataset("bigscience/xP3", data_files=f"{lang}/{fname}")
```
### Expected behavior
Correctly downloads
### Environment info
latest versions | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5825/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5824/comments | https://api.github.com/repos/huggingface/datasets/issues/5824/events | https://github.com/huggingface/datasets/pull/5824 | 1,697,152,148 | PR_kwDODunzps5P1rIZ | 5,824 | Fix incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-05T07:34:28 | 2023-05-05T12:39:14 | 2023-05-05T12:31:54 | CONTRIBUTOR | null | Fixes #5820
Also fixed a couple of typos I spotted | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5824/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5824",
"html_url": "https://github.com/huggingface/datasets/pull/5824",
"diff_url": "https://github.com/huggingface/datasets/pull/5824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5824.patch",
"merged_at": "2023-05-05T12:31:54"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | {
"login": "thejamesmarq",
"id": 5233185,
"node_id": "MDQ6VXNlcjUyMzMxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/5233185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thejamesmarq",
"html_url": "https://github.com/thejamesmarq",
"followers_url": "https://api.github.com/users/thejamesmarq/followers",
"following_url": "https://api.github.com/users/thejamesmarq/following{/other_user}",
"gists_url": "https://api.github.com/users/thejamesmarq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thejamesmarq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thejamesmarq/subscriptions",
"organizations_url": "https://api.github.com/users/thejamesmarq/orgs",
"repos_url": "https://api.github.com/users/thejamesmarq/repos",
"events_url": "https://api.github.com/users/thejamesmarq/events{/privacy}",
"received_events_url": "https://api.github.com/users/thejamesmarq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-05T05:22:59 | 2023-05-05T15:01:18 | 2023-05-05T15:01:17 | NONE | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a DatsetDict `dataset`
2. Create a S3FileSystem object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the path at f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that files have been saved there
### Expected behavior
Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5822/comments | https://api.github.com/repos/huggingface/datasets/issues/5822/events | https://github.com/huggingface/datasets/issues/5822 | 1,696,627,308 | I_kwDODunzps5lIHps | 5,822 | Audio Dataset with_format torch problem | {
"login": "paulbauriegel",
"id": 20282916,
"node_id": "MDQ6VXNlcjIwMjgyOTE2",
"avatar_url": "https://avatars.githubusercontent.com/u/20282916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/paulbauriegel",
"html_url": "https://github.com/paulbauriegel",
"followers_url": "https://api.github.com/users/paulbauriegel/followers",
"following_url": "https://api.github.com/users/paulbauriegel/following{/other_user}",
"gists_url": "https://api.github.com/users/paulbauriegel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/paulbauriegel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/paulbauriegel/subscriptions",
"organizations_url": "https://api.github.com/users/paulbauriegel/orgs",
"repos_url": "https://api.github.com/users/paulbauriegel/repos",
"events_url": "https://api.github.com/users/paulbauriegel/events{/privacy}",
"received_events_url": "https://api.github.com/users/paulbauriegel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-04T20:07:51 | 2023-05-11T20:45:53 | 2023-05-11T20:45:53 | NONE | null | ### Describe the bug
Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('numpy'))
audio_dataset[0]["audio"]
```
works, but
```
audio_dataset = \
(Dataset
.from_dict({"audio": ('/tmp/cv-corpus-10.0-delta-2022-07-04/de/clips/' + df.path).to_list()})
.cast_column("audio", Audio(sampling_rate=16_000))
.with_format('torch'))
audio_dataset[0]["audio"]
```
does not instead I get
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[54], line 1
----> 1 audio_dataset[0]["audio"]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key)
2152 def __getitem__(self, key): # noqa: F811
2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2154 return self._getitem(
2155 key,
2156 )
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs)
2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs)
2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2139 formatted_output = format_table(
2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2141 )
2142 return formatted_output
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:58, in TorchFormatter.format_row(self, pa_table)
56 def format_row(self, pa_table: pa.Table) -> dict:
57 row = self.numpy_arrow_extractor().extract_row(pa_table)
---> 58 return self.recursive_tensorize(row)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:54, in TorchFormatter.recursive_tensorize(self, data_struct)
53 def recursive_tensorize(self, data_struct: dict):
---> 54 return map_nested(self._recursive_tensorize, data_struct, map_list=False)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:356, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
--> 356 mapped = [
357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:357, in <listcomp>(.0)
354 num_proc = 1
355 if num_proc <= 1 or len(iterable) <= num_proc:
356 mapped = [
--> 357 _single_map_nested((function, obj, types, None, True, None))
358 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
359 ]
360 else:
361 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in _single_map_nested(args)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:309, in <dictcomp>(.0)
306 pbar = logging.tqdm(pbar_iterable, disable=disable_tqdm, position=rank, unit="obj", desc=pbar_desc)
308 if isinstance(data_struct, dict):
--> 309 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar}
310 else:
311 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/utils/py_utils.py:293, in _single_map_nested(args)
291 # Singleton first to spare some computation
292 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 293 return function(data_struct)
295 # Reduce logging to keep things readable in multiprocessing with tqdm
296 if rank is not None and logging.get_verbosity() < logging.WARNING:
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:51, in TorchFormatter._recursive_tensorize(self, data_struct)
49 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects
50 return [self.recursive_tensorize(substruct) for substruct in data_struct]
---> 51 return self._tensorize(data_struct)
File /anaconda/envs/azureml_py38/lib/python3.8/site-packages/datasets/formatting/torch_formatter.py:38, in TorchFormatter._tensorize(self, value)
35 import torch
37 default_dtype = {}
---> 38 if np.issubdtype(value.dtype, np.integer):
39 default_dtype = {"dtype": torch.int64}
40 elif np.issubdtype(value.dtype, np.floating):
AttributeError: 'NoneType' object has no attribute 'dtype'
```
### Steps to reproduce the bug
1. Download some audio dataset in this case I used Common Voice v10 Delta (German) Dataset from here https://commonvoice.mozilla.org/de/datasets
2. Try the Code from above
### Expected behavior
It should work for torch
### Environment info
pytorch: 2.0.0
datasets: 2.3.2
numpy: 1.21.6
Python: 3.8
Linux | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5822/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5821/comments | https://api.github.com/repos/huggingface/datasets/issues/5821/events | https://github.com/huggingface/datasets/pull/5821 | 1,696,400,343 | PR_kwDODunzps5PzHLU | 5,821 | IterableDataset Arrow formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 13 | 2023-05-04T17:23:43 | 2023-05-31T09:43:26 | 2023-05-31T09:36:18 | MEMBER | null | Adding an optional `.iter_arrow` to examples iterable. This allows to use Arrow formatting in map/filter.
This will also be useful for torch formatting, since we can reuse the TorchFormatter that converts Arrow data to torch tensors
Related to https://github.com/huggingface/datasets/issues/5793 and https://github.com/huggingface/datasets/issues/3444
Required for https://github.com/huggingface/datasets/pull/5852
### Example:
Speed x10 in map
```python
from datasets import Dataset
import pyarrow.compute as pc
import time
ds = Dataset.from_dict({"a": range(100_000)})
ids = ds.to_iterable_dataset()
ids = ids.map(lambda x: {"a": [a + 10 for a in x["a"]]}, batched=True)
_start = time.time()
print(f"Python ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms")
# Python (100000 items): 695.7ms
ids = ds.to_iterable_dataset().with_format("arrow")
ids = ids.map(lambda t: t.set_column(0, "a", pc.add(t[0], 10)), batched=True)
ids = ids.with_format(None)
_start = time.time()
print(f"Arrow ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms)")
# Arrow (100000 items): 81.0ms)
```
### Implementation details
I added an optional `iter_arrow` method to examples iterable. If an example iterable has this method, then it can be used to iterate on the examples by batch of arrow tables. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5821/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5821/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5821",
"html_url": "https://github.com/huggingface/datasets/pull/5821",
"diff_url": "https://github.com/huggingface/datasets/pull/5821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5821.patch",
"merged_at": "2023-05-31T09:36:18"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5820/comments | https://api.github.com/repos/huggingface/datasets/issues/5820/events | https://github.com/huggingface/datasets/issues/5820 | 1,695,892,811 | I_kwDODunzps5lFUVL | 5,820 | Incomplete docstring for `BuilderConfig` | {
"login": "Laurent2916",
"id": 21087104,
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Laurent2916",
"html_url": "https://github.com/Laurent2916",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | 1 | 2023-05-04T12:14:34 | 2023-05-05T12:31:56 | 2023-05-05T12:31:56 | CONTRIBUTOR | null | Hi guys !
I stumbled upon this docstring while working on a project.
Some of the attributes have missing descriptions.
https://github.com/huggingface/datasets/blob/bc5fef5b6d91f009e4101684adcb374df2c170f6/src/datasets/builder.py#L104-L117 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5820/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5819/comments | https://api.github.com/repos/huggingface/datasets/issues/5819/events | https://github.com/huggingface/datasets/issues/5819 | 1,695,536,738 | I_kwDODunzps5lD9Zi | 5,819 | Cannot pickle error in Dataset.from_generator() | {
"login": "xinghaow99",
"id": 50691954,
"node_id": "MDQ6VXNlcjUwNjkxOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/50691954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinghaow99",
"html_url": "https://github.com/xinghaow99",
"followers_url": "https://api.github.com/users/xinghaow99/followers",
"following_url": "https://api.github.com/users/xinghaow99/following{/other_user}",
"gists_url": "https://api.github.com/users/xinghaow99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xinghaow99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xinghaow99/subscriptions",
"organizations_url": "https://api.github.com/users/xinghaow99/orgs",
"repos_url": "https://api.github.com/users/xinghaow99/repos",
"events_url": "https://api.github.com/users/xinghaow99/events{/privacy}",
"received_events_url": "https://api.github.com/users/xinghaow99/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-05-04T08:39:09 | 2023-05-05T19:20:59 | 2023-05-05T19:20:58 | NONE | null | ### Describe the bug
I'm trying to use Dataset.from_generator() to generate a large dataset.
### Steps to reproduce the bug
Code to reproduce:
```
from transformers import T5Tokenizer, T5ForConditionalGeneration, GenerationConfig
import torch
from tqdm import tqdm
from datasets import load_dataset
tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-small")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-small", device_map="auto")
model = torch.compile(model)
def generate_data(data_loader):
model.eval()
for batch in tqdm(data_loader):
input_ids = tokenizer(batch['instruction'], return_tensors='pt', padding=True, truncation=True).input_ids.to("cuda:0")
with torch.no_grad():
outputs = model.generate(input_ids, generation_config=generation_config)
decoder_hidden_states = outputs.decoder_hidden_states
for i, h in zip(batch['instruction'], decoder_hidden_states):
yield {"instruction": i, "decoder_hidden_states": h}
generation_config = GenerationConfig(
temperature=1,
max_new_tokens=1024,
do_sample=False,
num_return_sequences=1,
return_dict_in_generate=True,
output_scores=True,
output_hidden_states=True,
)
from datasets import Dataset, load_dataset
from torch.utils.data import DataLoader
dataset = load_dataset("HuggingFaceH4/databricks_dolly_15k")
train_loader = DataLoader(dataset['train'], batch_size=2, shuffle=True)
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
dataset.save_to_disk("data/flant5_small_generation")
```
### Expected behavior
The dataset should be generated and saved.
But the following error occurred:
```
Traceback (most recent call last):
File "/remote-home/xhwang/alpaca-lora/data_collection_t5.py", line 46, in <module>
dataset = Dataset.from_generator(generator=generate_data, gen_kwargs={"data_loader": train_loader})
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1035, in from_generator
return GeneratorDatasetInputStream(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/io/generator.py", line 28, in __init__
self.builder = Generator(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 336, in __init__
self.config, self.config_id = self._create_builder_config(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 505, in _create_builder_config
config_id = builder_config.create_config_id(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/builder.py", line 179, in create_config_id
suffix = Hasher.hash(config_kwargs_to_add_to_suffix)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 236, in hash
return cls.hash_default(value)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/fingerprint.py", line 229, in hash_default
return cls.hash_bytes(dumps(value))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 726, in dumps
dump(obj, file)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 701, in dump
Pickler(file, recurse=True).dump(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 394, in dump
StockPickler.dump(self, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 487, in dump
self.save(obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 603, in save
self.save_reduce(obj=obj, *rv)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1070, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 717, in save_reduce
save(state)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 887, in save_tuple
save(element)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1186, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 1003, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1311, in save_function
dill._dill._save_with_postproc(
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 1084, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 998, in _batch_setitems
save(v)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 691, in save
dill.Pickler.save(self, obj, save_persistent_id=save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/site-packages/dill/_dill.py", line 388, in save
StockPickler.save(self, obj, save_persistent_id)
File "/remote-home/xhwang/anaconda3/envs/alpaca-lora/lib/python3.10/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle 'ConfigModuleInstance' object
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.31
- Python version: 3.10.10
- Huggingface_hub version: 0.13.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5819/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5818/comments | https://api.github.com/repos/huggingface/datasets/issues/5818/events | https://github.com/huggingface/datasets/issues/5818 | 1,695,052,555 | I_kwDODunzps5lCHML | 5,818 | Ability to update a dataset | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2023-05-04T01:08:13 | 2023-05-04T20:43:39 | null | NONE | null | ### Feature request
The ability to load a dataset, add or change something, and save it back to disk.
Maybe it's possible, but I can't work out how to do it, e.g. this fails:
```py
import datasets
dataset = datasets.load_from_disk("data/test1")
dataset = dataset.add_item({"text": "A new item"})
dataset.save_to_disk("data/test1")
```
With the error:
```
PermissionError: Tried to overwrite /mnt/c/Users/david/py/learning/mini_projects/data_sorting_and_filtering/data/test1 but a dataset can't overwrite itself.
```
### Motivation
My use case is that I want to process a dataset in a particular way but it doesn't fit in memory if I do it in one go. So I want to perform a loop and at each step in the loop, process one shard and append it to an ever-growing dataset. The code in the loop will load a dataset, add some rows, then save it again.
Maybe I'm just thinking about things incorrectly and there's a better approach. FWIW I can't use `dataset.map()` to do the task because that doesn't work with `num_proc` when adding rows, so is confined to a single process which is too slow.
The only other way I can think of is to create a new file each time, but surely that's not how people do this sort of thing.
### Your contribution
na | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5818/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5817/comments | https://api.github.com/repos/huggingface/datasets/issues/5817/events | https://github.com/huggingface/datasets/issues/5817 | 1,694,891,866 | I_kwDODunzps5lBf9a | 5,817 | Setting `num_proc` errors when `.map` returns additional items. | {
"login": "davidgilbertson",
"id": 4443482,
"node_id": "MDQ6VXNlcjQ0NDM0ODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidgilbertson",
"html_url": "https://github.com/davidgilbertson",
"followers_url": "https://api.github.com/users/davidgilbertson/followers",
"following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}",
"gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions",
"organizations_url": "https://api.github.com/users/davidgilbertson/orgs",
"repos_url": "https://api.github.com/users/davidgilbertson/repos",
"events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidgilbertson/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-03T21:46:53 | 2023-05-04T21:14:21 | 2023-05-04T20:22:25 | NONE | null | ### Describe the bug
I'm using a map function that returns more rows than are passed in.
If I try to use `num_proc` I get:
```
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in iflatmap_unordered(
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv
raise EOFError
EOFError
```
### Steps to reproduce the bug
This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error.
```py
import datasets
dataset = ... # any old dataset
def chunk_examples(examples):
chunks = []
for sentence in examples["text"]:
chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)]
return {"chunks": chunks}
chunked_dataset = dataset.map(
chunk_examples,
batched=True,
remove_columns=dataset.column_names,
num_proc=2, # Remove and it works
)
```
### Expected behavior
Should work fine. On a related note, multi-processing also fails if there is a Meta class anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long standing issue.
Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multi-processing breaks more than it works so have written my own function using `loky`, for reference:
```py
import datasets
import loky
def fast_loop(dataset: datasets.Dataset, func, num_proc=None):
if num_proc is None:
import os
num_proc = len(os.sched_getaffinity(0))
shards = [
dataset.shard(num_shards=num_proc, index=i, contiguous=True)
for i in range(num_proc)
]
executor = loky.get_reusable_executor(max_workers=num_proc)
results = executor.map(func, shards)
return datasets.combine.concatenate_datasets(list(results))
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5817/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5816/comments | https://api.github.com/repos/huggingface/datasets/issues/5816/events | https://github.com/huggingface/datasets/pull/5816 | 1,694,590,856 | PR_kwDODunzps5Ps4t9 | 5,816 | Preserve `stopping_strategy` of shuffled interleaved dataset (random cycling case) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-05-03T18:34:18 | 2023-05-04T14:31:55 | 2023-05-04T14:24:49 | CONTRIBUTOR | null | Preserve the `stopping_strategy` in the `RandomlyCyclingMultiSourcesExamplesIterable.shard_data_sources` to fix shuffling a dataset interleaved (from multiple sources) with probabilities.
Fix #5812
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5816/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5816",
"html_url": "https://github.com/huggingface/datasets/pull/5816",
"diff_url": "https://github.com/huggingface/datasets/pull/5816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5816.patch",
"merged_at": "2023-05-04T14:24:49"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5814/comments | https://api.github.com/repos/huggingface/datasets/issues/5814/events | https://github.com/huggingface/datasets/pull/5814 | 1,693,216,778 | PR_kwDODunzps5PoOQ9 | 5,814 | Repro windows crash | {
"login": "maddiedawson",
"id": 106995444,
"node_id": "U_kgDOBmCe9A",
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maddiedawson",
"html_url": "https://github.com/maddiedawson",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-05-02T23:30:18 | 2023-05-02T23:47:07 | null | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5814/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5814",
"html_url": "https://github.com/huggingface/datasets/pull/5814",
"diff_url": "https://github.com/huggingface/datasets/pull/5814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5814.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5815/comments | https://api.github.com/repos/huggingface/datasets/issues/5815/events | https://github.com/huggingface/datasets/issues/5815 | 1,693,701,743 | I_kwDODunzps5k89Zv | 5,815 | Easy way to create a Kaggle dataset from a Huggingface dataset? | {
"login": "hrbigelow",
"id": 5355286,
"node_id": "MDQ6VXNlcjUzNTUyODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5355286?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hrbigelow",
"html_url": "https://github.com/hrbigelow",
"followers_url": "https://api.github.com/users/hrbigelow/followers",
"following_url": "https://api.github.com/users/hrbigelow/following{/other_user}",
"gists_url": "https://api.github.com/users/hrbigelow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hrbigelow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrbigelow/subscriptions",
"organizations_url": "https://api.github.com/users/hrbigelow/orgs",
"repos_url": "https://api.github.com/users/hrbigelow/repos",
"events_url": "https://api.github.com/users/hrbigelow/events{/privacy}",
"received_events_url": "https://api.github.com/users/hrbigelow/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-05-02T21:43:33 | 2023-07-26T16:13:31 | null | NONE | null | I'm not sure whether this is more appropriately addressed with HuggingFace or Kaggle. I would like to somehow directly create a Kaggle dataset from a HuggingFace Dataset.
While Kaggle does provide the option to create a dataset from a URI, that URI must point to a single file. For example:
![image](https://user-images.githubusercontent.com/5355286/235792394-7c559d07-4aff-45b7-ad2b-9c5280c88415.png)
Is there some mechanism from huggingface to represent a dataset (such as that from `load_dataset('wmt14', 'de-en', split='train')` as a single file? Or, some other way to get that into a Kaggle dataset so that I can use the huggingface `datasets` module to process and consume it inside of a Kaggle notebook?
Thanks in advance!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5815/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5813/comments | https://api.github.com/repos/huggingface/datasets/issues/5813/events | https://github.com/huggingface/datasets/pull/5813 | 1,691,908,535 | PR_kwDODunzps5Pj0_E | 5,813 | [DO-NOT-MERGE] Debug Windows issue at #3 | {
"login": "HyukjinKwon",
"id": 6477701,
"node_id": "MDQ6VXNlcjY0Nzc3MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6477701?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HyukjinKwon",
"html_url": "https://github.com/HyukjinKwon",
"followers_url": "https://api.github.com/users/HyukjinKwon/followers",
"following_url": "https://api.github.com/users/HyukjinKwon/following{/other_user}",
"gists_url": "https://api.github.com/users/HyukjinKwon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HyukjinKwon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HyukjinKwon/subscriptions",
"organizations_url": "https://api.github.com/users/HyukjinKwon/orgs",
"repos_url": "https://api.github.com/users/HyukjinKwon/repos",
"events_url": "https://api.github.com/users/HyukjinKwon/events{/privacy}",
"received_events_url": "https://api.github.com/users/HyukjinKwon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-05-02T07:19:34 | 2023-05-02T07:21:30 | 2023-05-02T07:21:30 | NONE | null | TBD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5813/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5813",
"html_url": "https://github.com/huggingface/datasets/pull/5813",
"diff_url": "https://github.com/huggingface/datasets/pull/5813.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5813.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5812/comments | https://api.github.com/repos/huggingface/datasets/issues/5812/events | https://github.com/huggingface/datasets/issues/5812 | 1,691,798,169 | I_kwDODunzps5k1sqZ | 5,812 | Cannot shuffle interleaved IterableDataset with "all_exhausted" stopping strategy | {
"login": "off99555",
"id": 15215732,
"node_id": "MDQ6VXNlcjE1MjE1NzMy",
"avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/off99555",
"html_url": "https://github.com/off99555",
"followers_url": "https://api.github.com/users/off99555/followers",
"following_url": "https://api.github.com/users/off99555/following{/other_user}",
"gists_url": "https://api.github.com/users/off99555/gists{/gist_id}",
"starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/off99555/subscriptions",
"organizations_url": "https://api.github.com/users/off99555/orgs",
"repos_url": "https://api.github.com/users/off99555/repos",
"events_url": "https://api.github.com/users/off99555/events{/privacy}",
"received_events_url": "https://api.github.com/users/off99555/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-05-02T05:26:17 | 2023-05-04T14:24:51 | 2023-05-04T14:24:51 | NONE | null | ### Describe the bug
Shuffling interleaved `IterableDataset` with "all_exhausted" strategy yields non-exhaustive sampling.
### Steps to reproduce the bug
```py
from datasets import IterableDataset, interleave_datasets
def gen(bias, length):
for i in range(length):
yield dict(a=bias+i)
seed = 42
probabilities = [0.2, 0.6, 0.2]
d1 = IterableDataset.from_generator(lambda: gen(0, 3))
d2 = IterableDataset.from_generator(lambda: gen(10, 4))
d3 = IterableDataset.from_generator(lambda: gen(20, 3))
ds = interleave_datasets([d1, d2, d3], probabilities=probabilities, seed=seed, stopping_strategy='all_exhausted')
ds = ds.shuffle(buffer_size=1000)
for x in ds:
print(x)
```
This code produces
```
{'a': 0}
{'a': 22}
{'a': 20}
{'a': 21}
{'a': 10}
{'a': 1}
```
### Expected behavior
It should produce a longer list of examples to exhaust all the datasets.
If you comment out the shuffle line, it will exhaust all the datasets properly.
Here is the output if you comment out shuffling:
```
{'a': 10}
{'a': 11}
{'a': 20}
{'a': 12}
{'a': 0}
{'a': 21}
{'a': 13}
{'a': 10}
{'a': 1}
{'a': 11}
{'a': 12}
{'a': 22}
{'a': 13}
{'a': 20}
{'a': 10}
{'a': 11}
{'a': 12}
{'a': 2}
```
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
This was run on Google Colab. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5812/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5811/comments | https://api.github.com/repos/huggingface/datasets/issues/5811/events | https://github.com/huggingface/datasets/issues/5811 | 1,689,919,046 | I_kwDODunzps5kuh5G | 5,811 | load_dataset: TypeError: 'NoneType' object is not callable, on local dataset filename changes | {
"login": "durapensa",
"id": 50685483,
"node_id": "MDQ6VXNlcjUwNjg1NDgz",
"avatar_url": "https://avatars.githubusercontent.com/u/50685483?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/durapensa",
"html_url": "https://github.com/durapensa",
"followers_url": "https://api.github.com/users/durapensa/followers",
"following_url": "https://api.github.com/users/durapensa/following{/other_user}",
"gists_url": "https://api.github.com/users/durapensa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/durapensa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/durapensa/subscriptions",
"organizations_url": "https://api.github.com/users/durapensa/orgs",
"repos_url": "https://api.github.com/users/durapensa/repos",
"events_url": "https://api.github.com/users/durapensa/events{/privacy}",
"received_events_url": "https://api.github.com/users/durapensa/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-04-30T13:27:17 | 2023-05-05T17:44:03 | null | NONE | null | ### Describe the bug
I've adapted Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py) to train using a local dataset, which has been working. Upon changing the filenames of the `.json` & `.py` files in my local dataset directory, `dataset = load_dataset(path_or_dataset)["train"]` throws the error:
```python
2023-04-30 09:10:52 INFO [training.trainer] Loading dataset from dushowxa-characters
Traceback (most recent call last):
File "/data/dushowxa-dolly/train_dushowxa.py", line 26, in <module>
load_training_dataset()
File "/data/dushowxa-dolly/training/trainer.py", line 89, in load_training_dataset
dataset = load_dataset(path_or_dataset)["train"]
File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1773, in load_dataset
builder_instance = load_dataset_builder(
File "/data/dushowxa-dolly/.venv/lib/python3.10/site-packages/datasets/load.py", line 1528, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
TypeError: 'NoneType' object is not callable
```
The local dataset filenames were of the form `dushowxa-characters/expanse-dushowxa-characters.json` and are now of the form `dushowxa-characters/dushowxa-characters.json` (the word `expanse-` was removed from the filenames). Is this perhaps a dataset caching issue?
I have attempted to manually clear caches, but to no effect:
```sh
rm -rfv ~/.cache/huggingface/datasets/*
rm -rfv ~/.cache/huggingface/modules/*
```
### Steps to reproduce the bug
Run `python3 train_dushowxa.py` (adapted from Databrick's [train_dolly.py](/databrickslabs/dolly/blob/master/train_dolly.py)).
### Expected behavior
Training succeeds as before local dataset filenames were changed.
### Environment info
Ubuntu 22.04, Python 3.10.6, venv
```python
accelerate>=0.16.0,<1
click>=8.0.4,<9
datasets>=2.10.0,<3
deepspeed>=0.9.0,<1
transformers[torch]>=4.28.1,<5
langchain>=0.0.139
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5811/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5810/comments | https://api.github.com/repos/huggingface/datasets/issues/5810/events | https://github.com/huggingface/datasets/pull/5810 | 1,689,917,822 | PR_kwDODunzps5PdJHI | 5,810 | Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict` | {
"login": "yuukicammy",
"id": 3927621,
"node_id": "MDQ6VXNlcjM5Mjc2MjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3927621?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yuukicammy",
"html_url": "https://github.com/yuukicammy",
"followers_url": "https://api.github.com/users/yuukicammy/followers",
"following_url": "https://api.github.com/users/yuukicammy/following{/other_user}",
"gists_url": "https://api.github.com/users/yuukicammy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yuukicammy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yuukicammy/subscriptions",
"organizations_url": "https://api.github.com/users/yuukicammy/orgs",
"repos_url": "https://api.github.com/users/yuukicammy/repos",
"events_url": "https://api.github.com/users/yuukicammy/events{/privacy}",
"received_events_url": "https://api.github.com/users/yuukicammy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 9 | 2023-04-30T13:23:01 | 2023-05-22T08:12:39 | 2023-05-22T08:05:31 | CONTRIBUTOR | null | # Overview
I've added an argument`fn_kwargs` for map and filter methods of `IterableDataset` and `IterableDatasetDict` classes.
# Details
Currently, the map and filter methods of some classes related to `IterableDataset` do not allow specifing the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function. This allows users to preprocess data more flexibly.
Added `fn_kwargs` to the following classes and methods (description of the argument is also added).
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`
# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict
def preprocess_function(example, a=None, b=None):
# do something
return example
dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```
# Related Issues
This pull request is related to the following issue:
https://github.com/huggingface/datasets/issues/3444 .
# Testing
I have added unit tests to test the new functionality.
In test_iterable_dataset.py
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. This is not a newly added feature, but was added because it was not tested.
In test_dataset_dict.py
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).
Note that, there is no test for `IterableDatasetDict` at the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but I decided to add them in the test file for `DatasetDict` (test_dataset_dict.py).
# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5810/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5810",
"html_url": "https://github.com/huggingface/datasets/pull/5810",
"diff_url": "https://github.com/huggingface/datasets/pull/5810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5810.patch",
"merged_at": "2023-05-22T08:05:31"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5809/comments | https://api.github.com/repos/huggingface/datasets/issues/5809/events | https://github.com/huggingface/datasets/issues/5809 | 1,689,797,293 | I_kwDODunzps5kuEKt | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | {
"login": "yulgok22",
"id": 64122846,
"node_id": "MDQ6VXNlcjY0MTIyODQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/64122846?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yulgok22",
"html_url": "https://github.com/yulgok22",
"followers_url": "https://api.github.com/users/yulgok22/followers",
"following_url": "https://api.github.com/users/yulgok22/following{/other_user}",
"gists_url": "https://api.github.com/users/yulgok22/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yulgok22/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yulgok22/subscriptions",
"organizations_url": "https://api.github.com/users/yulgok22/orgs",
"repos_url": "https://api.github.com/users/yulgok22/repos",
"events_url": "https://api.github.com/users/yulgok22/events{/privacy}",
"received_events_url": "https://api.github.com/users/yulgok22/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-04-30T06:12:04 | 2023-07-21T14:11:00 | 2023-07-21T14:11:00 | NONE | null | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets. but I don't know how to make the embedding value the same way as wiki_dpr.
As an experiment, I embeds the text of id="7" of wiki_dpr, but this result was very different from wiki_dpr. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5809/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5807/comments | https://api.github.com/repos/huggingface/datasets/issues/5807/events | https://github.com/huggingface/datasets/pull/5807 | 1,688,977,237 | PR_kwDODunzps5PaKRE | 5,807 | Support parallelized downloading in load_dataset with Spark | {
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-04-28T18:34:32 | 2023-05-25T16:54:14 | 2023-05-25T16:54:14 | CONTRIBUTOR | null | As proposed in https://github.com/huggingface/datasets/issues/5798, this adds support to parallelized downloading in `load_dataset` with Spark, which can speed up the process by distributing the workload to worker nodes.
Parallelizing dataset processing is not supported in this PR. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5807/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5807",
"html_url": "https://github.com/huggingface/datasets/pull/5807",
"diff_url": "https://github.com/huggingface/datasets/pull/5807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5807.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5806/comments | https://api.github.com/repos/huggingface/datasets/issues/5806/events | https://github.com/huggingface/datasets/issues/5806 | 1,688,598,095 | I_kwDODunzps5kpfZP | 5,806 | Return the name of the currently loaded file in the load_dataset function. | {
"login": "s-JoL",
"id": 16948304,
"node_id": "MDQ6VXNlcjE2OTQ4MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/s-JoL",
"html_url": "https://github.com/s-JoL",
"followers_url": "https://api.github.com/users/s-JoL/followers",
"following_url": "https://api.github.com/users/s-JoL/following{/other_user}",
"gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}",
"starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions",
"organizations_url": "https://api.github.com/users/s-JoL/orgs",
"repos_url": "https://api.github.com/users/s-JoL/repos",
"events_url": "https://api.github.com/users/s-JoL/events{/privacy}",
"received_events_url": "https://api.github.com/users/s-JoL/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "tsabbir96",
"id": 49894149,
"node_id": "MDQ6VXNlcjQ5ODk0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsabbir96",
"html_url": "https://github.com/tsabbir96",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
"gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions",
"organizations_url": "https://api.github.com/users/tsabbir96/orgs",
"repos_url": "https://api.github.com/users/tsabbir96/repos",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsabbir96/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tsabbir96",
"id": 49894149,
"node_id": "MDQ6VXNlcjQ5ODk0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsabbir96",
"html_url": "https://github.com/tsabbir96",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
"gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions",
"organizations_url": "https://api.github.com/users/tsabbir96/orgs",
"repos_url": "https://api.github.com/users/tsabbir96/repos",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsabbir96/received_events",
"type": "User",
"site_admin": false
}
] | null | 8 | 2023-04-28T13:50:15 | 2023-07-28T22:08:18 | null | NONE | null | ### Feature request
Add an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
### Motivation
When training large language models, machine problems may interrupt the training process. In such cases, it is common to load a previously saved checkpoint to resume training. I would like to be able to obtain the names of the previously trained data shards, so that I can skip these parts of the data during continued training to avoid overfitting and redundant training time.
### Your contribution
I currently use a dataset in jsonl format, so I am primarily interested in the json format. I suggest adding the file name to the returned table here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5806/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5806/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5805/comments | https://api.github.com/repos/huggingface/datasets/issues/5805/events | https://github.com/huggingface/datasets/issues/5805 | 1,688,558,577 | I_kwDODunzps5kpVvx | 5,805 | Improve `Create a dataset` tutorial | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | 2 | 2023-04-28T13:26:22 | 2023-06-23T14:58:44 | null | CONTRIBUTOR | null | Our [tutorial on how to create a dataset](https://huggingface.co/docs/datasets/create_dataset) is a bit misleading.
1. In **Folder-based builders** section it says that we have two folder-based builders as standard builders, but we also have similar builders (that can be created from directory with data of required format) for `csv`, `json/jsonl`, `parquet` and `txt` files. We have info about these loaders in separate [guide for loading](https://huggingface.co/docs/datasets/loading#local-and-remote-files) but it's worth briefly mentioning them in the beginning tutorial because they are more common and for consistency. Would be helpful to add the link to the full guide.
2. **From local files** section lists methods for creating a dataset from in-memory data which are also described in [loading guide](https://huggingface.co/docs/datasets/loading#inmemory-data).
Maybe we should actually rethink and restructure this tutorial somehow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5805/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/5805/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5804/comments | https://api.github.com/repos/huggingface/datasets/issues/5804/events | https://github.com/huggingface/datasets/pull/5804 | 1,688,285,666 | PR_kwDODunzps5PX0Dk | 5,804 | Set dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-04-28T10:10:01 | 2023-04-28T10:18:51 | 2023-04-28T10:10:29 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5804/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5804",
"html_url": "https://github.com/huggingface/datasets/pull/5804",
"diff_url": "https://github.com/huggingface/datasets/pull/5804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5804.patch",
"merged_at": "2023-04-28T10:10:29"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5803/comments | https://api.github.com/repos/huggingface/datasets/issues/5803/events | https://github.com/huggingface/datasets/pull/5803 | 1,688,256,290 | PR_kwDODunzps5PXtte | 5,803 | Release: 2.12.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-04-28T09:52:11 | 2023-04-28T10:18:56 | 2023-04-28T09:54:43 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5803/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5803",
"html_url": "https://github.com/huggingface/datasets/pull/5803",
"diff_url": "https://github.com/huggingface/datasets/pull/5803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5803.patch",
"merged_at": "2023-04-28T09:54:43"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5802/comments | https://api.github.com/repos/huggingface/datasets/issues/5802/events | https://github.com/huggingface/datasets/pull/5802 | 1,686,509,799 | PR_kwDODunzps5PR199 | 5,802 | Validate non-empty data_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-04-27T09:51:36 | 2023-04-27T14:59:47 | 2023-04-27T14:51:40 | MEMBER | null | This PR adds validation of `data_files`, so that they are non-empty (str, list, or dict) or `None` (default).
See: https://github.com/huggingface/datasets/pull/5787#discussion_r1178862327 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5802/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5802/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5802",
"html_url": "https://github.com/huggingface/datasets/pull/5802",
"diff_url": "https://github.com/huggingface/datasets/pull/5802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5802.patch",
"merged_at": "2023-04-27T14:51:40"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5800/comments | https://api.github.com/repos/huggingface/datasets/issues/5800/events | https://github.com/huggingface/datasets/pull/5800 | 1,686,348,096 | PR_kwDODunzps5PRTRh | 5,800 | Change downloaded file permission based on umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-04-27T08:13:30 | 2023-04-27T09:33:05 | 2023-04-27T09:30:16 | MEMBER | null | This PR changes the permission of downloaded files to cache, so that the umask is taken into account.
Related to:
- #2157
Fix #5799.
CC: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5800/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5800",
"html_url": "https://github.com/huggingface/datasets/pull/5800",
"diff_url": "https://github.com/huggingface/datasets/pull/5800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5800.patch",
"merged_at": "2023-04-27T09:30:16"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5799/comments | https://api.github.com/repos/huggingface/datasets/issues/5799/events | https://github.com/huggingface/datasets/issues/5799 | 1,686,334,572 | I_kwDODunzps5kg2xs | 5,799 | Files downloaded to cache do not respect umask | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-04-27T08:06:05 | 2023-04-27T09:30:17 | 2023-04-27T09:30:17 | MEMBER | null | As reported by @stas00, files downloaded to the cache do not respect umask:
```bash
$ ls -l /path/to/cache/datasets/downloads/
-rw------- 1 uername username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6
```
Related to:
- #2065 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5799/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5798/comments | https://api.github.com/repos/huggingface/datasets/issues/5798/events | https://github.com/huggingface/datasets/issues/5798 | 1,685,904,526 | I_kwDODunzps5kfNyO | 5,798 | Support parallelized downloading and processing in load_dataset with Spark | {
"login": "es94129",
"id": 12763339,
"node_id": "MDQ6VXNlcjEyNzYzMzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/es94129",
"html_url": "https://github.com/es94129",
"followers_url": "https://api.github.com/users/es94129/followers",
"following_url": "https://api.github.com/users/es94129/following{/other_user}",
"gists_url": "https://api.github.com/users/es94129/gists{/gist_id}",
"starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/es94129/subscriptions",
"organizations_url": "https://api.github.com/users/es94129/orgs",
"repos_url": "https://api.github.com/users/es94129/repos",
"events_url": "https://api.github.com/users/es94129/events{/privacy}",
"received_events_url": "https://api.github.com/users/es94129/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 17 | 2023-04-27T00:16:11 | 2023-05-25T14:11:41 | null | CONTRIBUTOR | null | ### Feature request
When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes.
```python
load_dataset(..., use_spark=True)
```
### Motivation
Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes.
### Your contribution
I can submit a PR to support this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5798/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5797/comments | https://api.github.com/repos/huggingface/datasets/issues/5797/events | https://github.com/huggingface/datasets/issues/5797 | 1,685,501,199 | I_kwDODunzps5kdrUP | 5,797 | load_dataset is case sentitive? | {
"login": "haonan-li",
"id": 34729065,
"node_id": "MDQ6VXNlcjM0NzI5MDY1",
"avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/haonan-li",
"html_url": "https://github.com/haonan-li",
"followers_url": "https://api.github.com/users/haonan-li/followers",
"following_url": "https://api.github.com/users/haonan-li/following{/other_user}",
"gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}",
"starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions",
"organizations_url": "https://api.github.com/users/haonan-li/orgs",
"repos_url": "https://api.github.com/users/haonan-li/repos",
"events_url": "https://api.github.com/users/haonan-li/events{/privacy}",
"received_events_url": "https://api.github.com/users/haonan-li/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-04-26T18:19:04 | 2023-04-27T11:56:58 | null | NONE | null | ### Describe the bug
load_dataset() function is case sensitive?
### Steps to reproduce the bug
The following two code, get totally different behavior.
1. load_dataset('mbzuai/bactrian-x','en')
2. load_dataset('MBZUAI/Bactrian-X','en')
### Expected behavior
Compare 1 and 2.
1 will download all 52 subsets, shell output:
```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx```
2 will only download single subset, shell output
```Downloading and preparing dataset bactrian-x/en to xxx```
### Environment info
Python 3.10.11
datasets Version: 2.11.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5797/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5796/comments | https://api.github.com/repos/huggingface/datasets/issues/5796/events | https://github.com/huggingface/datasets/pull/5796 | 1,685,451,919 | PR_kwDODunzps5PORm- | 5,796 | Spark docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-04-26T17:39:43 | 2023-04-27T16:41:50 | 2023-04-27T16:34:45 | MEMBER | null | Added a "Use with Spark" doc page to document `Dataset.from_spark` following https://github.com/huggingface/datasets/pull/5701
cc @maddiedawson | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5796/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5796",
"html_url": "https://github.com/huggingface/datasets/pull/5796",
"diff_url": "https://github.com/huggingface/datasets/pull/5796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5796.patch",
"merged_at": "2023-04-27T16:34:45"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5795/comments | https://api.github.com/repos/huggingface/datasets/issues/5795/events | https://github.com/huggingface/datasets/pull/5795 | 1,685,414,505 | PR_kwDODunzps5POJo8 | 5,795 | Fix spark imports | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-04-26T17:09:32 | 2023-04-26T17:49:03 | 2023-04-26T17:39:12 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5795/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5795/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5795",
"html_url": "https://github.com/huggingface/datasets/pull/5795",
"diff_url": "https://github.com/huggingface/datasets/pull/5795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5795.patch",
"merged_at": "2023-04-26T17:39:12"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5794/comments | https://api.github.com/repos/huggingface/datasets/issues/5794/events | https://github.com/huggingface/datasets/issues/5794 | 1,685,196,061 | I_kwDODunzps5kcg0d | 5,794 | CI ZeroDivisionError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | 0 | 2023-04-26T14:55:23 | 2023-04-26T14:55:23 | null | MEMBER | null | Sometimes when running our CI on Windows, we get a ZeroDivisionError:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero
```
See for example:
- https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110
- https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688
```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1
def speed_metrics(split, start_time, num_samples=None, num_steps=None):
"""
Measure and return speed performance metrics.
This function requires a time snapshot `start_time` before the operation to be measured starts and this function
should be run immediately after the operation to be measured has completed.
Args:
- split: name to prefix metric (like train, eval, test...)
- start_time: operation start time
- num_samples: number of samples processed
"""
runtime = time.time() - start_time
result = {f"{split}_runtime": round(runtime, 4)}
if num_samples is not None:
> samples_per_second = num_samples / runtime
E ZeroDivisionError: float division by zero
C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5794/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5793/comments | https://api.github.com/repos/huggingface/datasets/issues/5793/events | https://github.com/huggingface/datasets/issues/5793 | 1,684,777,320 | I_kwDODunzps5ka6lo | 5,793 | IterableDataset.with_format("torch") not working | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | 1 | 2023-04-26T10:50:23 | 2023-06-13T15:57:06 | 2023-06-13T15:57:06 | NONE | null | ### Describe the bug
After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged.
### Steps to reproduce the bug
```python
from datasets import IterableDataset
def gen():
for i in range(4):
yield {"a": [i] * 4}
dataset = IterableDataset.from_generator(gen).with_format("torch")
next(iter(dataset))
```
### Expected behavior
`{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed.
### Environment info
```bash
platform==ubuntu 22.04.01
python==3.10.9
datasets==2.11.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5793/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5791/comments | https://api.github.com/repos/huggingface/datasets/issues/5791/events | https://github.com/huggingface/datasets/issues/5791 | 1,683,473,943 | I_kwDODunzps5kV8YX | 5,791 | TIFF/TIF support | {
"login": "sebasmos",
"id": 31293221,
"node_id": "MDQ6VXNlcjMxMjkzMjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/31293221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sebasmos",
"html_url": "https://github.com/sebasmos",
"followers_url": "https://api.github.com/users/sebasmos/followers",
"following_url": "https://api.github.com/users/sebasmos/following{/other_user}",
"gists_url": "https://api.github.com/users/sebasmos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sebasmos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sebasmos/subscriptions",
"organizations_url": "https://api.github.com/users/sebasmos/orgs",
"repos_url": "https://api.github.com/users/sebasmos/repos",
"events_url": "https://api.github.com/users/sebasmos/events{/privacy}",
"received_events_url": "https://api.github.com/users/sebasmos/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-04-25T16:14:18 | 2023-05-05T16:22:50 | null | NONE | null | ### Feature request
I currently have a dataset (with tiff and json files) where I have to do this:
`wget path_to_data/images.zip && unzip images.zip`
`wget path_to_data/annotations.zip && unzip annotations.zip`
Would it make sense a contribution that supports these type of files?
### Motivation
instead of using `load_dataset` have to use wget as these files are not supported for annotations with JSON and images with TIFF files.
Additionally to this, the PIL formatting from datasets does not read correctly the image channels with TIFF format, besides multichannel adaptation might be necessary as well (as my data e.g has more than 3 channels)
### Your contribution
1. Support TIFF images over multi channel format
2. Support JSON annotations | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5791/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5791/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-04-25T13:57:26 | 2023-04-26T13:43:08 | 2023-04-26T13:35:47 | MEMBER | null | This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow to make CI tests without opening a PR, e.g., for future `huggingface-hub` releases, future dependency releases (like `fsspec`, `pandas`,...)
Note that to build the documentation, we already allow it on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"merged_at": "2023-04-26T13:35:47"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/5789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5789/comments | https://api.github.com/repos/huggingface/datasets/issues/5789/events | https://github.com/huggingface/datasets/issues/5789 | 1,682,611,179 | I_kwDODunzps5kSpvr | 5,789 | Support streaming datasets that use jsonlines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-04-25T07:40:02 | 2023-04-25T07:40:03 | null | MEMBER | null | Extend support for streaming datasets that use `jsonlines.open`.
Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`:
```
FileNotFoundError: [Errno 2] No such file or directory: 'https://...'
```
See:
- https://huggingface.co/datasets/masakhane/afriqa/discussions/1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5789/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5789/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5788/comments | https://api.github.com/repos/huggingface/datasets/issues/5788/events | https://github.com/huggingface/datasets/pull/5788 | 1,681,136,256 | PR_kwDODunzps5O_v4B | 5,788 | Prepare tests for hfh 0.14 | {
"login": "Wauplin",
"id": 11801849,
"node_id": "MDQ6VXNlcjExODAxODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Wauplin",
"html_url": "https://github.com/Wauplin",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-04-24T12:13:03 | 2023-04-25T14:32:56 | 2023-04-25T14:25:30 | CONTRIBUTOR | null | Related to the coming release of `huggingface_hub==0.14.0`. It will break some internal tests. The PR fixes these tests. Let's double-check the CI but I expect the fixed tests to be running fine with both `hfh<=0.13.4` and `hfh==0.14`. Worth case scenario, existing PRs will have to be rebased once this fix is merged.
See related [discussion](https://huggingface.slack.com/archives/C02V5EA0A95/p1682337463368609?thread_ts=1681994202.635609&cid=C02V5EA0A95) (private slack).
cc @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/5788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/5788/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5788",
"html_url": "https://github.com/huggingface/datasets/pull/5788",
"diff_url": "https://github.com/huggingface/datasets/pull/5788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5788.patch",
"merged_at": "2023-04-25T14:25:30"
} | true |