url stringlengths 61-61 | repository_url stringclasses 1 value | labels_url stringlengths 75-75 | comments_url stringlengths 70-70 | events_url stringlengths 68-68 | html_url stringlengths 49-51 | id int64 1.68B-1.88B | node_id stringlengths 18-19 | number int64 5.79k-6.2k | title stringlengths 1-280 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone null | comments int64 0-44 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 3-17.6k ⌀ | reactions dict | timeline_url stringlengths 70-70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6203/comments | https://api.github.com/repos/huggingface/datasets/issues/6203/events | https://github.com/huggingface/datasets/issues/6203 | 1,877,491,602 | I_kwDODunzps5v6D-S | 6,203 | Support loading from a DVC remote repository | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T14:04:52 | 2023-09-01T14:04:52 | null | NONE | null | ### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on an SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible through the `DVCFileSystem`.
I have a GitLab repository where multiple files are tracked using DVC and stored in a GCP bucket. I would like to be able to load these files with `datasets` directly from a URL. My goal is to write generic code that abstracts the storage layer, such that my users only have to pass in an `fsspec`-compliant URL and the corresponding files will be loaded.
### Your contribution
I managed to instantiate a `DVCFileSystem` pointing to a Gitlab repo from a `fsspec` chained URL in [this pull request](https://github.com/iterative/dvc/pull/9903) to DVC.
```python
from fsspec.core import url_to_fs
fs, _ = url_to_fs("dvc::https://gitlab.com/repository/group/my-repo")
```
From here I'm not sure how to continue: it seems that `datasets` expects the URL to be fully qualified, like so: `dvc::https://gitlab.com/repository/group/my-repo/my-folder/my-file.json`, but this fails because `DVCFileSystem` expects the URL to point to the root of an SCM repo. Is there a way to make this work with `datasets`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6203/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6202/comments | https://api.github.com/repos/huggingface/datasets/issues/6202/events | https://github.com/huggingface/datasets/issues/6202 | 1,876,630,351 | I_kwDODunzps5v2xtP | 6,202 | avoid downgrading jax version | {
"login": "chrisflesher",
"id": 1332458,
"node_id": "MDQ6VXNlcjEzMzI0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisflesher",
"html_url": "https://github.com/chrisflesher",
"followers_url": "https://api.github.com/users/chrisflesher/followers",
"following_url": "https://api.github.com/users/chrisflesher/following{/other_user}",
"gists_url": "https://api.github.com/users/chrisflesher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/chrisflesher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chrisflesher/subscriptions",
"organizations_url": "https://api.github.com/users/chrisflesher/orgs",
"repos_url": "https://api.github.com/users/chrisflesher/repos",
"events_url": "https://api.github.com/users/chrisflesher/events{/privacy}",
"received_events_url": "https://api.github.com/users/chrisflesher/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-01T02:57:57 | 2023-09-01T02:58:53 | null | NONE | null | ### Feature request
Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13.
### Motivation
It would be nice not to overwrite the currently installed version of jax if possible.
### Your contribution
I would be willing to beta test, or maybe write some code if I could get pointed in the right direction; I'm not super familiar with this codebase. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6202/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6201/comments | https://api.github.com/repos/huggingface/datasets/issues/6201/events | https://github.com/huggingface/datasets/pull/6201 | 1,875,256,775 | PR_kwDODunzps5ZOVbV | 6,201 | Fix to_json ValueError and remove pandas pin | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-31T10:38:08 | 2023-08-31T14:08:51 | null | MEMBER | null | This PR fixes the root cause of the issue:
- #6197
This PR also removes the temporary pin of `pandas` introduced by:
- #6200
Note that for orient in ['records', 'values'], the index value is ignored, but:
- in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table']
  - for orient = 'records', we need index = True
  - default index value is True
- in `pandas` = 2.1.0, a ValueError is raised if index is True and orient in ['records', 'values']
  - for orient = 'records', we need index = False or None
  - default index value is None
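A minimal illustration of the version-dependent behaviour described above (sketch only; the file name is arbitrary):
```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

# pandas < 2.1.0: index=True is the default and is simply ignored for orient="records"
# pandas 2.1.0: the same call raises
#   ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'.
df.to_json("out.json", orient="records", index=True)
```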
This PR fixes the issue by not passing index and thus using default index value (valid for all pandas versions), unless orient is 'split' or 'table' (where we pass index = False, as it was done before this fix). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6201",
"html_url": "https://github.com/huggingface/datasets/pull/6201",
"diff_url": "https://github.com/huggingface/datasets/pull/6201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6201.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6200/comments | https://api.github.com/repos/huggingface/datasets/issues/6200/events | https://github.com/huggingface/datasets/pull/6200 | 1,875,169,551 | PR_kwDODunzps5ZOCee | 6,200 | Temporarily pin pandas < 2.1.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-31T09:45:17 | 2023-08-31T10:33:24 | 2023-08-31T10:24:38 | MEMBER | null | Temporarily pin `pandas` < 2.1.0 until permanent solution is found.
Hot fix #6197. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6200/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6200",
"html_url": "https://github.com/huggingface/datasets/pull/6200",
"diff_url": "https://github.com/huggingface/datasets/pull/6200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6200.patch",
"merged_at": "2023-08-31T10:24:38"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6199/comments | https://api.github.com/repos/huggingface/datasets/issues/6199/events | https://github.com/huggingface/datasets/issues/6199 | 1,875,165,185 | I_kwDODunzps5vxMAB | 6,199 | Use load_dataset for local json files, but it not works | {
"login": "Garen-in-bush",
"id": 50519434,
"node_id": "MDQ6VXNlcjUwNTE5NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Garen-in-bush",
"html_url": "https://github.com/Garen-in-bush",
"followers_url": "https://api.github.com/users/Garen-in-bush/followers",
"following_url": "https://api.github.com/users/Garen-in-bush/following{/other_user}",
"gists_url": "https://api.github.com/users/Garen-in-bush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Garen-in-bush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Garen-in-bush/subscriptions",
"organizations_url": "https://api.github.com/users/Garen-in-bush/orgs",
"repos_url": "https://api.github.com/users/Garen-in-bush/repos",
"events_url": "https://api.github.com/users/Garen-in-bush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Garen-in-bush/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 2 | 2023-08-31T09:42:34 | 2023-08-31T19:05:07 | null | NONE | null | ### Describe the bug
When I use `load_dataset` to load my local datasets, it always goes to Hugging Face to download the data instead of loading the local dataset.
### Steps to reproduce the bug
`raw_datasets = load_dataset('json', data_files=data_files)`
### Expected behavior
![image](https://github.com/huggingface/datasets/assets/50519434/add3747f-6481-4da7-b374-8f81c5a6472c)
### Environment info
python version 3.8.5
datasets version 2.12
os version Ubuntu 18.04 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6199/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6198/comments | https://api.github.com/repos/huggingface/datasets/issues/6198/events | https://github.com/huggingface/datasets/pull/6198 | 1,875,092,027 | PR_kwDODunzps5ZNyBq | 6,198 | Preserve split order in DataFilesDict | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-31T09:00:26 | 2023-08-31T13:57:31 | 2023-08-31T13:48:42 | MEMBER | null | After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556
This PR removes the alphabetical sorting of `DataFilesDict` keys.
- Note that for a `dict`, the order of keys is relevant when hashing:
```python
hash1 = Hasher.hash({'train': 'train.csv', 'test': 'test.csv'})
hash2 = Hasher.hash({'test': 'test.csv', 'train': 'train.csv'})
assert hash1 != hash2
```
- The `DataFilesDict` is a subclass of `dict`, thus the order should be relevant as well
```python
hash1 = Hasher.hash(DataFilesDict({'train': 'train.csv', 'test': 'test.csv'}))
hash2 = Hasher.hash(DataFilesDict({'test': 'test.csv', 'train': 'train.csv'}))
assert hash1 != hash2
```
Fix #6196. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6198/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6198",
"html_url": "https://github.com/huggingface/datasets/pull/6198",
"diff_url": "https://github.com/huggingface/datasets/pull/6198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6198.patch",
"merged_at": "2023-08-31T13:48:42"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6197/comments | https://api.github.com/repos/huggingface/datasets/issues/6197/events | https://github.com/huggingface/datasets/issues/6197 | 1,875,078,155 | I_kwDODunzps5vw2wL | 6,197 | ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns' | {
"login": "exs-avianello",
"id": 128361578,
"node_id": "U_kgDOB6akag",
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exs-avianello",
"html_url": "https://github.com/exs-avianello",
"followers_url": "https://api.github.com/users/exs-avianello/followers",
"following_url": "https://api.github.com/users/exs-avianello/following{/other_user}",
"gists_url": "https://api.github.com/users/exs-avianello/gists{/gist_id}",
"starred_url": "https://api.github.com/users/exs-avianello/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/exs-avianello/subscriptions",
"organizations_url": "https://api.github.com/users/exs-avianello/orgs",
"repos_url": "https://api.github.com/users/exs-avianello/repos",
"events_url": "https://api.github.com/users/exs-avianello/events{/privacy}",
"received_events_url": "https://api.github.com/users/exs-avianello/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 3 | 2023-08-31T08:51:50 | 2023-09-01T10:35:10 | 2023-08-31T10:24:40 | NONE | null | ### Describe the bug
Saving a dataset with `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`).
In their latest release we have:
> Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/reference/api/pandas.DataFrame.to_json.html#pandas.DataFrame.to_json) with incompatible index and orient arguments ([GH 52143](https://github.com/pandas-dev/pandas/issues/52143))
i.e. an error is now raised for invalid combinations of `index` and `orient`.
This means that unfortunately the custom logic at this line might sometimes lead to contradictions:
https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/io/json.py#L96
e.g. for the default case `orient=records` leads to `index=True`, which now raises a `ValueError`
### Steps to reproduce the bug
```python
import datasets
if __name__ == '__main__':
dataset = datasets.Dataset.from_dict({"A": [1, 2, 3], "B": [4, 5, 6]})
dataset.to_json("dataset.json")
```
```shell
>>>
ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns'.
```
### Expected behavior
The dataset is successfully saved as `.json`
### Environment info
`python >= 3.9`
`pandas >= 2.1.0` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6197/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6196/comments | https://api.github.com/repos/huggingface/datasets/issues/6196/events | https://github.com/huggingface/datasets/issues/6196 | 1,875,070,972 | I_kwDODunzps5vw0_8 | 6,196 | Split order is not preserved | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | 0 | 2023-08-31T08:47:16 | 2023-08-31T13:48:43 | 2023-08-31T13:48:43 | MEMBER | null | I have noticed that in some cases the split order is not preserved.
For example, consider a no-script dataset with configs:
```yaml
configs:
- config_name: default
data_files:
- split: train
path: train.csv
- split: test
path: test.csv
```
- Note the defined split order is [train, test]
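For reference, a loading call of the kind that produces the output below (the repository name is hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("username/no-script-dataset")  # repo containing the YAML config above
```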
Once the dataset is loaded, the split order is not preserved:
```python
In [16]: ds
Out[16]:
DatasetDict({
test: Dataset({
features: ['text', 'label'],
num_rows: 1
})
train: Dataset({
features: ['text', 'label'],
num_rows: 2
})
})
```
- Note the obtained split order is [test, train] | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6196/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6195/comments | https://api.github.com/repos/huggingface/datasets/issues/6195/events | https://github.com/huggingface/datasets/issues/6195 | 1,874,195,585 | I_kwDODunzps5vtfSB | 6,195 | Force to reuse cache at given path | {
"login": "Luosuu",
"id": 43507393,
"node_id": "MDQ6VXNlcjQzNTA3Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/43507393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luosuu",
"html_url": "https://github.com/Luosuu",
"followers_url": "https://api.github.com/users/Luosuu/followers",
"following_url": "https://api.github.com/users/Luosuu/following{/other_user}",
"gists_url": "https://api.github.com/users/Luosuu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Luosuu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luosuu/subscriptions",
"organizations_url": "https://api.github.com/users/Luosuu/orgs",
"repos_url": "https://api.github.com/users/Luosuu/repos",
"events_url": "https://api.github.com/users/Luosuu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Luosuu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 1 | 2023-08-30T18:44:54 | 2023-08-30T19:00:45 | 2023-08-30T19:00:45 | NONE | null | ### Describe the bug
I have run the official MLM example like this:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 \
--validation_split_percentage 0 \
--cache_dir /project/huggingface_cache/datasets \
--line_by_line \
--do_train \
--pad_to_max_length \
--output_dir /project/huggingface_cache/test-mlm
```
It runs successfully, and my cache folder contains `cache-1982fea76aa54a13_00001_of_00020.arrow` ... `cache-1982fea76aa54a13_00020_of_00020.arrow` as the tokenization cache of the `map` method. The cache works fine every time I run the command above.
However, when I switch to a Jupyter notebook (since I do not want to reload the dataset every time I change other parameters not related to data loading), it does not recognize the cache files and starts to re-run the entire tokenization process.
I changed my code to
```python
tokenized_datasets = raw_datasets["train"].map(
tokenize_function,
batched=True,
num_proc=data_args.preprocessing_num_workers,
remove_columns=[text_column_name],
load_from_cache_file=True,
desc="Running tokenizer on dataset line_by_line",
# cache_file_names= {"train": "cache-1982fea76aa54a13.arrow"}
cache_file_name="cache-1982fea76aa54a13.arrow",
new_fingerprint="1982fea76aa54a13"
)
```
it still does not recognize the previously cached files and tries to re-run the tokenization process.
### Steps to reproduce the bug
Use a Jupyter notebook for the dataset `map` function.
### Expected behavior
The `map` function accepts the given `cache_file_name` and `new_fingerprint` and then loads the previously cached files.
### Environment info
- `datasets` version: 2.14.4.dev0
- Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6195/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6194/comments | https://api.github.com/repos/huggingface/datasets/issues/6194/events | https://github.com/huggingface/datasets/issues/6194 | 1,872,598,223 | I_kwDODunzps5vnZTP | 6,194 | Support custom fingerprinting with `Dataset.from_generator` | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.com/users/bilelomrani1/followers",
"following_url": "https://api.github.com/users/bilelomrani1/following{/other_user}",
"gists_url": "https://api.github.com/users/bilelomrani1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bilelomrani1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bilelomrani1/subscriptions",
"organizations_url": "https://api.github.com/users/bilelomrani1/orgs",
"repos_url": "https://api.github.com/users/bilelomrani1/repos",
"events_url": "https://api.github.com/users/bilelomrani1/events{/privacy}",
"received_events_url": "https://api.github.com/users/bilelomrani1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-29T22:43:13 | 2023-08-30T17:33:21 | null | NONE | null | ### Feature request
When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`.
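A sketch of what the proposed API could look like (the `fingerprint` argument below is hypothetical and does not exist today; it mirrors the existing `new_fingerprint` argument of `.map`):
```python
from datasets import Dataset

def gen():
    # imagine this closes over a non-picklable object, so it cannot be hashed automatically
    yield {"text": "hello"}

# hypothetical: pass a manually chosen fingerprint instead of hashing the generator
ds = Dataset.from_generator(gen, fingerprint="my-manually-chosen-fingerprint")
```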
### Motivation
Using the `.from_generator` constructor with a non-picklable generator fails. By accepting a `fingerprint` argument to `.from_generator`, the user would have the opportunity to manually fingerprint the dataset and thus bypass the crash.
### Your contribution
If validated, I can try to submit a PR for this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6194/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"login": "riteshkumarumassedu",
"id": 43389071,
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshkumarumassedu",
"html_url": "https://github.com/riteshkumarumassedu",
"followers_url": "https://api.github.com/users/riteshkumarumassedu/followers",
"following_url": "https://api.github.com/users/riteshkumarumassedu/following{/other_user}",
"gists_url": "https://api.github.com/users/riteshkumarumassedu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/riteshkumarumassedu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riteshkumarumassedu/subscriptions",
"organizations_url": "https://api.github.com/users/riteshkumarumassedu/orgs",
"repos_url": "https://api.github.com/users/riteshkumarumassedu/repos",
"events_url": "https://api.github.com/users/riteshkumarumassedu/events{/privacy}",
"received_events_url": "https://api.github.com/users/riteshkumarumassedu/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-29T19:35:06 | 2023-08-31T19:47:29 | null | NONE | null | ### Describe the bug
The Hugging Face `datasets` library specifically looks for a `.py` file when loading a dataset via the loading-script approach, and it does not work with a `.pyc` file.
This becomes an issue when deploying in production, where we are restricted to using only `.pyc` files. Is there any workaround for this?
### Steps to reproduce the bug
1. Create a dataset loading script to read the custom data.
2. Compile the code to make sure that a `.pyc` file is created.
3. Delete the loading script and re-run the code. Usually, Python should make use of compiled `.pyc` files. However, in this case, the `datasets` library errors out with a message that it is unable to find the dataset loading script (a minimal repro sketch is shown below).
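A minimal repro sketch of the steps above (paths and names are hypothetical):
```python
import py_compile
from datasets import load_dataset

# step 2: produce a .pyc for the loading script
py_compile.compile("my_dataset/my_dataset.py", cfile="my_dataset/my_dataset.pyc")

# step 3: after deleting my_dataset/my_dataset.py, this fails because `datasets`
# only looks for the .py loading script, not the compiled .pyc
dataset = load_dataset("my_dataset")
```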
### Expected behavior
The code should make use of the `.pyc` file and run without any error.
### Environment info
NA | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6192/comments | https://api.github.com/repos/huggingface/datasets/issues/6192/events | https://github.com/huggingface/datasets/pull/6192 | 1,871,911,640 | PR_kwDODunzps5ZDGnI | 6,192 | Set minimal fsspec version requirement to 2023.1.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-29T15:23:41 | 2023-08-30T14:01:56 | 2023-08-30T13:51:32 | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"merged_at": "2023-08-30T13:51:32"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-29T13:05:04 | 2023-08-31T14:19:54 | 2023-08-31T13:50:00 | CONTRIBUTOR | null | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"merged_at": "2023-08-31T13:50:00"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6190/comments | https://api.github.com/repos/huggingface/datasets/issues/6190/events | https://github.com/huggingface/datasets/issues/6190 | 1,871,582,175 | I_kwDODunzps5vjhPf | 6,190 | `Invalid user token` even when correct user token is passed! | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/users/Vaibhavs10/followers",
"following_url": "https://api.github.com/users/Vaibhavs10/following{/other_user}",
"gists_url": "https://api.github.com/users/Vaibhavs10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Vaibhavs10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Vaibhavs10/subscriptions",
"organizations_url": "https://api.github.com/users/Vaibhavs10/orgs",
"repos_url": "https://api.github.com/users/Vaibhavs10/repos",
"events_url": "https://api.github.com/users/Vaibhavs10/events{/privacy}",
"received_events_url": "https://api.github.com/users/Vaibhavs10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-29T12:37:03 | 2023-08-29T13:01:10 | 2023-08-29T13:01:09 | MEMBER | null | ### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine, except, `common_voice`.
### Steps to reproduce the bug
https://github.com/Vaibhavs10/scratchpad/blob/main/cv_datasets_bug_repro.ipynb
### Expected behavior
It should work if the provided access token is valid (as it does for all the other datasets)
### Environment info
datasets version -> 2.14.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6190/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6189/comments | https://api.github.com/repos/huggingface/datasets/issues/6189/events | https://github.com/huggingface/datasets/pull/6189 | 1,871,569,855 | PR_kwDODunzps5ZB8Z9 | 6,189 | Don't alter input in Features.from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-29T12:29:47 | 2023-08-29T13:04:59 | 2023-08-29T12:52:48 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"merged_at": "2023-08-29T12:52:48"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6188/comments | https://api.github.com/repos/huggingface/datasets/issues/6188/events | https://github.com/huggingface/datasets/issues/6188 | 1,870,987,640 | I_kwDODunzps5vhQF4 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | {
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.com/users/namespace-Pt/followers",
"following_url": "https://api.github.com/users/namespace-Pt/following{/other_user}",
"gists_url": "https://api.github.com/users/namespace-Pt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/namespace-Pt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/namespace-Pt/subscriptions",
"organizations_url": "https://api.github.com/users/namespace-Pt/orgs",
"repos_url": "https://api.github.com/users/namespace-Pt/repos",
"events_url": "https://api.github.com/users/namespace-Pt/events{/privacy}",
"received_events_url": "https://api.github.com/users/namespace-Pt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-29T06:37:34 | 2023-08-30T13:37:14 | null | NONE | null | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error will be thrown:
```
ValueError: Schema and number of arrays unequal
```
This is because the empty batch does not comply with the schema of the other batches. I think an empty batch should be allowed to facilitate coding (so that one does not need to assign an empty list manually for all keys).
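A sketch of the kind of batched `map` described above (the column name and filtering condition are made up; per the issue, a batch that comes back entirely empty currently fails with the error shown above):
```python
def process_fn(batch):
    # augmentation/filtering: for some batches no example survives,
    # so every column ends up as an empty list (an "empty batch")
    keep = [len(text) > 512 for text in batch["text"]]
    return {key: [v for v, k in zip(values, keep) if k] for key, values in batch.items()}

dataset.map(process_fn, batched=True)
```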
A simple fix is to check the length of `batch` before writing:
```
if len(batch):
writer.write_batch(batch)
```
instead of
https://github.com/huggingface/datasets/blob/74d60213dcbd7c99484c62ce1d3dfd90a1df0770/src/datasets/arrow_dataset.py#L3493
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6188/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6187/comments | https://api.github.com/repos/huggingface/datasets/issues/6187/events | https://github.com/huggingface/datasets/issues/6187 | 1,870,936,143 | I_kwDODunzps5vhDhP | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"repos_url": "https://api.github.com/users/andysingal/repos",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-29T05:49:56 | 2023-08-29T16:21:45 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
### Steps to reproduce the bug
```
data_files = {
"train": "/content/PUBHEALTH/train.tsv",
"validation": "/content/PUBHEALTH/dev.tsv",
"test": "/content/PUBHEALTH/test.tsv",
}
tsv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
tsv_datasets_reloaded
```
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-48-6a7b3e847019> in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_dataset("tsv", data_files=data_files)
8 csv_datasets_reloaded
2 frames
/usr/local/lib/python3.10/dist-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1489 raise e1 from None
1490 if isinstance(e1, FileNotFoundError):
-> 1491 raise FileNotFoundError(
1492 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1493 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Dataset 'tsv' doesn't exist on the Hub
```
### Expected behavior
load the data, push to hub
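For tab-separated files, loading typically goes through the `csv` builder with a tab separator, since there is no packaged `tsv` builder (sketch, reusing the same paths as above):
```python
from datasets import load_dataset

data_files = {
    "train": "/content/PUBHEALTH/train.tsv",
    "validation": "/content/PUBHEALTH/dev.tsv",
    "test": "/content/PUBHEALTH/test.tsv",
}
tsv_datasets = load_dataset("csv", data_files=data_files, sep="\t")
```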
### Environment info
jupyter notebook RTX 3090 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6187/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6186/comments | https://api.github.com/repos/huggingface/datasets/issues/6186/events | https://github.com/huggingface/datasets/issues/6186 | 1,869,431,457 | I_kwDODunzps5vbUKh | 6,186 | Feature request: add code example of multi-GPU processing | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 2 | 2023-08-28T10:00:59 | 2023-08-30T13:34:14 | null | CONTRIBUTOR | null | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs have a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here"; however, it didn't work for me out of the box.
Let's say you have a PyTorch model that can do translation, and you have multiple GPUs. In that case, you'd like to duplicate the model on each GPU, each processing (translating) a chunk of the data in parallel.
Here's how I tried to do that:
```
from datasets import load_dataset
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from multiprocess import set_start_method
import torch
import os
dataset = load_dataset("mlfoundations/datacomp_small")
tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
# put model on each available GPU
# also, should I do it like this or use nn.DataParallel?
model.to("cuda:0")
model.to("cuda:1")
set_start_method("spawn")
def translate_captions(batch, rank):
os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())
texts = batch["text"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt").to(model.device)
translated_tokens = model.generate(
**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
)
translated_texts = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
batch["translated_text"] = translated_texts
return batch
updated_dataset = dataset.map(translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256)
```
I've personally tried running this script on a machine with 2 A100 GPUs.
## Error 1
Running the code snippet above from the terminal (python script.py) resulted in the following error:
```
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 125, in _main
prepare(preparation_data)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 236, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 287, in _fixup_main_from_path
main_content = runpy.run_path(main_path,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 289, in run_path
return _run_module_code(code, init_globals, run_name,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 96, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/home/niels/python_projects/datacomp/datasets_multi_gpu.py", line 16, in <module>
set_start_method("spawn")
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 247, in set_start_method
raise RuntimeError('context has already been set')
RuntimeError: context has already been set
```
## Error 2
Then, based on [this Stack Overflow answer](https://stackoverflow.com/a/71616344/7762882), I put the `set_start_method("spawn")` call in a try/except block. This resulted in the following error:
```
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/dataset_dict.py", line 817, in <dictcomp>
k: dataset.map(
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2926, in map
with Pool(nb_of_missing_shards, initargs=initargs, initializer=initializer) as pool:
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 215, in __init__
self._repopulate_pool()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/process.py", line 121, in start
self._popen = self._Popen(self)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/context.py", line 288, in _Popen
return Popen(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/popen_spawn_posix.py", line 42, in _launch
prep_data = spawn.get_preparation_data(process_obj._name)
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 154, in get_preparation_data
_check_not_importing_main()
File "/home/niels/anaconda3/envs/datacomp/lib/python3.10/site-packages/multiprocess/spawn.py", line 134, in _check_not_importing_main
raise RuntimeError('''
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
```
So then I put the last line under an `if __name__ == '__main__':` block. The code snippet then seemed to work, but it appeared to leverage only a single GPU (based on monitoring `nvidia-smi`):
```
Mon Aug 28 12:19:24 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A100-SXM... On | 00000000:01:00.0 Off | 0 |
| N/A 55C P0 76W / 275W | 8747MiB / 81920MiB | 0% Default |
| | | Disabled |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A100-SXM... On | 00000000:47:00.0 Off | 0 |
| N/A 67C P0 274W / 275W | 59835MiB / 81920MiB | 100% Default |
| | | Disabled |
```
Both GPUs should show roughly equal usage, but I've consistently noticed that the last GPU has far more usage than the others. This made me think that `os.environ["CUDA_VISIBLE_DEVICES"] = str(rank % torch.cuda.device_count())` might not work inside a Python script, especially when it is set after PyTorch has already been imported.
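For what it's worth, here is a minimal sketch of the variant I would try next (untested, and reusing the `dataset`, `tokenizer`, and `model` objects from the snippet above): it moves the model to the rank-local device inside the worker instead of relying on `CUDA_VISIBLE_DEVICES`, which as far as I understand has no effect once CUDA is already initialized.
```python
# Sketch only: pick the device from the worker rank rather than environment variables.
def translate_captions(batch, rank):
    device = f"cuda:{rank % torch.cuda.device_count()}"
    model.to(device)
    inputs = tokenizer(batch["text"], padding=True, truncation=True, return_tensors="pt").to(device)
    translated_tokens = model.generate(
        **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=30
    )
    batch["translated_text"] = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
    return batch

if __name__ == "__main__":
    updated_dataset = dataset.map(
        translate_captions, with_rank=True, num_proc=2, batched=True, batch_size=256
    )
```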
### Motivation
Would be great to clarify how to do multi-GPU data processing.
### Your contribution
If my code snippet can be fixed, I can contribute it to the docs :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6186/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6185/comments | https://api.github.com/repos/huggingface/datasets/issues/6185/events | https://github.com/huggingface/datasets/issues/6185 | 1,868,077,748 | I_kwDODunzps5vWJq0 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | {
"login": "HaozheZhao",
"id": 14247682,
"node_id": "MDQ6VXNlcjE0MjQ3Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaozheZhao",
"html_url": "https://github.com/HaozheZhao",
"followers_url": "https://api.github.com/users/HaozheZhao/followers",
"following_url": "https://api.github.com/users/HaozheZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/HaozheZhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/HaozheZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HaozheZhao/subscriptions",
"organizations_url": "https://api.github.com/users/HaozheZhao/orgs",
"repos_url": "https://api.github.com/users/HaozheZhao/repos",
"events_url": "https://api.github.com/users/HaozheZhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/HaozheZhao/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-26T12:15:57 | 2023-08-29T14:49:58 | null | NONE | null | ### Describe the bug
I am using the `ArrowWriter` from `datasets.arrow_writer` to save a JSON-style dictionary as Arrow files. The dictionary contains a feature called "image", which is a list of PIL.Image objects.
I am saving the json using the following script:
```
def save_to_arrow(path,temp):
with ArrowWriter(path=path,writer_batch_size=20) as writer:
writer.write_batch(temp)
writer.finalize()
```
However, when I attempt to restore the dataset and use the ```Dataset.from_file(path)``` function to load the arrow file, there seems to be an issue with the PIL.Image object in the dataset. The list of PIL.Images appears as follows rather than a normal PIL.Image object:
![1693051705440](https://github.com/huggingface/datasets/assets/14247682/03b204c2-d0fa-4d19-beff-6f4d7b83c848)
### Steps to reproduce the bug
1. Storing the data json into arrow files:
```
def save_to_arrow(path,temp):
with ArrowWriter(path=path,writer_batch_size=20) as writer:
writer.write_batch(temp)
writer.finalize()
save_to_arrow( path, json_file )
```
2. try to load the arrow file into the Dataset object using the ```Dataset.from_file(path)```
### Expected behavior
I expect to save the contained "image" feature as a list of PIL.Image objects in the Arrow file, and to be able to restore the dataset from that file.
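For clarity, this is a sketch of what I assume the round trip needs (assuming `ArrowWriter` accepts a `features` argument, as its signature suggests; the key names are just illustrations of my data):
```python
# Hedged sketch: declaring the feature types should make the images get encoded on write
# and decoded back to PIL.Image objects when the dataset is restored with Dataset.from_file.
from datasets import Dataset, Features, Image, Sequence, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"image": Sequence(Image()), "text": Value("string")})

def save_to_arrow(path, temp):
    with ArrowWriter(path=path, writer_batch_size=20, features=features) as writer:
        writer.write_batch(temp)
        writer.finalize()

# restored = Dataset.from_file(path)  # "image" should then decode to PIL.Image objects
```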
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-5.4.0-150-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.4.4 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6185/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6184/comments | https://api.github.com/repos/huggingface/datasets/issues/6184/events | https://github.com/huggingface/datasets/issues/6184 | 1,867,766,143 | I_kwDODunzps5vU9l_ | 6,184 | Map cache does not detect function changes in another module | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | 2 | 2023-08-25T22:59:14 | 2023-08-29T20:57:07 | 2023-08-29T20:56:49 | NONE | null | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
with open('/tmp/test.json', 'w') as file:
file.write('[{"text": "hello"}]')
def transform(example):
text = example['text']
# text += ' world'
return {'text': text}
data = datasets.load_dataset('json', data_files=['/tmp/test.json'], split='train')
data = data.map(transform)
```
```python
# test.py
import dataset
print(next(iter(dataset.data)))
```
Initialize cache
```
python3 test.py
# {'text': 'hello'}
```
Edit dataset.py and uncomment the commented line, run again
```
python3 test.py
# {'text': 'hello'}
# expected: {'text': 'hello world'}
```
Clear cache and run again
```
rm -rf ~/.cache/huggingface/datasets/*
python3 test.py
# {'text': 'hello world'}
```
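A user-side workaround I'm considering (hedged sketch; it relies on `Dataset.map` accepting a `new_fingerprint` argument) is to derive the fingerprint from the transform's source code, so that editing it in another module invalidates the cache:
```python
# dataset.py (sketch): tie the map fingerprint to the transform's source code
import hashlib
import inspect

def source_fingerprint(fn):
    return hashlib.sha256(inspect.getsource(fn).encode("utf-8")).hexdigest()[:32]

data = data.map(transform, new_fingerprint=source_fingerprint(transform))
```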
If instead the two files are combined, then changes to the function are detected correctly. But it's expected when working on any realistic codebase that things will be modularized into separate files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6184/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6183/comments | https://api.github.com/repos/huggingface/datasets/issues/6183/events | https://github.com/huggingface/datasets/issues/6183 | 1,867,743,276 | I_kwDODunzps5vU4As | 6,183 | Load dataset with non-existent file | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https://api.github.com/users/freQuensy23-coder/followers",
"following_url": "https://api.github.com/users/freQuensy23-coder/following{/other_user}",
"gists_url": "https://api.github.com/users/freQuensy23-coder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/freQuensy23-coder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/freQuensy23-coder/subscriptions",
"organizations_url": "https://api.github.com/users/freQuensy23-coder/orgs",
"repos_url": "https://api.github.com/users/freQuensy23-coder/repos",
"events_url": "https://api.github.com/users/freQuensy23-coder/events{/privacy}",
"received_events_url": "https://api.github.com/users/freQuensy23-coder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-25T22:21:22 | 2023-08-29T13:26:22 | 2023-08-29T13:26:22 | NONE | null | ### Describe the bug
When loading a dataset with `datasets` and passing a wrong path to the JSON data file, the error message says nothing about a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('json', data_files='/home/alexey/unreal_file.json')
```
### Expected behavior
Raise an OS `FileNotFoundError` or a custom error with an informative message.
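Something along these lines is what I had in mind (a sketch of the check I expected, not of any existing `datasets` option):
```python
import os
from datasets import load_dataset

path = "/home/alexey/unreal_file.json"
if not os.path.exists(path):
    raise FileNotFoundError(f"Data file not found: {path}")  # the kind of message I expected
dataset = load_dataset("json", data_files=path)
```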
### Environment info
```
# packages in environment at /home/alexey/.conda/envs/alex_LoRA:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
accelerate 0.21.0 pypi_0 pypi
aiohttp 3.8.5 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
antlr4-python3-runtime 4.9.3 pypi_0 pypi
appdirs 1.4.4 pypi_0 pypi
asttokens 2.0.5 pyhd3eb1b0_0
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
bitsandbytes 0.41.1 pypi_0 pypi
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.05.30 h06a4308_0
certifi 2023.7.22 pypi_0 pypi
charset-normalizer 3.2.0 pypi_0 pypi
click 8.1.6 pypi_0 pypi
cmake 3.27.2 pypi_0 pypi
comm 0.1.2 py310h06a4308_0
contourpy 1.1.0 pypi_0 pypi
cycler 0.11.0 pypi_0 pypi
datasets 2.14.4 pypi_0 pypi
debugpy 1.6.7 py310h6a678d5_0
decorator 5.1.1 pyhd3eb1b0_0
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 pypi_0 pypi
executing 0.8.3 pyhd3eb1b0_0
filelock 3.12.2 pypi_0 pypi
fire 0.5.0 pypi_0 pypi
fonttools 4.42.0 pypi_0 pypi
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.6.0 pypi_0 pypi
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.32 pypi_0 pypi
huggingface-hub 0.16.4 pypi_0 pypi
idna 3.4 pypi_0 pypi
ipykernel 6.25.0 py310h2f386ee_0
ipython 8.12.2 py310h06a4308_0
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.4 py310h06a4308_0
jedi 0.18.1 py310h06a4308_1
jinja2 3.1.2 pypi_0 pypi
jsonschema 4.19.0 pypi_0 pypi
jsonschema-specifications 2023.7.1 pypi_0 pypi
jupyter_client 8.1.0 py310h06a4308_0
jupyter_core 5.3.0 py310h06a4308_0
jupyterlab_widgets 3.0.5 py310h06a4308_0
kiwisolver 1.4.4 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libffi 3.3 he6710b0_2
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libsodium 1.0.18 h7b6447c_0
libstdcxx-ng 11.2.0 h1234567_1
libuuid 1.41.5 h5eee18b_0
lightning-utilities 0.9.0 pypi_0 pypi
lit 16.0.6 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
matplotlib 3.7.2 pypi_0 pypi
matplotlib-inline 0.1.6 py310h06a4308_0
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
nbformat 4.2.0 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py310h06a4308_0
networkx 3.1 pypi_0 pypi
numpy 1.25.2 pypi_0 pypi
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
omegaconf 2.3.0 pypi_0 pypi
openssl 1.1.1v h7f8727e_0
packaging 23.0 py310h06a4308_0
pandas 2.0.3 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathtools 0.1.2 pypi_0 pypi
peft 0.4.0 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 10.0.0 pypi_0 pypi
pip 23.2.1 py310h06a4308_0
platformdirs 2.5.2 py310h06a4308_0
plotly 5.16.1 pypi_0 pypi
prompt-toolkit 3.0.36 py310h06a4308_0
protobuf 4.24.0 pypi_0 pypi
psutil 5.9.0 py310h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 12.0.1 pypi_0 pypi
pygments 2.15.1 py310h06a4308_1
pyparsing 3.0.9 pypi_0 pypi
python 3.10.0 h12debd9_5
python-dateutil 2.8.2 pyhd3eb1b0_0
pytorch-lightning 2.0.6 pypi_0 pypi
pytz 2023.3 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
pyzmq 25.1.0 py310h6a678d5_0
readline 8.2 h5eee18b_0
referencing 0.30.2 pypi_0 pypi
regex 2023.8.8 pypi_0 pypi
requests 2.31.0 pypi_0 pypi
rpds-py 0.9.2 pypi_0 pypi
safetensors 0.3.2 pypi_0 pypi
scipy 1.11.1 pypi_0 pypi
sentencepiece 0.1.99 pypi_0 pypi
sentry-sdk 1.29.2 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 68.0.0 py310h06a4308_0
six 1.16.0 pyhd3eb1b0_1
smmap 5.0.0 pypi_0 pypi
sqlite 3.41.2 h5eee18b_0
stack_data 0.2.0 pyhd3eb1b0_0
sympy 1.12 pypi_0 pypi
tenacity 8.2.3 pypi_0 pypi
termcolor 2.3.0 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.13.3 pypi_0 pypi
torch 2.0.1 pypi_0 pypi
torchmetrics 1.0.3 pypi_0 pypi
tornado 6.3.2 py310h5eee18b_0
tqdm 4.66.1 pypi_0 pypi
traitlets 5.7.1 py310h06a4308_0
transformers 4.31.0 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
typing-extensions 4.7.1 pypi_0 pypi
tzdata 2023.3 pypi_0 pypi
urllib3 2.0.4 pypi_0 pypi
wandb 0.15.8 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
wheel 0.38.4 py310h06a4308_0
widgetsnbextension 4.0.5 py310h06a4308_0
xxhash 3.3.0 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yarl 1.9.2 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zlib 1.2.13 h5eee18b_0
active environment : None
user config file : /home/alexey/.condarc
populated config files :
conda version : 23.1.0
conda-build version : 3.22.0
python version : 3.9.13.final.0
virtual packages : __archspec=1=x86_64
__cuda=12.0=0
__glibc=2.35=0
__linux=5.19.0=0
__unix=0=0
base environment : /opt/anaconda/anaconda3 (read only)
conda av data dir : /opt/anaconda/anaconda3/etc/conda
conda av metadata url : None
channel URLs : https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /opt/anaconda/anaconda3/pkgs
/home/alexey/.conda/pkgs
envs directories : /home/alexey/.conda/envs
/opt/anaconda/anaconda3/envs
platform : linux-64
user-agent : conda/23.1.0 requests/2.31.0 CPython/3.9.13 Linux/5.19.0-46-generic ubuntu/22.04.2 glibc/2.35
UID:GID : 1009:1009
netrc file : /home/alexey/.netrc
offline mode : False
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6183/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"login": "dsashulya",
"id": 42322648,
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsashulya",
"html_url": "https://github.com/dsashulya",
"followers_url": "https://api.github.com/users/dsashulya/followers",
"following_url": "https://api.github.com/users/dsashulya/following{/other_user}",
"gists_url": "https://api.github.com/users/dsashulya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dsashulya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dsashulya/subscriptions",
"organizations_url": "https://api.github.com/users/dsashulya/orgs",
"repos_url": "https://api.github.com/users/dsashulya/repos",
"events_url": "https://api.github.com/users/dsashulya/events{/privacy}",
"received_events_url": "https://api.github.com/users/dsashulya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-25T14:54:06 | 2023-09-01T18:51:12 | 2023-08-31T14:38:23 | NONE | null | ### Describe the bug
When using Python 3.9 and the ```evaluate``` module, loading the Meteor metric crashes on a non-existent import from ```datasets.config``` in ```datasets v2.14```.
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from datasets.config import importlib_metadata, version
ImportError: cannot import name 'importlib_metadata' from 'datasets.config' (<path_to_project>/venv/lib/python3.9/site-packages/datasets/config.py)
```
### Expected behavior
```datasets``` of v2.10 has the following workaround in ```config.py```:
```
if PY_VERSION < version.parse("3.8"):
import importlib_metadata
else:
import importlib.metadata as importlib_metadata
```
However, it's absent in v2.14 which might be the cause of the issue.
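As a temporary, hedged workaround (untested; it assumes the metric script only needs these two names from `datasets.config`), the missing attributes could be restored before loading the metric:
```python
# Monkeypatch sketch -- purely illustrative, not a proper fix
import importlib.metadata as importlib_metadata
from packaging import version

import datasets.config
datasets.config.importlib_metadata = importlib_metadata
datasets.config.version = version

from evaluate import load
meteor = load("meteor")
```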
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.9.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
- Evaluate version: 0.4.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6181/comments | https://api.github.com/repos/huggingface/datasets/issues/6181/events | https://github.com/huggingface/datasets/pull/6181 | 1,867,035,522 | PR_kwDODunzps5Yy2VO | 6,181 | Fix import in `image_load` doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-25T13:12:19 | 2023-08-25T16:12:46 | 2023-08-25T16:02:24 | CONTRIBUTOR | null | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6181",
"html_url": "https://github.com/huggingface/datasets/pull/6181",
"diff_url": "https://github.com/huggingface/datasets/pull/6181.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6181.patch",
"merged_at": "2023-08-25T16:02:24"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-25T13:10:26 | 2023-08-25T16:58:02 | 2023-08-25T16:46:22 | CONTRIBUTOR | null | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"merged_at": "2023-08-25T16:46:22"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/users/jonathanasdf/followers",
"following_url": "https://api.github.com/users/jonathanasdf/following{/other_user}",
"gists_url": "https://api.github.com/users/jonathanasdf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jonathanasdf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonathanasdf/subscriptions",
"organizations_url": "https://api.github.com/users/jonathanasdf/orgs",
"repos_url": "https://api.github.com/users/jonathanasdf/repos",
"events_url": "https://api.github.com/users/jonathanasdf/events{/privacy}",
"received_events_url": "https://api.github.com/users/jonathanasdf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-25T12:55:18 | 2023-08-31T15:17:24 | null | NONE | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
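As a stopgap (hedged sketch; `cache_file_name` is an existing `map` argument as far as I can tell), pinning the cache file explicitly sidesteps the unstable tokenizer hash -- the repro of the hash instability follows below:
```python
# Illustrative only: tokenize_function and dataset are placeholders for my real objects.
tokenized = dataset.map(
    tokenize_function,
    batched=True,
    cache_file_name="./cache/tokenized.arrow",  # fixed name, so the tokenizer hash no longer matters
)
```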
setup
```
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained('bert-base-uncased').save_pretrained("tok")
```
This prints a different value each time:
```
from transformers import AutoTokenizer
from datasets.utils.py_utils import dumps # Huggingface datasets
print(hash(dumps(AutoTokenizer.from_pretrained("tok"))))
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6178/comments | https://api.github.com/repos/huggingface/datasets/issues/6178/events | https://github.com/huggingface/datasets/issues/6178 | 1,866,610,102 | I_kwDODunzps5vQjW2 | 6,178 | 'import datasets' throws "invalid syntax error" | {
"login": "elia-ashraf",
"id": 128580829,
"node_id": "U_kgDOB6n83Q",
"avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elia-ashraf",
"html_url": "https://github.com/elia-ashraf",
"followers_url": "https://api.github.com/users/elia-ashraf/followers",
"following_url": "https://api.github.com/users/elia-ashraf/following{/other_user}",
"gists_url": "https://api.github.com/users/elia-ashraf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elia-ashraf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elia-ashraf/subscriptions",
"organizations_url": "https://api.github.com/users/elia-ashraf/orgs",
"repos_url": "https://api.github.com/users/elia-ashraf/repos",
"events_url": "https://api.github.com/users/elia-ashraf/events{/privacy}",
"received_events_url": "https://api.github.com/users/elia-ashraf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-25T08:35:14 | 2023-08-29T14:57:17 | null | NONE | null | ### Describe the bug
Hi,
I have been trying to import the datasets library but I keep gtting this error.
`Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
Cell In[2], line 1
import datasets
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/__init__.py:22
from .arrow_dataset import Dataset
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_dataset.py:67
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/arrow_writer.py:27
from .features import Features, Image, Value
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/__init__.py:17
from .audio import Audio
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/features/audio.py:11
from ..download.streaming_download_manager import xopen, xsplitext
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/__init__.py:10
from .streaming_download_manager import StreamingDownloadManager
File /opt/local/jupyterhub/lib64/python3.9/site-packages/datasets/download/streaming_download_manager.py:18
from aiohttp.client_exceptions import ClientError
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/__init__.py:7
from .connector import * # noqa
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/connector.py:12
from .client import ClientRequest
File /opt/local/jupyterhub/lib64/python3.9/site-packages/aiohttp/client.py:144
yield from asyncio.async(resp.release(), loop=loop)
^
SyntaxError: invalid syntax`
I have simply used these commands:
`import datasets`
and
`from datasets import load_dataset`
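From the traceback, the failure happens inside `aiohttp`, not in `datasets` itself: `asyncio.async(...)` is a syntax error on Python 3.7+ because `async` became a reserved keyword, which suggests a very old `aiohttp` is installed in this environment. A quick check (sketch, without importing the broken package):
```python
import importlib.metadata
print(importlib.metadata.version("aiohttp"))  # a 1.x/2.x version here would explain the SyntaxError
```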
### Environment info
The library is installed on a virtual machine running JupyterHub. Although I have used it many times before (on the same VM) to train/test ASR and other ML models, I had never encountered this error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6178/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6177/comments | https://api.github.com/repos/huggingface/datasets/issues/6177/events | https://github.com/huggingface/datasets/pull/6177 | 1,865,490,962 | PR_kwDODunzps5Ytky- | 6,177 | Use object detection images from `huggingface/documentation-images` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-24T16:16:09 | 2023-08-25T16:30:00 | 2023-08-25T16:21:17 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"merged_at": "2023-08-25T16:21:17"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6176/comments | https://api.github.com/repos/huggingface/datasets/issues/6176/events | https://github.com/huggingface/datasets/issues/6176 | 1,864,436,408 | I_kwDODunzps5vIQq4 | 6,176 | how to limit the size of memory mapped file? | {
"login": "williamium3000",
"id": 47763855,
"node_id": "MDQ6VXNlcjQ3NzYzODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamium3000",
"html_url": "https://github.com/williamium3000",
"followers_url": "https://api.github.com/users/williamium3000/followers",
"following_url": "https://api.github.com/users/williamium3000/following{/other_user}",
"gists_url": "https://api.github.com/users/williamium3000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/williamium3000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/williamium3000/subscriptions",
"organizations_url": "https://api.github.com/users/williamium3000/orgs",
"repos_url": "https://api.github.com/users/williamium3000/repos",
"events_url": "https://api.github.com/users/williamium3000/events{/privacy}",
"received_events_url": "https://api.github.com/users/williamium3000/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-24T05:33:45 | 2023-08-26T05:09:56 | null | NONE | null | ### Describe the bug
Hugging Face datasets use memory-mapped files to map large datasets into memory for fast access.
However, it seems that the library ends up occupying all of the memory for its memory-mapped files. This is troublesome because our cluster only allots a small portion of memory to my job (once the limit is exceeded, no more memory can be allocated), yet when the dataset checks the total memory, all of the machine's memory is taken into account, so the library tries to allocate more memory than the job is allowed.
So is there a way to explicitly limit the size of the memory-mapped file?
### Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("c4", "en", streaming=True)
```
### Expected behavior
In a normal environment this would not be a problem.
However, when the system allots only a portion of the machine's memory to the program, the dataset still checks the machine's total memory, so it tries to allocate more memory than the job is allowed.
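For what it's worth, this is how I'm measuring it (sketch; `psutil` field availability varies by platform) -- the touched memory-mapped Arrow pages show up in the process totals even though they are reclaimable page cache:
```python
import os
import psutil

proc = psutil.Process(os.getpid())
mem = proc.memory_full_info()
# rss includes touched memory-mapped pages; uss is the memory unique to this process
print("rss:", mem.rss, "shared:", getattr(mem, "shared", None), "uss:", mem.uss)
```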
### Environment info
linux cluster with SGE(Sun Grid Engine) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6176/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6175/comments | https://api.github.com/repos/huggingface/datasets/issues/6175/events | https://github.com/huggingface/datasets/pull/6175 | 1,863,592,678 | PR_kwDODunzps5YnKlx | 6,175 | PyArrow 13 CI fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-23T15:45:53 | 2023-08-25T13:15:59 | 2023-08-25T13:06:52 | CONTRIBUTOR | null | Fixes:
* bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always return `datetime [ns]` objects)
Fix #6173
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6175/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6175",
"html_url": "https://github.com/huggingface/datasets/pull/6175",
"diff_url": "https://github.com/huggingface/datasets/pull/6175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6175.patch",
"merged_at": "2023-08-25T13:06:52"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6173/comments | https://api.github.com/repos/huggingface/datasets/issues/6173/events | https://github.com/huggingface/datasets/issues/6173 | 1,863,422,065 | I_kwDODunzps5vEZBx | 6,173 | Fix CI for pyarrow 13.0.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 0 | 2023-08-23T14:11:20 | 2023-08-25T13:06:53 | 2023-08-25T13:06:53 | MEMBER | null | pyarrow 13.0.0 just came out
```
FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
```
e.g. in https://github.com/huggingface/datasets/actions/runs/5952253963/job/16143847230
first error may be related to https://github.com/apache/arrow/issues/33321
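Minimal repro sketch for the first failure (assuming it is the Arrow 13 change that stops coercing timestamps to nanoseconds by default):
```python
import pyarrow as pa

table = pa.table({"t": pa.array([0], type=pa.timestamp("us", tz="UTC"))})
print(table.to_pandas().dtypes)                                  # datetime64[us, UTC] on pyarrow 13
print(table.to_pandas(coerce_temporal_nanoseconds=True).dtypes)  # datetime64[ns, UTC]
```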
second one maybe because `feature.length * len(array) == len(array_values)` is not satisfied anymore somehow ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6173/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6173/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6172/comments | https://api.github.com/repos/huggingface/datasets/issues/6172/events | https://github.com/huggingface/datasets/issues/6172 | 1,863,318,027 | I_kwDODunzps5vD_oL | 6,172 | Make Dataset streaming queries retryable | {
"login": "rojagtap",
"id": 42299342,
"node_id": "MDQ6VXNlcjQyMjk5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rojagtap",
"html_url": "https://github.com/rojagtap",
"followers_url": "https://api.github.com/users/rojagtap/followers",
"following_url": "https://api.github.com/users/rojagtap/following{/other_user}",
"gists_url": "https://api.github.com/users/rojagtap/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rojagtap/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rojagtap/subscriptions",
"organizations_url": "https://api.github.com/users/rojagtap/orgs",
"repos_url": "https://api.github.com/users/rojagtap/repos",
"events_url": "https://api.github.com/users/rojagtap/events{/privacy}",
"received_events_url": "https://api.github.com/users/rojagtap/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2023-08-23T13:15:38 | 2023-08-24T14:29:27 | null | NONE | null | ### Feature request
Streaming datasets, as intended, do not load the entire dataset into memory or onto disk. However, while querying the next data chunk from the remote, the service may be down or other issues may cause the query to fail. In such a scenario, it would be nice to make these queries retryable (perhaps with a backoff strategy).
### Motivation
I was working on a model that checkpoints every 1000 steps. At step 1800 I got a 504 HTTP status code from the Hugging Face Hub in my PyTorch `dataloader`. Given the size of my model and data, it took around 2 hours to reach step 1800, and it will now take about an hour to recover the lost 800 steps. A retryable querying strategy would avoid this.
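To illustrate (hedged, user-side sketch, not an existing `datasets` API): the best I can do today is wrap the whole iteration and restart it on failure, which loses position and can re-yield examples -- exactly why built-in per-request retries with backoff would help:
```python
import time

def iterate_with_retries(iterable_dataset, max_retries=5, base_delay=1.0):
    attempt = 0
    while True:
        try:
            for example in iterable_dataset:
                yield example
            return
        except Exception:  # e.g. a transient 504 from the Hub
            if attempt >= max_retries:
                raise
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff, then restart from scratch
            attempt += 1
```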
### Your contribution
It would be better if someone having experience in this area takes this up as this would require some testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6172/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6171/comments | https://api.github.com/repos/huggingface/datasets/issues/6171/events | https://github.com/huggingface/datasets/pull/6171 | 1,862,922,767 | PR_kwDODunzps5Yk4AS | 6,171 | Fix typo in about_mapstyle_vs_iterable.mdx | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-23T09:21:11 | 2023-08-23T09:32:59 | 2023-08-23T09:21:19 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6171",
"html_url": "https://github.com/huggingface/datasets/pull/6171",
"diff_url": "https://github.com/huggingface/datasets/pull/6171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6171.patch",
"merged_at": "2023-08-23T09:21:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6170/comments | https://api.github.com/repos/huggingface/datasets/issues/6170/events | https://github.com/huggingface/datasets/pull/6170 | 1,862,705,731 | PR_kwDODunzps5YkJOV | 6,170 | feat: Return the name of the currently loaded file | {
"login": "Amitesh-Patel",
"id": 124021133,
"node_id": "U_kgDOB2RpjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amitesh-Patel",
"html_url": "https://github.com/Amitesh-Patel",
"followers_url": "https://api.github.com/users/Amitesh-Patel/followers",
"following_url": "https://api.github.com/users/Amitesh-Patel/following{/other_user}",
"gists_url": "https://api.github.com/users/Amitesh-Patel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Amitesh-Patel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amitesh-Patel/subscriptions",
"organizations_url": "https://api.github.com/users/Amitesh-Patel/orgs",
"repos_url": "https://api.github.com/users/Amitesh-Patel/repos",
"events_url": "https://api.github.com/users/Amitesh-Patel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Amitesh-Patel/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-23T07:08:17 | 2023-08-29T12:41:05 | null | NONE | null | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92.
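Intended usage (sketch; the exact name of the added column is an assumption on my part):
```python
from datasets import load_dataset

ds = load_dataset("json", data_files=["a.jsonl", "b.jsonl"], return_file_name=True)
print(ds["train"][0])  # expected to also contain the source file name, e.g. "file_name": "a.jsonl"
```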
fixes #5806 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6170",
"html_url": "https://github.com/huggingface/datasets/pull/6170",
"diff_url": "https://github.com/huggingface/datasets/pull/6170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6170.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6169/comments | https://api.github.com/repos/huggingface/datasets/issues/6169/events | https://github.com/huggingface/datasets/issues/6169 | 1,862,360,199 | I_kwDODunzps5vAVyH | 6,169 | Configurations in yaml not working | {
"login": "tsor13",
"id": 45085098,
"node_id": "MDQ6VXNlcjQ1MDg1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsor13",
"html_url": "https://github.com/tsor13",
"followers_url": "https://api.github.com/users/tsor13/followers",
"following_url": "https://api.github.com/users/tsor13/following{/other_user}",
"gists_url": "https://api.github.com/users/tsor13/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsor13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsor13/subscriptions",
"organizations_url": "https://api.github.com/users/tsor13/orgs",
"repos_url": "https://api.github.com/users/tsor13/repos",
"events_url": "https://api.github.com/users/tsor13/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsor13/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-23T00:13:22 | 2023-08-23T15:35:31 | null | NONE | null | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs in order to define configurations in my dataset, a feature added in #5331: https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have the exact example in my config file for [my data repo](https://huggingface.co/datasets/tsor13/test):
```
configs:
- config_name: main_data
data_files: "main_data.csv"
- config_name: additional_data
data_files: "additional_data.csv"
```
Yet, I'm unable to load different configurations:
```
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test', use_auth_token=True)
```
returns a single configuration, `['tsor13--test']`
Does anyone have any insights?
@polinaeterna thank you for adding this feature, it is super useful. Do you happen to have any ideas?
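For context, this is what I'm ultimately trying to do once the configurations are recognized (config names as in my YAML above):
```python
from datasets import load_dataset

main = load_dataset("tsor13/test", "main_data", use_auth_token=True)
additional = load_dataset("tsor13/test", "additional_data", use_auth_token=True)
```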
### Steps to reproduce the bug
from datasets import get_dataset_config_names
get_dataset_config_names('tsor13/test')
### Expected behavior
I would expect there to be two configurations, `main_data` and `additional_data`. However, only `['tsor13--test']` is returned.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6169/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6168/comments | https://api.github.com/repos/huggingface/datasets/issues/6168/events | https://github.com/huggingface/datasets/pull/6168 | 1,861,867,274 | PR_kwDODunzps5YhT7Y | 6,168 | Fix ArrayXD YAML conversion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 3 | 2023-08-22T17:02:54 | 2023-08-29T12:42:32 | null | CONTRIBUTOR | null | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
Fix #6112 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6168",
"html_url": "https://github.com/huggingface/datasets/pull/6168",
"diff_url": "https://github.com/huggingface/datasets/pull/6168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6168.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6167/comments | https://api.github.com/repos/huggingface/datasets/issues/6167/events | https://github.com/huggingface/datasets/pull/6167 | 1,861,474,327 | PR_kwDODunzps5Yf9-t | 6,167 | Allow hyphen in split name | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-22T13:30:59 | 2023-08-22T15:39:24 | 2023-08-22T15:38:53 | CONTRIBUTOR | null | To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6167",
"html_url": "https://github.com/huggingface/datasets/pull/6167",
"diff_url": "https://github.com/huggingface/datasets/pull/6167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6167.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6166/comments | https://api.github.com/repos/huggingface/datasets/issues/6166/events | https://github.com/huggingface/datasets/pull/6166 | 1,861,259,055 | PR_kwDODunzps5YfOt0 | 6,166 | Document BUILDER_CONFIG_CLASS | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-22T11:27:41 | 2023-08-23T14:01:25 | 2023-08-23T13:52:36 | MEMBER | null | Related to https://github.com/huggingface/datasets/issues/6130 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"merged_at": "2023-08-23T13:52:36"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6165/comments | https://api.github.com/repos/huggingface/datasets/issues/6165/events | https://github.com/huggingface/datasets/pull/6165 | 1,861,124,284 | PR_kwDODunzps5YexBL | 6,165 | Fix multiprocessing with spawn in iterable datasets | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://api.github.com/users/Hubert-Bonisseur/followers",
"following_url": "https://api.github.com/users/Hubert-Bonisseur/following{/other_user}",
"gists_url": "https://api.github.com/users/Hubert-Bonisseur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hubert-Bonisseur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hubert-Bonisseur/subscriptions",
"organizations_url": "https://api.github.com/users/Hubert-Bonisseur/orgs",
"repos_url": "https://api.github.com/users/Hubert-Bonisseur/repos",
"events_url": "https://api.github.com/users/Hubert-Bonisseur/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hubert-Bonisseur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-22T10:07:23 | 2023-08-29T13:27:14 | 2023-08-29T13:18:11 | CONTRIBUTOR | null | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-able.
See the example below:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
if __name__ == "__main__":
dataset = load_dataset("lhoestq/demo1", split="train")
dataset = dataset.to_iterable_dataset(num_shards=3)
dataset = dataset.remove_columns(["package_name"])
dataset = dataset.rename_columns({
"review": "review1"
})
dataset = dataset.rename_column("date", "date1")
for sample in DataLoader(dataset, batch_size=None, num_workers=3):
print(sample)
```
To notice the fix on a linux system, adding these lines should do the trick:
```python
import multiprocessing
multiprocessing.set_start_method('spawn')
```
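For context, a minimal illustration (not part of this PR) of why "spawn" breaks lambdas and local functions: spawn has to pickle the objects sent to worker processes, and lambdas cannot be pickled, while module-level functions can:
```python
import pickle

def add_one(x):
    # module-level functions are picklable, so they survive "spawn"
    return x + 1

pickle.dumps(add_one)  # works
try:
    pickle.dumps(lambda x: x + 1)  # lambdas are not picklable
except Exception as e:
    print("lambda is not picklable:", e)
```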
I also removed what looks like code duplication between `rename_columns` and `rename_column`.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6165",
"html_url": "https://github.com/huggingface/datasets/pull/6165",
"diff_url": "https://github.com/huggingface/datasets/pull/6165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6165.patch",
"merged_at": "2023-08-29T13:18:11"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6164/comments | https://api.github.com/repos/huggingface/datasets/issues/6164/events | https://github.com/huggingface/datasets/pull/6164 | 1,859,560,007 | PR_kwDODunzps5YZZAJ | 6,164 | Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/users/clefourrier/followers",
"following_url": "https://api.github.com/users/clefourrier/following{/other_user}",
"gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions",
"organizations_url": "https://api.github.com/users/clefourrier/orgs",
"repos_url": "https://api.github.com/users/clefourrier/repos",
"events_url": "https://api.github.com/users/clefourrier/events{/privacy}",
"received_events_url": "https://api.github.com/users/clefourrier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-21T14:57:54 | 2023-08-21T16:27:05 | 2023-08-21T16:18:26 | CONTRIBUTOR | null | When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with
```
File "app.py", line 123, in add_new_eval
eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 5501, in push_to_hub
if not metadata_configs:
UnboundLocalError: local variable 'metadata_configs' referenced before assignment
```
This fixes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6164",
"html_url": "https://github.com/huggingface/datasets/pull/6164",
"diff_url": "https://github.com/huggingface/datasets/pull/6164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6164.patch",
"merged_at": "2023-08-21T16:18:26"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6163/comments | https://api.github.com/repos/huggingface/datasets/issues/6163/events | https://github.com/huggingface/datasets/issues/6163 | 1,857,682,241 | I_kwDODunzps5uuftB | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | {
"login": "shishirCTC",
"id": 90616801,
"node_id": "MDQ6VXNlcjkwNjE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shishirCTC",
"html_url": "https://github.com/shishirCTC",
"followers_url": "https://api.github.com/users/shishirCTC/followers",
"following_url": "https://api.github.com/users/shishirCTC/following{/other_user}",
"gists_url": "https://api.github.com/users/shishirCTC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shishirCTC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shishirCTC/subscriptions",
"organizations_url": "https://api.github.com/users/shishirCTC/orgs",
"repos_url": "https://api.github.com/users/shishirCTC/repos",
"events_url": "https://api.github.com/users/shishirCTC/events{/privacy}",
"received_events_url": "https://api.github.com/users/shishirCTC/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 1 | 2023-08-19T11:34:40 | 2023-08-21T13:28:16 | null | NONE | null | ### Describe the bug
I am getting the following error while trying to upload the CSV sheet to train a model. My CSV sheet content is exactly the same as shown in the example CSV file on the AutoTrain page. Attaching a screenshot of the error for reference. I have also tried converting the integer answer indices into strings, both with and without quotation marks.
Can anyone please help me out?
FYI : I am using Chrome browser.
Error type: ArrowInvalid
Details: Failed to parse string: '[254,254]' as a scalar of type int32
![Screenshot 2023-08-19 165827](https://github.com/huggingface/datasets/assets/90616801/95fad96e-7dce-4bb5-9f83-9f1659a32891)
### Steps to reproduce the bug
Kindly let me know how to fix this?
### Expected behavior
Kindly let me know how to fix this?
### Environment info
Kindly let me know how to fix this? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6163/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6162/comments | https://api.github.com/repos/huggingface/datasets/issues/6162/events | https://github.com/huggingface/datasets/issues/6162 | 1,856,198,342 | I_kwDODunzps5uo1bG | 6,162 | load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields | {
"login": "rbrugaro",
"id": 82971690,
"node_id": "MDQ6VXNlcjgyOTcxNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/82971690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbrugaro",
"html_url": "https://github.com/rbrugaro",
"followers_url": "https://api.github.com/users/rbrugaro/followers",
"following_url": "https://api.github.com/users/rbrugaro/following{/other_user}",
"gists_url": "https://api.github.com/users/rbrugaro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rbrugaro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rbrugaro/subscriptions",
"organizations_url": "https://api.github.com/users/rbrugaro/orgs",
"repos_url": "https://api.github.com/users/rbrugaro/repos",
"events_url": "https://api.github.com/users/rbrugaro/events{/privacy}",
"received_events_url": "https://api.github.com/users/rbrugaro/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-18T07:19:39 | 2023-08-18T17:00:35 | null | NONE | null | ### Describe the bug
Loading some of the GitHub jsonl files from the redpajama-data-1T source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails because one row of the file contains an extra field called **symlink_target: string>**.
After deleting that line, the loading is successful.
We also tried loading this file with the discrepancy using this function and it is successful
```python
os.environ["RED_PAJAMA_DATA_DIR"] ="/path_to_local_copy_of_RedPajama-Data-1T"
ds = load_dataset('togethercomputer/RedPajama-Data-1T', 'github',cache_dir="/path_to_folder_with_jsonl",streaming=True)['train']
```
### Steps to reproduce the bug
Steps to reproduce the behavior:
1. Load one jsonl from the redpajama-data-1T
```bash
wget https://data.together.xyz/redpajama-data-1T/v1.0.0/github/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
2.Load dataset will give error:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```
_TypeError: Couldn't cast array of type
Struct
<content_hash: string,
timestamp: string,
source: string,
line_count: int64,
max_line_length: int64,
avg_line_length: double,
alnum_prop: double,
repo_name: string,
id: string,
size: string,
binary: bool,
copies: string,
ref: string,
path: string,
mode: string,
license: string,
language: list<item: struct<name: string, bytes: string>>, **symlink_target: string>**
to
{'content_hash': Value(dtype='string', id=None),
'timestamp': Value(dtype='string', id=None),
'source': Value(dtype='string', id=None),
'line_count': Value(dtype='int64', id=None),
'max_line_length': Value(dtype='int64', id=None),
'avg_line_length': Value(dtype='float64', id=None),
'alnum_prop': Value(dtype='float64', id=None),
'repo_name': Value(dtype='string', id=None),
'id': Value(dtype='string', id=None),
'size': Value(dtype='string', id=None),
'binary': Value(dtype='bool', id=None),
'copies': Value(dtype='string', id=None),
'ref': Value(dtype='string', id=None),
'path': Value(dtype='string', id=None),
'mode': Value(dtype='string', id=None),
'license': Value(dtype='string', id=None),
'language': [{'name': Value(dtype='string', id=None), 'bytes': Value(dtype='string', id=None)}]}_
3. To remove the line causing the problem that includes the **symlink_target: string>** do:
```bash
sed -i '112252d' filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl
```
4. Rerun the loading function; it now succeeds:
```python
from datasets import load_dataset
ds = load_dataset('json', data_files='/path_to/filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl')
```
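As a side note, a quick way to locate the offending line(s) before removing them (illustrative snippet, not part of the original report):
```python
path = "filtered_27f05c041a1c401783f90b9415e40e4b.sampled.jsonl"
with open(path) as f:
    for i, line in enumerate(f, start=1):
        if "symlink_target" in line:
            print(i)  # line number to pass to sed in step 3
```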
### Expected behavior
Either have a clean dataset without discrepancies in the jsonl fields, or have the load_dataset('json', ...) method not error out.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-4.18.0-425.13.1.el8_7.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.17
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6162/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6161/comments | https://api.github.com/repos/huggingface/datasets/issues/6161/events | https://github.com/huggingface/datasets/pull/6161 | 1,855,794,354 | PR_kwDODunzps5YM0g7 | 6,161 | Fix protocol prefix for Beam | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 4 | 2023-08-17T22:40:37 | 2023-08-18T13:47:59 | null | CONTRIBUTOR | null | Fix #6147 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6161",
"html_url": "https://github.com/huggingface/datasets/pull/6161",
"diff_url": "https://github.com/huggingface/datasets/pull/6161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6161.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6160/comments | https://api.github.com/repos/huggingface/datasets/issues/6160/events | https://github.com/huggingface/datasets/pull/6160 | 1,855,760,543 | PR_kwDODunzps5YMtLQ | 6,160 | Fix Parquet loading with `columns` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 4 | 2023-08-17T21:58:24 | 2023-08-17T22:44:59 | 2023-08-17T22:36:04 | CONTRIBUTOR | null | Fix #6149 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6160/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6160",
"html_url": "https://github.com/huggingface/datasets/pull/6160",
"diff_url": "https://github.com/huggingface/datasets/pull/6160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6160.patch",
"merged_at": "2023-08-17T22:36:04"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6159/comments | https://api.github.com/repos/huggingface/datasets/issues/6159/events | https://github.com/huggingface/datasets/issues/6159 | 1,855,691,512 | I_kwDODunzps5um5r4 | 6,159 | Add `BoundingBox` feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-08-17T20:49:51 | 2023-08-17T20:49:51 | null | CONTRIBUTOR | null | ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explained [here](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/)), so we need to decide which one to support (or maybe all of them).
cc @NielsRogge @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6159/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6158/comments | https://api.github.com/repos/huggingface/datasets/issues/6158/events | https://github.com/huggingface/datasets/pull/6158 | 1,855,374,220 | PR_kwDODunzps5YLZBf | 6,158 | [docs] Complete `to_iterable_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 2 | 2023-08-17T17:02:11 | 2023-08-17T19:24:20 | 2023-08-17T19:13:15 | MEMBER | null | Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6158",
"html_url": "https://github.com/huggingface/datasets/pull/6158",
"diff_url": "https://github.com/huggingface/datasets/pull/6158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6158.patch",
"merged_at": "2023-08-17T19:13:15"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6157/comments | https://api.github.com/repos/huggingface/datasets/issues/6157/events | https://github.com/huggingface/datasets/issues/6157 | 1,855,265,663 | I_kwDODunzps5ulRt_ | 6,157 | DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding' | {
"login": "AisingioroHao0",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AisingioroHao0",
"html_url": "https://github.com/AisingioroHao0",
"followers_url": "https://api.github.com/users/AisingioroHao0/followers",
"following_url": "https://api.github.com/users/AisingioroHao0/following{/other_user}",
"gists_url": "https://api.github.com/users/AisingioroHao0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AisingioroHao0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AisingioroHao0/subscriptions",
"organizations_url": "https://api.github.com/users/AisingioroHao0/orgs",
"repos_url": "https://api.github.com/users/AisingioroHao0/repos",
"events_url": "https://api.github.com/users/AisingioroHao0/events{/privacy}",
"received_events_url": "https://api.github.com/users/AisingioroHao0/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | 11 | 2023-08-17T15:48:11 | 2023-09-01T17:38:26 | null | NONE | null | ### Describe the bug
When I called load_dataset, it raised "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked.
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 1
----> 1 dataset = load_dataset(
2 "/home/aihao/workspace/DeepLearningContent/datasets/manga",
3 data_dir="/home/aihao/workspace/DeepLearningContent/datasets/manga",
4 split="train",
5 )
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py:2146](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py:2146), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2142 # Build dataset for splits
2143 keep_in_memory = (
2144 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2145 )
-> 2146 ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
2147 # Rename and cast features to match task schema
2148 if task is not None:
2149 # To avoid issuing the same warning twice
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py:1190](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py:1190), in DatasetBuilder.as_dataset(self, split, run_post_process, verification_mode, ignore_verifications, in_memory)
1187 verification_mode = VerificationMode(verification_mode or VerificationMode.BASIC_CHECKS)
1189 # Create a dataset for each of the given splits
-> 1190 datasets = map_nested(
1191 partial(
1192 self._build_single_dataset,
...
File [~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/info.py:379](https://vscode-remote+ssh-002dremote-002bhome.vscode-resource.vscode-cdn.net/home/aihao/workspace/DeepLearningContent/datasets/~/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/info.py:379), in DatasetInfo.copy(self)
378 def copy(self) -> "DatasetInfo":
--> 379 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
TypeError: DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'
```
### Steps to reproduce the bug
/home/aihao/workspace/DeepLearningContent/datasets/images/images.py
```python
from logging import config
import datasets
import os
from PIL import Image
import csv
import json
class ImagesConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super(ImagesConfig, self).__init__(**kwargs)
class Images(datasets.GeneratorBasedBuilder):
def _split_generators(self, dl_manager: datasets.DownloadManager):
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={"split": datasets.Split.TRAIN},
)
]
BUILDER_CONFIGS = [
ImagesConfig(
name="similar_pairs",
description="simliar pair dataset,item is a pair of similar images",
),
ImagesConfig(
name="image_prompt_pairs",
description="image prompt pairs",
),
]
def _info(self):
if self.config.name == "similar_pairs":
return datasets.Features(
{
"image1": datasets.features.Image(),
"image2": datasets.features.Image(),
"similarity": datasets.Value("float32"),
}
)
elif self.config.name == "image_prompt_pairs":
return datasets.Features(
{"image": datasets.features.Image(), "prompt": datasets.Value("string")}
)
def _generate_examples(self, split):
data_path = os.path.join(self.config.data_dir, "data")
if self.config.name == "similar_pairs":
prompts = {}
with open(os.path.join(data_path ,"prompts.json"), "r") as f:
prompts = json.load(f)
with open(os.path.join(data_path, "similar_pairs.csv"), "r") as f:
reader = csv.reader(f)
for row in reader:
image1_path, image2_path, similarity = row
yield image1_path + ":" + image2_path + ":", {
"image1": Image.open(image1_path),
"prompt1": prompts[image1_path],
"image2": Image.open(image2_path),
"prompt2": prompts[image2_path],
"similarity": float(similarity),
}
```
Code that indicates an error:
```python
from datasets import load_dataset
import json
import csv
import ast
import torch
data_dir = "/home/aihao/workspace/DeepLearningContent/datasets/images"
dataset = load_dataset(data_dir, data_dir=data_dir, name="similar_pairs")
```
### Expected behavior
The first execution gives an error, but subsequent executions work fine.
### Environment info
- `datasets` version: 2.14.3
- Platform: Linux-6.2.0-26-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6157/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6156/comments | https://api.github.com/repos/huggingface/datasets/issues/6156/events | https://github.com/huggingface/datasets/issues/6156 | 1,854,768,618 | I_kwDODunzps5ujYXq | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 3 | 2023-08-17T10:58:20 | 2023-08-17T14:33:15 | 2023-08-17T14:33:14 | CONTRIBUTOR | null | ### Describe the bug
Currently, distributed training with `IterableDataset` needs to pass a fixed seed to shuffle so that each node uses the same seed and the shards do not overlap.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question is why not directly use `self._epoch` which is set by `set_epoch` as seed? It's almost the same across nodes.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1790-L1801
If not using `self._epoch` as shuffling seed, what does this method do to prepare an epoch seeded generator?
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1206
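For illustration, a minimal sketch (my reading of the intent, not the library's actual code) of how a fixed base seed can be combined with the epoch so that every node derives the same per-epoch shuffling order:
```python
import numpy as np

def effective_seed(base_seed: int, epoch: int) -> int:
    # every rank computes the same value for a given epoch,
    # so shard assignment stays consistent across nodes
    return base_seed + epoch

base_seed = 42
for epoch in range(3):
    rng = np.random.default_rng(effective_seed(base_seed, epoch))
    print(epoch, rng.permutation(8))  # identical on every node
```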
### Steps to reproduce the bug
As mentioned above.
### Expected behavior
As mentioned above.
### Environment info
Not related | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6156/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6155/comments | https://api.github.com/repos/huggingface/datasets/issues/6155/events | https://github.com/huggingface/datasets/pull/6155 | 1,854,661,682 | PR_kwDODunzps5YI8Pc | 6,155 | Raise FileNotFoundError when passing data_files that don't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 5 | 2023-08-17T09:49:48 | 2023-08-18T13:45:58 | 2023-08-18T13:35:13 | MEMBER | null | e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6155",
"html_url": "https://github.com/huggingface/datasets/pull/6155",
"diff_url": "https://github.com/huggingface/datasets/pull/6155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6155.patch",
"merged_at": "2023-08-18T13:35:13"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6154/comments | https://api.github.com/repos/huggingface/datasets/issues/6154/events | https://github.com/huggingface/datasets/pull/6154 | 1,854,595,943 | PR_kwDODunzps5YItlH | 6,154 | Use yaml instead of get data patterns when possible | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | 6 | 2023-08-17T09:17:05 | 2023-08-17T20:46:25 | 2023-08-17T20:37:19 | MEMBER | null | This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use.
fix https://github.com/huggingface/datasets/issues/6140 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6154",
"html_url": "https://github.com/huggingface/datasets/pull/6154",
"diff_url": "https://github.com/huggingface/datasets/pull/6154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6154.patch",
"merged_at": "2023-08-17T20:37:19"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/6152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6152/comments | https://api.github.com/repos/huggingface/datasets/issues/6152/events | https://github.com/huggingface/datasets/issues/6152 | 1,852,494,646 | I_kwDODunzps5uatM2 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/npuichigo/followers",
"following_url": "https://api.github.com/users/npuichigo/following{/other_user}",
"gists_url": "https://api.github.com/users/npuichigo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/npuichigo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/npuichigo/subscriptions",
"organizations_url": "https://api.github.com/users/npuichigo/orgs",
"repos_url": "https://api.github.com/users/npuichigo/repos",
"events_url": "https://api.github.com/users/npuichigo/events{/privacy}",
"received_events_url": "https://api.github.com/users/npuichigo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | null | [] | null | 4 | 2023-08-16T04:38:09 | 2023-08-17T13:45:18 | null | CONTRIBUTOR | null | ### Describe the bug
A FolderBase dataset automatically resolves data files under the current directory when data_dir is not specified.
For example:
```
load_dataset("audiofolder")
```
takes a long time to resolve and collect data_files from the current directory. But I think it should instead reach this line for error handling: https://github.com/huggingface/datasets/blob/cb8c5de5145c7e7eee65391cb7f4d92f0d565d62/src/datasets/packaged_modules/folder_based_builder/folder_based_builder.py#L58-L59
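For comparison, the intended usage is to point the builder at a directory explicitly (path below is hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("audiofolder", data_dir="/path/to/audio_folder")
```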
### Steps to reproduce the bug
```
load_dataset("audiofolder")
```
### Expected behavior
Error report
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.15.0-78-generic-x86_64-with-glibc2.17
- Python version: 3.8.15
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6152/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6151/comments | https://api.github.com/repos/huggingface/datasets/issues/6151/events | https://github.com/huggingface/datasets/issues/6151 | 1,851,497,818 | I_kwDODunzps5uW51a | 6,151 | Faster sorting for single key items | {
"login": "jackapbutler",
"id": 47942453,
"node_id": "MDQ6VXNlcjQ3OTQyNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackapbutler",
"html_url": "https://github.com/jackapbutler",
"followers_url": "https://api.github.com/users/jackapbutler/followers",
"following_url": "https://api.github.com/users/jackapbutler/following{/other_user}",
"gists_url": "https://api.github.com/users/jackapbutler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackapbutler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackapbutler/subscriptions",
"organizations_url": "https://api.github.com/users/jackapbutler/orgs",
"repos_url": "https://api.github.com/users/jackapbutler/repos",
"events_url": "https://api.github.com/users/jackapbutler/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackapbutler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-08-15T14:02:31 | 2023-08-21T14:38:26 | 2023-08-21T14:38:25 | NONE | null | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementation took significantly longer than expected when I ran it on a dataset, sorting by timestamps.
**Code snippet:**
```python
ds = datasets.load_dataset( "json", **{"data_files": {"train": "path-to-jsonlines"}, "split": "train"}, num_proc=os.cpu_count(), keep_in_memory=True)
sorted_ds = ds.sort("pubDate", keep_in_memory=True)
```
However, once I switched to a different method which
1. unpacked to a list of tuples
2. sorted tuples by key
3. ran `.select` with the sorted list of indices
It was significantly faster (orders of magnitude, especially with M's of rows)
### Your contribution
I'd be happy to implement a crude single-key sorting algorithm so that other users can benefit from this trick. Broadly, this would take a `Dataset` and perform:
```python
# ds is a Dataset object
# key_name is the sorting key
class Dataset:
    ...
    def _sort(self, key_name: str) -> "Dataset":
        index_keys = [(i, x) for i, x in enumerate(self[key_name])]
        sorted_rows = sorted(index_keys, key=lambda x: x[1])
        sorted_indices = [x[0] for x in sorted_rows]
        return self.select(sorted_indices)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6151/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6150/comments | https://api.github.com/repos/huggingface/datasets/issues/6150/events | https://github.com/huggingface/datasets/issues/6150 | 1,850,740,456 | I_kwDODunzps5uUA7o | 6,150 | Allow dataset implement .take | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brando90/followers",
"following_url": "https://api.github.com/users/brando90/following{/other_user}",
"gists_url": "https://api.github.com/users/brando90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brando90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brando90/subscriptions",
"organizations_url": "https://api.github.com/users/brando90/orgs",
"repos_url": "https://api.github.com/users/brando90/repos",
"events_url": "https://api.github.com/users/brando90/events{/privacy}",
"received_events_url": "https://api.github.com/users/brando90/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-08-15T00:17:51 | 2023-08-17T13:49:37 | null | NONE | null | ### Feature request
I want to do:
```
dataset.take(512)
```
but it only works with streaming = True
### Motivation
A uniform interface across dataset types. It is really surprising that the above only works with streaming = True.
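For illustration, a minimal sketch of what the requested behaviour could look like, built on the existing `Dataset.select` (an assumption about the API, not an existing method):
```python
from datasets import Dataset

def take(dataset: Dataset, n: int) -> Dataset:
    # non-streaming equivalent of IterableDataset.take
    return dataset.select(range(min(n, len(dataset))))
```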
### Your contribution
It should be trivial to adapt `IterableDataset.take` to the non-streaming code path (when streaming = False). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6150/timeline | null | null | null | null | false |