url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.28B) | node_id (stringlengths 18-32) | number (int64 1-4.56k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,656B) | updated_at (int64 1,587B-1,656B) | closed_at (int64 1,587B-1,656B ⌀) | author_association (stringclasses 3 values) | active_lock_reason (null) | draft (bool, 2 classes) | pull_request (dict) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 1 value) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3146/comments | https://api.github.com/repos/huggingface/datasets/issues/3146/events | https://github.com/huggingface/datasets/issues/3146 | 1,033,605,947 | I_kwDODunzps49m5M7 | 3,146 | CLI test command throws NonMatchingSplitsSizesError when saving infos | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,910,653,000 | 1,635,321,709,000 | 1,635,321,709,000 | MEMBER | null | null | null | When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
service.run()
File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
builder.download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```
This happens because a previous run generated a wrong `dataset_info.json`. The error can be avoided by passing `--ignore_verifications`, for example:
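A sketch of the workaround invocation (the same command as in the log above, with the verification flag added):
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs --ignore_verifications
```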
However, I think ignoring verifications should be assumed when passing `--save_infos`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3146/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3145/comments | https://api.github.com/repos/huggingface/datasets/issues/3145/events | https://github.com/huggingface/datasets/issues/3145 | 1,033,580,009 | I_kwDODunzps49my3p | 3,145 | [when Image type will exist] provide a way to get the data as binary + filename | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"@severo, maybe somehow related to this PR ?\r\n- #3129",
"@severo I'll keep that in mind.\r\n\r\nYou can track progress on the Image feature in #3163 (still in the early stage). ",
"Hi ! As discussed with @severo offline it looks like the dataset viewer already supports reading PIL images, so maybe the dataset viewer doesn't need to disable decoding after all",
"Fixed with https://github.com/huggingface/datasets/pull/3163"
] | 1,634,909,029,000 | 1,640,171,137,000 | 1,640,171,136,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
When a dataset cell contains a value of type Image (be it from a remote URL, an Array2D/3D, or any other way to represent images), I want to be able to write the image to the disk, with the correct filename, and optionally to know its mimetype, in order to serve it on the web.
Note: this issue applies in exactly the same way to the `Audio` type.
**Describe the solution you'd like**
If a "cell" has the type `Image`, provide a way to get the binary content of the file, and the filename, eg as:
```python
filename: str
data: bytes
```
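For illustration, a hypothetical usage sketch of such an accessor (neither attribute exists in `datasets` today; the names are illustrative only):
```python
# Hypothetical API sketch: `filename` and `data` are the requested
# attributes from above, not an existing `datasets` interface.
cell = dataset[0]["image"]
with open(cell.filename, "wb") as f:   # e.g. "0001.jpg"
    f.write(cell.data)                 # raw bytes, ready to serve on the web
```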
**Describe alternatives you've considered**
A way to write the cell to the disk (passing a local directory), and then return the pathname, filename, and mimetype.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3145/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3144/comments | https://api.github.com/repos/huggingface/datasets/issues/3144/events | https://github.com/huggingface/datasets/issues/3144 | 1,033,573,760 | I_kwDODunzps49mxWA | 3,144 | Infer the features if missing | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [] | 1,634,908,653,000 | 1,634,908,653,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Some datasets, in particular community datasets, have no info file, thus no features.
**Describe the solution you'd like**
If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the type.
Related: `datasets` would provide a way to load the data, and get the rows AND the features as the result.
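A minimal sketch of one way such inference could work, letting Arrow guess the schema from the first rows (row values are illustrative; `Features.from_arrow_schema` is an existing helper):
```python
import pyarrow as pa
from datasets import Features

# A handful of rows loaded without any declared features (illustrative).
first_rows = {"text": ["hello", "world"], "label": [0, 1]}

# Let Arrow infer the column types, then convert the schema to Features.
schema = pa.Table.from_pydict(first_rows).schema
features = Features.from_arrow_schema(schema)
print(features)  # e.g. {'text': Value(dtype='string'), 'label': Value(dtype='int64')}
```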
**Describe alternatives you've considered**
The HF hub could also provide some UI to help the dataset maintainers to explicit the types of their rows, or automatically infer them as an initial proposal. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3144/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Related: #3144 "
] | 1,634,908,416,000 | 1,634,908,676,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I understand that, currently, the loaded data does not always have the type described in the info features
**Describe the solution you'd like**
Provide a way to check if the rows have the type described by the info features
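One possible sketch of such a check, re-encoding the first rows with the declared features (illustrative; whether `Features.encode_example` raises on every mismatch is not guaranteed):
```python
from datasets import load_dataset

dset = load_dataset("glue", "sst2", split="train")

# Spot-check: re-encode the first rows with the declared features;
# a value whose type doesn't match should surface as an error here.
for row in dset.select(range(10)):
    dset.features.encode_example(row)
```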
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if the types don't match the features.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3142/comments | https://api.github.com/repos/huggingface/datasets/issues/3142/events | https://github.com/huggingface/datasets/issues/3142 | 1,033,566,034 | I_kwDODunzps49mvdS | 3,142 | Provide a way to write a streamed dataset to the disk | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [
"Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). \r\n\r\nIdeally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset."
] | 1,634,908,193,000 | 1,635,506,079,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Streaming mode makes it possible to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a subsequent call for the same 100 rows will hit the server again and again.
**Describe the solution you'd like**
Provide a way to write the streamed rows of a dataset to the disk, and to load them from it later.
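As a manual workaround sketch with existing APIs (dataset name illustrative), the first streamed rows can already be materialized and saved:
```python
from itertools import islice
from datasets import load_dataset, Dataset

streamed = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

# Materialize the first 100 streamed rows and persist them, so later runs
# can reload them from disk instead of re-streaming from the server.
head = list(islice(iter(streamed), 100))
cached = Dataset.from_dict({k: [row[k] for row in head] for k in head[0]})
cached.save_to_disk("streamed_head")
```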
**Describe alternatives you've considered**
Provide a third mode: `lazy`, which would use the local cache for the data that have already been fetched previously, and use streaming to get the rest of the requested data.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3142/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3142/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3141/comments | https://api.github.com/repos/huggingface/datasets/issues/3141/events | https://github.com/huggingface/datasets/pull/3141 | 1,033,555,910 | PR_kwDODunzps4tjGYz | 3,141 | Fix caching bugs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,907,565,000 | 1,634,935,928,000 | 1,634,910,425,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3141",
"html_url": "https://github.com/huggingface/datasets/pull/3141",
"diff_url": "https://github.com/huggingface/datasets/pull/3141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3141.patch",
"merged_at": 1634910424000
} | This PR fixes some caching bugs (most likely introduced in the latest refactor):
* remove ")" added by accident in the dataset dir name
* correctly pass the namespace kwargs in `CachedDatasetModuleFactory`
* improve the warning message if `HF_DATASETS_OFFLINE` is `True`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3141/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3141/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3139/comments | https://api.github.com/repos/huggingface/datasets/issues/3139/events | https://github.com/huggingface/datasets/issues/3139 | 1,033,524,079 | I_kwDODunzps49mlNv | 3,139 | Fix file/directory deletion on Windows | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,905,328,000 | 1,634,905,328,000 | null | CONTRIBUTOR | null | null | null | Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`.
Examples:
- download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset = load_dataset("sst", split="train", download_mode="force_redownload")
```
- try to clean up the cache files while keeping a reference to those files (via the mapped dataset):
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset_mapped = dset.map(lambda _: {"dummy_col": 1})
dset.cleanup_cache_files()
```
We should fix those.
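Until a proper fix lands, a workaround sketch (reusing the names from the second snippet above) is to drop every reference to objects holding the memory-mapped cache files before deleting them, since Windows refuses to delete files that are still open:
```python
import gc

# Release the memory-mapped cache files before deleting them.
del dset_mapped
gc.collect()
dset.cleanup_cache_files()
```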
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3139/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3138/comments | https://api.github.com/repos/huggingface/datasets/issues/3138/events | https://github.com/huggingface/datasets/issues/3138 | 1,033,379,997 | I_kwDODunzps49mCCd | 3,138 | More fine-grained taxonomy of error types | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [] | 1,634,895,329,000 | 1,634,895,335,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to tell which part raised them
**Describe the solution you'd like**
Give a specific exception type for every group of similar errors
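For illustration, a sketch of what a finer-grained hierarchy could look like (all class names are hypothetical, not an existing `datasets` API):
```python
# Hypothetical exception taxonomy sketch.
class DatasetsError(Exception):
    """Base class for all errors raised by the library."""

class DatasetScriptNotFoundError(DatasetsError, FileNotFoundError):
    """The dataset script or repository could not be located."""

class DataFilesNotFoundError(DatasetsError, FileNotFoundError):
    """The data files referenced by the dataset could not be located."""

# Callers could then catch a precise type instead of parsing messages:
# `except DatasetScriptNotFoundError: ...`
```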
**Describe alternatives you've considered**
Rely on the error message, using regex
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3138/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3138/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3137/comments | https://api.github.com/repos/huggingface/datasets/issues/3137/events | https://github.com/huggingface/datasets/pull/3137 | 1,033,363,652 | PR_kwDODunzps4tievk | 3,137 | Fix numpy deprecation warning for ragged tensors | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This'll be a really helpful fix, thank you!"
] | 1,634,894,266,000 | 1,634,918,655,000 | 1,634,918,654,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3137",
"html_url": "https://github.com/huggingface/datasets/pull/3137",
"diff_url": "https://github.com/huggingface/datasets/pull/3137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3137.patch",
"merged_at": 1634918654000
} | Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together, otherwise the resulting array should have `dtype=np.object`.
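A minimal sketch of the numpy behavior in question (not code from this PR):
```python
import numpy as np

# Rows with matching shapes collate into a regular 2-D array.
regular = np.array([[1, 2], [3, 4]])

# Ragged rows trigger a deprecation warning (an error in newer numpy)
# unless an object dtype is requested explicitly.
ragged = np.array([[1, 2], [3]], dtype=object)
```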
Fix #3084
cc @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3137/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3136/comments | https://api.github.com/repos/huggingface/datasets/issues/3136/events | https://github.com/huggingface/datasets/pull/3136 | 1,033,360,396 | PR_kwDODunzps4tieFi | 3,136 | Fix script of Arabic Billion Words dataset to return all data | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,894,064,000 | 1,634,909,321,000 | 1,634,909,320,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3136",
"html_url": "https://github.com/huggingface/datasets/pull/3136",
"diff_url": "https://github.com/huggingface/datasets/pull/3136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3136.patch",
"merged_at": 1634909319000
} | The script has a bug and only parses and generates a portion of the entire dataset.
This PR fixes the loading script so that it properly parses the entire dataset.
The current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except one:
- For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027)
| Config | Number of examples | Number of examples according to the source |
|:---------------|-------------------:|-----:|
| Alittihad | 349342 |349342 |
| Almasryalyoum | 291723 |291723 |
| Almustaqbal | 446873 |446873 |
| Alqabas | 817274 |817274 |
| Echoroukonline | 139732 |139732 |
| Ryiadh | 858188 | 858188 |
| Sabanews | 92149 |92149 |
| SaudiYoum | 888068 |888068 |
| Techreen | 314597 |314597 |
| Youm7 | 1172136 |1025027 |
Fix #3126. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3135/comments | https://api.github.com/repos/huggingface/datasets/issues/3135/events | https://github.com/huggingface/datasets/issues/3135 | 1,033,294,299 | I_kwDODunzps49ltHb | 3,135 | Make inspect.get_dataset_config_names always return a non-empty list of configs | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?",
"Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases:\r\n- I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc).\r\n- I don't want to have to manage datasets with named configs (`glue`) differently from datasets without named configs (`acronym_identification`, `Check/region_1`)"
] | 1,634,889,770,000 | 1,635,399,889,000 | 1,635,399,889,000 | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to
**Describe the solution you'd like**
In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`).
https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
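A usage sketch of the desired behavior (outputs illustrative; the function lives in `datasets.inspect` and is assumed here to be exposed at the top level):
```python
from datasets import get_dataset_config_names

get_dataset_config_names("glue")  # ['cola', 'sst2', ...]

# Desired: datasets without named configs still return one name,
# e.g. ['default'], instead of an empty list.
get_dataset_config_names("acronym_identification")
```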
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3135/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3134/comments | https://api.github.com/repos/huggingface/datasets/issues/3134/events | https://github.com/huggingface/datasets/issues/3134 | 1,033,251,755 | I_kwDODunzps49liur | 3,134 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py | {
"login": "yananchen1989",
"id": 26405281,
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yananchen1989",
"html_url": "https://github.com/yananchen1989",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `datasets-cli env` command because it seems to me that you are using the `datasets` version different from `1.12.1`?",
"Same issue when running `metric = datasets.load_metric(\"accuracy\")`.\r\nError info is:\r\n```\r\nmetric = datasets.load_metric(\"accuracy\")\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-2-d25db38b26c5>\", line 1, in <module>\r\n metric = datasets.load_metric(\"accuracy\")\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 610, in load_metric\r\n module_path, _ = prepare_module(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\load.py\", line 330, in prepare_module\r\n local_path = cached_path(file_path, download_config=download_config)\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 288, in cached_path\r\n output_path = get_from_cache(\r\n\r\n File \"D:\\anaconda3\\lib\\site-packages\\datasets\\utils\\file_utils.py\", line 605, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py\r\n```\r\n\r\n\r\n My `datasets-cli env` result is as follows:\r\n- `datasets` version: 1.11.0\r\n- Platform: Windows-10-10.0.19041-SP0\r\n- Python version: 3.8.8\r\n- PyArrow version: 6.0.0\r\n\r\n@yananchen1989 did you find a way to solve this?",
"It seems to be able to solve this issue by adding the equivalent `accuracy.py` locally. \r\nchange `metric = datasets.load_metric(\"accuracy\")` to `metric = datasets.load_metric(path = \"./accuracy.py\")`.\r\nCopy `accuracy.py` from browser at [accuracy.py](https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/accuracy/accuracy.py)"
] | 1,634,886,472,000 | 1,642,600,952,000 | 1,642,600,951,000 | NONE | null | null | null | datasets version: 1.12.1
`metric = datasets.load_metric('rouge')`
The error:
> ConnectionError Traceback (most recent call last)
> <ipython-input-3-dd10a0c5212f> in <module>
> ----> 1 metric = datasets.load_metric('rouge')
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
> 613 download_config=download_config,
> 614 download_mode=download_mode,
> --> 615 dataset=False,
> 616 )
> 617 metric_cls = import_main_class(module_path, dataset=False)
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
> 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
> 329 try:
> --> 330 local_path = cached_path(file_path, download_config=download_config)
> 331 except FileNotFoundError:
> 332 if script_version is not None:
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
> 296 use_etag=download_config.use_etag,
> 297 max_retries=download_config.max_retries,
> --> 298 use_auth_token=download_config.use_auth_token,
> 299 )
> 300 elif os.path.exists(url_or_filename):
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
> 603 raise FileNotFoundError("Couldn't find file at {}".format(url))
> 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
> --> 605 raise ConnectionError("Couldn't reach {}".format(url))
> 606
> 607 # Try a second time
>
> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Is there any remedy to solve the connection issue? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3134/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3133/comments | https://api.github.com/repos/huggingface/datasets/issues/3133/events | https://github.com/huggingface/datasets/pull/3133 | 1,032,511,710 | PR_kwDODunzps4tftyZ | 3,133 | Support Audio feature in streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,823,477,000 | 1,636,726,385,000 | 1,636,726,384,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3133",
"html_url": "https://github.com/huggingface/datasets/pull/3133",
"diff_url": "https://github.com/huggingface/datasets/pull/3133.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3133.patch",
"merged_at": 1636726384000
} | Fix #3132. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3133/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3133/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3132/comments | https://api.github.com/repos/huggingface/datasets/issues/3132/events | https://github.com/huggingface/datasets/issues/3132 | 1,032,505,430 | I_kwDODunzps49ishW | 3,132 | Support Audio feature in streaming mode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,823,138,000 | 1,636,726,384,000 | 1,636,726,384,000 | MEMBER | null | null | null | Currently, the Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support the Audio feature in streaming mode.
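For illustration, a sketch of what this would enable once implemented (dataset name and decoded fields are illustrative):
```python
from datasets import load_dataset

# Stream a large speech dataset without downloading it entirely.
dset = load_dataset("common_voice", "en", split="train", streaming=True)

sample = next(iter(dset))
audio = sample["audio"]  # decoded lazily by the Audio feature
print(audio["sampling_rate"], len(audio["array"]))
```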
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3132/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3131/comments | https://api.github.com/repos/huggingface/datasets/issues/3131/events | https://github.com/huggingface/datasets/issues/3131 | 1,032,309,865 | I_kwDODunzps49h8xp | 3,131 | Add ADE20k | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"I think we can close this issue since PR [#3607](https://github.com/huggingface/datasets/pull/3607) solves this."
] | 1,634,811,189,000 | 1,647,925,399,000 | null | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** ADE20k (officially called the MIT Scene Parsing Benchmark; it is actually a subset of ADE20k, but many authors still call it ADE20k)
- **Description:** A semantic segmentation dataset, consisting of 150 classes.
- **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf
- **Data:** http://sceneparsing.csail.mit.edu/
- **Motivation:** I am currently adding Transformer-based semantic segmentation models that achieve SOTA on this dataset. It would be great to directly access this dataset using HuggingFace Datasets, in order to make example scripts in HuggingFace Transformers.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3131/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3131/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3130/comments | https://api.github.com/repos/huggingface/datasets/issues/3130/events | https://github.com/huggingface/datasets/pull/3130 | 1,032,299,417 | PR_kwDODunzps4tfBJU | 3,130 | Create SECURITY.md | {
"login": "zidingz",
"id": 28839565,
"node_id": "MDQ6VXNlcjI4ODM5NTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/28839565?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zidingz",
"html_url": "https://github.com/zidingz",
"followers_url": "https://api.github.com/users/zidingz/followers",
"following_url": "https://api.github.com/users/zidingz/following{/other_user}",
"gists_url": "https://api.github.com/users/zidingz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zidingz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zidingz/subscriptions",
"organizations_url": "https://api.github.com/users/zidingz/orgs",
"repos_url": "https://api.github.com/users/zidingz/repos",
"events_url": "https://api.github.com/users/zidingz/events{/privacy}",
"received_events_url": "https://api.github.com/users/zidingz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @zidingz, thanks for your contribution.\r\n\r\nHowever I am closing it because it is a duplicate of a previous PR:\r\n - #2958\r\n\r\n"
] | 1,634,810,583,000 | 1,634,826,808,000 | 1,634,826,710,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3130",
"html_url": "https://github.com/huggingface/datasets/pull/3130",
"diff_url": "https://github.com/huggingface/datasets/pull/3130.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3130.patch",
"merged_at": null
} | To let the repository confirm feedback@huggingface.co as its security contact. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3130/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3129/comments | https://api.github.com/repos/huggingface/datasets/issues/3129/events | https://github.com/huggingface/datasets/pull/3129 | 1,032,234,167 | PR_kwDODunzps4tezlA | 3,129 | Support Audio feature for TAR archives in sequential access | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Also do you think we can adapt `cast_column` to keep the same value for this new parameter when the user only wants to change the sampling rate ?",
"Thanks for your comments, @lhoestq, I will address them afterwards.\r\n\r\nBut, I think it is more important/urgent first address the current blocking non-passing test: https://github.com/huggingface/datasets/runs/4143579241?check_suite_focus=true\r\n- I am thinking of a way of solving it, but if you have any hint, it will be more than welcome! 😅 \r\n\r\nBasically:\r\n```\r\n{'audio': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}\r\n``` \r\nbecomes\r\n```\r\n{'audio': {'bytes': None, 'path': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}}\r\n```\r\nafter a `map`, which is what was stored in the Arrow file. However we expect it remains invariant after this `map`.",
"@lhoestq, @mariosasko I finally proposed another implementation different from my last one:\r\n- Before: store Audio always a struct<path: string, bytes: binary>, where bytes can be None\r\n- Now, depending on the examples, either store Audio as a struct (as before), or as a string.\r\n\r\nPlease note that the main motivation for this change was the issue mentioned above: https://github.com/huggingface/datasets/pull/3129#issuecomment-964347056\r\n",
"Until here we had the assumption that a Features object always has an associated, deterministic, pyarrow schema. This is useful to ensure that we are able to concatenate two datasets that have the same features for example.\r\n\r\nBy breaking this assumption for the Audio type, how can we ensure that we can concatenate two audio datasets if one has Audio as a struct and the other a string ?",
"Oh I noticed that the Audio feature type has a private attribute `_storage_dtype`, so the assumption still holds, since they are now different feature types depending on the this attribute :)\r\n(i mean different from the python equal operator point of view)",
"I think this PR is ready, @lhoestq, @mariosasko. ",
"Nit: We should also mention the new storage structure in the `Features` docstring [here](https://github.com/huggingface/datasets/blob/b29fb550c31de337b952035a7584147e0f18c0cf/src/datasets/features/features.py#L966) for users to know what type of value to return in their dataset scripts (we also have a link to that docstring in the `ADD_NEW_DATASET` template)."
] | 1,634,806,611,000 | 1,637,170,928,000 | 1,637,170,927,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3129",
"html_url": "https://github.com/huggingface/datasets/pull/3129",
"diff_url": "https://github.com/huggingface/datasets/pull/3129.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3129.patch",
"merged_at": 1637170927000
} | Add Audio feature support for TAR archived files in sequential access.
Fix #3128. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3129/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3129/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3128/comments | https://api.github.com/repos/huggingface/datasets/issues/3128/events | https://github.com/huggingface/datasets/issues/3128 | 1,032,201,870 | I_kwDODunzps49hiaO | 3,128 | Support Audio feature for TAR archives in sequential access | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,804,581,000 | 1,637,170,927,000 | 1,637,170,927,000 | MEMBER | null | null | null | Currently, Audio feature accesses each audio file by their file path.
However, streamed TAR archive files do not allow random access to their archived files.
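For context, sequential access over a TAR archive means consuming members strictly in order, e.g. (a minimal sketch with a hypothetical archive name):
```python
import tarfile

with tarfile.open("audio_archive.tar") as tar:  # hypothetical archive
    for member in tar:  # members must be read in order; a stream cannot seek back
        f = tar.extractfile(member)
        if f is not None:  # skip directories and special entries
            audio_bytes = f.read()  # decode downstream (e.g. with soundfile)
```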
Therefore, we should enhance the Audio feature to support TAR archived files in sequential access. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3128/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3127/comments | https://api.github.com/repos/huggingface/datasets/issues/3127/events | https://github.com/huggingface/datasets/issues/3127 | 1,032,100,613 | I_kwDODunzps49hJsF | 3,127 | datasets-cli: conversion of a tfds dataset to a huggingface one. | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi,\r\n\r\nthe MNIST dataset is already available on the Hub. You can use it as follows:\r\n```python\r\nimport datasets\r\ndataset_dict = datasets.load_dataset(\"mnist\")\r\n```\r\n\r\nAs for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned."
] | 1,634,796,867,000 | 1,635,334,565,000 | null | NONE | null | null | null | ### Discussed in https://github.com/huggingface/datasets/discussions/3079
_Originally posted by **vitalyshalumov**, October 14, 2021:_
I'm trying to convert a tfds dataset to a huggingface one.
I've tried:
1. `datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/mnist/3.0.1/`
2. `datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/`
and other permutations.
The script appears to be running and finishing without an error but when looking in the huggingface/datasets/ folder nothing is created.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3127/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3126/comments | https://api.github.com/repos/huggingface/datasets/issues/3126/events | https://github.com/huggingface/datasets/issues/3126 | 1,032,093,055 | I_kwDODunzps49hH1_ | 3,126 | "arabic_billion_words" dataset does not create the full dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it."
] | 1,634,796,158,000 | 1,634,909,320,000 | 1,634,909,320,000 | NONE | null | null | null | ## Describe the bug
When running `raw_dataset = load_dataset('arabic_billion_words', 'Alittihad')`, the correct dataset file is pulled from the URL, but the generated dataset includes just a small portion of the data contained in the file.
This is true for all other configurations of the "arabic_billion_words" dataset ('Almasryalyoum', ...).
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

raw_dataset = load_dataset('arabic_billion_words', 'Alittihad')
```
The screen message:
```
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB)
```
## Expected results
over 100K sentences
## Actual results
only 11K sentences
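For reference, a quick way to verify the generated split size (a sketch; the counts quoted above are what I observe):
```python
from datasets import load_dataset

raw_dataset = load_dataset('arabic_billion_words', 'Alittihad')
print(raw_dataset)                # shows num_rows per split
print(len(raw_dataset['train']))  # expected: over 100K, observed: ~11K
```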
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3126/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3125/comments | https://api.github.com/repos/huggingface/datasets/issues/3125/events | https://github.com/huggingface/datasets/pull/3125 | 1,032,046,666 | PR_kwDODunzps4teNPC | 3,125 | Add SLR83 to OpenSLR | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,790,360,000 | 1,634,933,405,000 | 1,634,891,422,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3125",
"html_url": "https://github.com/huggingface/datasets/pull/3125",
"diff_url": "https://github.com/huggingface/datasets/pull/3125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3125.patch",
"merged_at": 1634891422000
} | The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3125/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3124/comments | https://api.github.com/repos/huggingface/datasets/issues/3124/events | https://github.com/huggingface/datasets/pull/3124 | 1,031,976,286 | PR_kwDODunzps4td-5w | 3,124 | More efficient nested features encoding | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq @albertvillanova @mariosasko\r\nCan you please check this out?",
"Thanks, done!"
] | 1,634,781,331,000 | 1,635,865,633,000 | 1,635,851,044,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3124",
"html_url": "https://github.com/huggingface/datasets/pull/3124",
"diff_url": "https://github.com/huggingface/datasets/pull/3124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3124.patch",
"merged_at": 1635851044000
} | Nested encoding of features wastes a lot of time on operations which are effectively doing nothing when lists are used.
For example, if in the input we have a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` on every element even though it just returns the int as is.
A similar issue is handled at an earlier stage when casting pytorch/tensorflow/pandas objects to python lists/numpy arrays:
https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L149-L156
https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L212-L228
In this pull request I suggest using the same approach in `encoded_nested_example`.
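A simplified sketch of the suggested fast path (illustrative only, not the actual library code):
```python
def encoded_nested_example(schema, obj):  # simplified stand-in; dict/feature handling omitted
    if isinstance(schema, list):
        if obj is None:
            return None
        # suggested fast path: if the first element encodes to itself, assume the
        # whole list needs no per-element work and return it as-is
        if len(obj) > 0 and encoded_nested_example(schema[0], obj[0]) is obj[0]:
            return obj
        return [encoded_nested_example(schema[0], sub_obj) for sub_obj in obj]
    return obj  # plain values (ints, strings, ...) pass through unchanged
```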
In my setup there was a major speedup with this change: loading the data was at least x4 faster. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3124/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3123/comments | https://api.github.com/repos/huggingface/datasets/issues/3123/events | https://github.com/huggingface/datasets/issues/3123 | 1,031,793,207 | I_kwDODunzps49f-o3 | 3,123 | Segmentation fault when loading datasets from file | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n)\r\n```\r\n\r\nI don't see a way to workaround this properly now without hurting the performance of the JSON loader significantly though",
"The issue has been fixed in pyarrow 6.0.0, please update pyarrow :)\r\n\r\nThe issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists"
] | 1,634,760,971,000 | 1,635,865,027,000 | 1,635,865,027,000 | MEMBER | null | null | null | ## Describe the bug
Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features.
## Steps to reproduce the bug
Download an example file:
```
wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl
```
Then in Python:
```python
import datasets
tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000)
```
## Expected results
a functional `tiny_kelm` dataset
## Actual results
☠️ `Segmentation fault (core dumped)` ☠️
## Environment info
- `datasets` version: 1.14.0
- Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3123/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3122/comments | https://api.github.com/repos/huggingface/datasets/issues/3122/events | https://github.com/huggingface/datasets/issues/3122 | 1,031,787,509 | I_kwDODunzps49f9P1 | 3,122 | OSError with a custom dataset loading script | {
"login": "suzanab",
"id": 38602977,
"node_id": "MDQ6VXNlcjM4NjAyOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/38602977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzanab",
"html_url": "https://github.com/suzanab",
"followers_url": "https://api.github.com/users/suzanab/followers",
"following_url": "https://api.github.com/users/suzanab/following{/other_user}",
"gists_url": "https://api.github.com/users/suzanab/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzanab/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzanab/subscriptions",
"organizations_url": "https://api.github.com/users/suzanab/orgs",
"repos_url": "https://api.github.com/users/suzanab/repos",
"events_url": "https://api.github.com/users/suzanab/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzanab/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data files).\r\n\r\nThis can be fixed by removing the `os.path.join` call in https://huggingface.co/datasets/classla/janes_tag/blob/main/janes_tag.py#L86\r\n\r\nLet me know if this works for you.",
"Hi Mario,\r\n\r\nI had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.",
"Hi,\r\n\r\nI just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`.\r\n\r\nLet me know if you are still getting the same error.",
"I am still getting the same error.",
"Hi, \r\n\r\ncould you try to download the dataset with a different `cache_dir` like so:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir=\"path/to/different/cache/dir\")\r\n```\r\nIf this works, then most likely the cached extracted data is causing issues. This data is stored at `~/.cache/huggingface/datasets/downloads/extracted` and needs to be deleted, and then it should work (you can easily locate the directory with the path given in the `OSError` message). Additionally, I'd suggest you to update `datasets` to the newest version with:\r\n```\r\npip install -U datasets\r\n```",
"Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems.\r\n\r\nThere was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locally, but still throws an error when I try to load the dataset from HuggingFace. I literally copied and pasted the `_generate_examples` function and ran it on the `dev_all.conllup` file, which I even re-downloaded from the repository to be certain that the files are exactly the same. I also deleted everything again just in case, but it didn't help. The code works locally, but throws an `IndexError` when loading from `datasets.`",
"Hi,\r\n\r\nDid some investigation.\r\n\r\nTo fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field:\r\n```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```.\r\n\r\nThis step is required to avoid an error due to missing labels in the following step which is:\r\n```python\r\nload_dataset(\"classla/janes_tag\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\nThis will generate and cache the dataset, so specifying `download_mode` will not be required anymore unless you update the script/data on the Hub.",
"It works now, thank you!"
] | 1,634,760,519,000 | 1,637,661,338,000 | 1,637,661,338,000 | NONE | null | null | null | ## Describe the bug
I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag.
## Steps to reproduce the bug
```python
dataset = datasets.load_dataset('classla/janes_tag', split='validation')
```
## Expected results
Dataset correctly loaded.
## Actual results
```
Traceback (most recent call last):
File "C:/mypath/test.py", line 91, in <module>
load_and_print('janes_tag')
File "C:/mypath/test.py", line 32, in load_and_print
dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation')
File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.5
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3122/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3121/comments | https://api.github.com/repos/huggingface/datasets/issues/3121/events | https://github.com/huggingface/datasets/pull/3121 | 1,031,673,115 | PR_kwDODunzps4tc_6q | 3,121 | Use huggingface_hub.HfApi to list datasets/metrics | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,752,109,000 | 1,636,112,708,000 | 1,636,105,716,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3121",
"html_url": "https://github.com/huggingface/datasets/pull/3121",
"diff_url": "https://github.com/huggingface/datasets/pull/3121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3121.patch",
"merged_at": 1636105715000
} | Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead.
WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged, then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py` and merge this PR.
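For reference, a minimal sketch of the replacement call (assuming a `huggingface_hub` release that exposes the listing endpoints):
```python
from huggingface_hub import HfApi

api = HfApi()
all_datasets = api.list_datasets()  # replaces the removed datasets.inspect.HfApi usage
# the analogous metrics listing depends on the linked huggingface_hub PR
```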
cc: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3120/comments | https://api.github.com/repos/huggingface/datasets/issues/3120/events | https://github.com/huggingface/datasets/pull/3120 | 1,031,574,511 | PR_kwDODunzps4tcril | 3,120 | Correctly update metadata to preserve features when concatenating datasets with axis=1 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,745,298,000 | 1,634,891,331,000 | 1,634,827,821,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3120",
"html_url": "https://github.com/huggingface/datasets/pull/3120",
"diff_url": "https://github.com/huggingface/datasets/pull/3120.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3120.patch",
"merged_at": 1634827821000
} | This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`. However, this approach only works for simple feature types (e.g. `Value`).
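A minimal check of the fixed behaviour (a sketch, assuming a release that includes this PR):
```python
from datasets import ClassLabel, Dataset, concatenate_datasets

ds1 = Dataset.from_dict({"label": [0, 1]}).cast_column("label", ClassLabel(names=["neg", "pos"]))
ds2 = Dataset.from_dict({"text": ["a", "b"]})
combined = concatenate_datasets([ds1, ds2], axis=1)
print(combined.features["label"])  # expected: ClassLabel(...), not a plain Value("int64")
```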
Fixes #3111 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3120/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3119/comments | https://api.github.com/repos/huggingface/datasets/issues/3119/events | https://github.com/huggingface/datasets/issues/3119 | 1,031,328,044 | I_kwDODunzps49eNEs | 3,119 | Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "tyrius02",
"id": 4561309,
"node_id": "MDQ6VXNlcjQ1NjEzMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyrius02",
"html_url": "https://github.com/tyrius02",
"followers_url": "https://api.github.com/users/tyrius02/followers",
"following_url": "https://api.github.com/users/tyrius02/following{/other_user}",
"gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions",
"organizations_url": "https://api.github.com/users/tyrius02/orgs",
"repos_url": "https://api.github.com/users/tyrius02/repos",
"events_url": "https://api.github.com/users/tyrius02/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyrius02/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files."
] | 1,634,731,507,000 | 1,634,929,252,000 | 1,634,891,422,000 | CONTRIBUTOR | null | null | null | ## Adding a Dataset
- **Name:** *openslr*
- **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.*
- **Paper:** *https://www.openslr.org/resources/83/about.html*
- **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/*
- **Motivation:** *Increase English ASR data with UK and Irish dialects* (see the usage sketch below)
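A sketch of the expected usage once the subset lands (the config name "SLR83" is assumed from the OpenSLR resource id):
```python
from datasets import load_dataset

slr83 = load_dataset("openslr", "SLR83")  # config name assumed; check the dataset card
```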
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
The *openslr* dataset already exists; this will add the additional subset *SLR83*. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3119/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3118/comments | https://api.github.com/repos/huggingface/datasets/issues/3118/events | https://github.com/huggingface/datasets/pull/3118 | 1,031,309,549 | PR_kwDODunzps4tb0LY | 3,118 | Fix CI error at each release commit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,730,278,000 | 1,634,734,956,000 | 1,634,734,956,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3118",
"html_url": "https://github.com/huggingface/datasets/pull/3118",
"diff_url": "https://github.com/huggingface/datasets/pull/3118.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3118.patch",
"merged_at": 1634734955000
} | Fix `test_load_dataset_canonical` at the release commit.
Fix #3117. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3118/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3117/comments | https://api.github.com/repos/huggingface/datasets/issues/3117/events | https://github.com/huggingface/datasets/issues/3117 | 1,031,308,083 | I_kwDODunzps49eIMz | 3,117 | CI error at each release commit | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,730,173,000 | 1,634,734,955,000 | 1,634,734,955,000 | MEMBER | null | null | null | After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110
```
____________________ LoadTest.test_load_dataset_canonical _____________________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
self = <tests.test_load.LoadTest testMethod=test_load_dataset_canonical>
def test_load_dataset_canonical(self):
scripts_version = os.getenv("HF_SCRIPTS_VERSION", SCRIPTS_VERSION)
with self.assertRaises(FileNotFoundError) as context:
datasets.load_dataset("_dummy")
self.assertIn(
f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py",
> str(context.exception),
)
E AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/1.14.0/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at C:\\Users\\circleci\\datasets\\_dummy\\_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py"
tests\test_load.py:358: AssertionError
```
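One possible direction for the fix (an assumption, not necessarily the actual patch in #3118) is to make the assertion tolerate the `master` fallback URL that shows up in the error at release time:
```python
# sketch: accept either the pinned-version URL or the master fallback
expected_urls = [
    f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py",
    "https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py",
]
assert any(url in str(context.exception) for url in expected_urls)
```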
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3117/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3117/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3116/comments | https://api.github.com/repos/huggingface/datasets/issues/3116/events | https://github.com/huggingface/datasets/pull/3116 | 1,031,270,611 | PR_kwDODunzps4tbr6g | 3,116 | Update doc links to point to new docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | 1,634,727,647,000 | 1,634,891,368,000 | 1,634,891,205,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3116",
"html_url": "https://github.com/huggingface/datasets/pull/3116",
"diff_url": "https://github.com/huggingface/datasets/pull/3116.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3116.patch",
"merged_at": 1634891205000
} | This PR:
* updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template)
* fixes some broken links in the `.rst` files (fixed with the `make linkcheck` tool) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3116/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3115/comments | https://api.github.com/repos/huggingface/datasets/issues/3115/events | https://github.com/huggingface/datasets/pull/3115 | 1,030,737,524 | PR_kwDODunzps4tZ-Vr | 3,115 | Fill in dataset card for NCBI disease dataset | {
"login": "edugp",
"id": 17855740,
"node_id": "MDQ6VXNlcjE3ODU1NzQw",
"avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edugp",
"html_url": "https://github.com/edugp",
"followers_url": "https://api.github.com/users/edugp/followers",
"following_url": "https://api.github.com/users/edugp/following{/other_user}",
"gists_url": "https://api.github.com/users/edugp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edugp/subscriptions",
"organizations_url": "https://api.github.com/users/edugp/orgs",
"repos_url": "https://api.github.com/users/edugp/repos",
"events_url": "https://api.github.com/users/edugp/events{/privacy}",
"received_events_url": "https://api.github.com/users/edugp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,677,025,000 | 1,634,891,107,000 | 1,634,891,107,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3115",
"html_url": "https://github.com/huggingface/datasets/pull/3115",
"diff_url": "https://github.com/huggingface/datasets/pull/3115.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3115.patch",
"merged_at": 1634891107000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3115/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3114/comments | https://api.github.com/repos/huggingface/datasets/issues/3114/events | https://github.com/huggingface/datasets/issues/3114 | 1,030,693,130 | I_kwDODunzps49byEK | 3,114 | load_from_disk in DatasetsDict/Dataset not working with PyArrowHDFS wrapper implementing fsspec.spec.AbstractFileSystem | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Can you try again with pyarrow 6.0.0 ? I think it includes some changes regarding filesystems compatibility with fsspec.",
"Hi @lhoestq! I ended up using `fsspec.implementations.arrow.HadoopFileSystem` which doesn't have the problem I described with pyarrow 5.0.0.\r\n\r\nI'll try again with `PyArrowHDFS` once I update arrow to 6.0.0.\r\n\r\nThanks!"
] | 1,634,673,705,000 | 1,644,847,228,000 | 1,644,847,228,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
Passing a `PyArrowHDFS` implementation of `fsspec.spec.AbstractFileSystem` as the `fs` param required by the `load_from_disk` methods in `DatasetDict` (in dataset_dict.py) and `Dataset` (in arrow_dataset.py) results in an error when the download method is called on the `fs` object.
## Steps to reproduce the bug
The documentation for the `fs` parameter states:
```
fs (:class:`~filesystems.S3FileSystem` or ``fsspec.spec.AbstractFileSystem``, optional, default ``None``):
Instance of the remote filesystem used to download the files from.
```
`PyArrowHDFS` from [fsspec](https://filesystem-spec.readthedocs.io/en/latest/_modules/fsspec/implementations/hdfs.html) implements `fsspec.spec.AbstractFileSystem`. However, when using it as shown below, I get an error.
```python
from fsspec.implementations.hdfs import PyArrowHDFS
...
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
```
## Expected results
Prior to loading from disk, I had successfully stored the data and meta-information of a DatasetDict in HDFS by doing:
```python
transformed_corpus_path = "/user/my_user/clickbait/transformed_ds/"
fs = PyArrowHDFS(host, port, user, kerb_ticket=kerb_ticket)
my_datasets.save_to_disk(transformed_corpus_path, fs=fs)
```
As I have 3 datasets in the DatasetDict named `my_datasets`, the previous Python code creates the following contents in HDFS:
```sh
$ hadoop fs -ls "/user/my_user/clickbait/transformed_ds/"
Found 4 items
-rw------- 3 my_user users 43 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/dataset_dict.json
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/test
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/train
drwx------ - my_user users 0 2021-10-19 03:08 /user/my_user/clickbait/transformed_ds/validation
```
When invoking `DatasetDict.load_from_disk(...)` as described above, I would expect to recover in `dss` the Arrow-backed datasets I previously saved to HDFS by calling the `save_to_disk` method on the `DatasetDict` object.
## Actual results
However, when trying to recover the saved datasets, I get this error:
```
...
File "/home/fperez/dev/neuromancer/neuromancer/corpus.py", line 186, in load_transformed_corpus_from_disk
dss = DatasetDict.load_from_disk(transformed_corpus_path, fs, True)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/dataset_dict.py", line 748, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/home/fperez/anaconda3/envs/neuromancer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1048, in load_from_disk
fs.download(src_dataset_path, dataset_path.as_posix(), recursive=True)
File "pyarrow/_hdfsio.pyx", line 438, in pyarrow._hdfsio.HadoopFileSystem.download
TypeError: download() got an unexpected keyword argument 'recursive'
```
Examining the [signature of the `download` method in pyarrow 5.0.0](https://github.com/apache/arrow/blob/54d2bd89c99df72fa091b025452f85dd5d88e3cf/python/pyarrow/_hdfsio.pyx#L438), we can see that it has no `recursive` parameter:
```python
def download(self, path, stream, buffer_size=None):
with self.open(path, 'rb') as f:
f.download(stream, buffer_size=buffer_size)
```
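For anyone hitting the same error before a fix lands: the follow-up comments mention that `fsspec.implementations.arrow.HadoopFileSystem` avoids this code path. A minimal sketch, assuming the same connection parameters as the snippets above (host, port, user and Kerberos ticket are placeholders):
```python
# Workaround sketch from the comments: use fsspec's own Hadoop wrapper
# instead of PyArrowHDFS. Connection values are placeholders.
from fsspec.implementations.arrow import HadoopFileSystem
from datasets import DatasetDict

fs = HadoopFileSystem(host, port, user, kerb_ticket=kerb_ticket)
dss = DatasetDict.load_from_disk("/user/my_user/clickbait/transformed_ds/", fs=fs)
```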
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3114/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3113/comments | https://api.github.com/repos/huggingface/datasets/issues/3113/events | https://github.com/huggingface/datasets/issues/3113 | 1,030,667,547 | I_kwDODunzps49br0b | 3,113 | Loading Data from HDF files | {
"login": "FeryET",
"id": 30388648,
"node_id": "MDQ6VXNlcjMwMzg4NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/30388648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FeryET",
"html_url": "https://github.com/FeryET",
"followers_url": "https://api.github.com/users/FeryET/followers",
"following_url": "https://api.github.com/users/FeryET/following{/other_user}",
"gists_url": "https://api.github.com/users/FeryET/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FeryET/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FeryET/subscriptions",
"organizations_url": "https://api.github.com/users/FeryET/orgs",
"repos_url": "https://api.github.com/users/FeryET/repos",
"events_url": "https://api.github.com/users/FeryET/events{/privacy}",
"received_events_url": "https://api.github.com/users/FeryET/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | open | false | null | [] | null | [
"I'm currently working on bringing [Ecoset](https://www.pnas.org/doi/10.1073/pnas.2011417118) to huggingface datasets and I would second this request...",
"I would also like this support or something similar. Geospatial datasets come in netcdf which is derived from hdf5, or zarr. I've gotten zarr stores to work with datasets and streaming, but it takes awhile to convert the data to zarr if it's not stored in that natively. "
] | 1,634,671,606,000 | 1,655,333,572,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
More often than not I come across big HDF datasets, and currently there is no straightforward way to feed them into a dataset.
**Describe the solution you'd like**
I would love to see a `from_h5` method that accepts a user-implemented interface describing how items are extracted from the file (in case of multiple datasets containing elements like arrays, metadata, etc.).
**Describe alternatives you've considered**
Currently I manually load HDF files using `h5py` and implement the PyTorch dataset interface. For small HDF5 files I load them into a pandas dataframe and use the `from_pandas` function in the `datasets` package, but for big datasets this is not feasible.
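For illustration, a minimal sketch of that manual route, assuming a hypothetical file `data.h5` with two HDF5 datasets (`features` and `labels`) small enough to fit in memory:
```python
# Read an HDF5 file with h5py and build an in-memory Dataset.
# "data.h5", "features" and "labels" are placeholder names.
import h5py
from datasets import Dataset

with h5py.File("data.h5", "r") as f:
    ds = Dataset.from_dict(
        {
            "features": f["features"][:].tolist(),  # numpy arrays -> Python lists
            "labels": f["labels"][:].tolist(),
        }
    )
```
A built-in `from_h5` could read chunks lazily instead of materializing everything, which is exactly what the manual route cannot do for big files.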
**Additional context**
HDF files are widespread throughout different domains and are one of the go-to formats for many researchers/scientists/engineers who work with numerical data. Given that `datasets`' use cases have outgrown NLP, it would make a lot of sense to focus on things like supporting HDF files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3113/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3113/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3112/comments | https://api.github.com/repos/huggingface/datasets/issues/3112/events | https://github.com/huggingface/datasets/issues/3112 | 1,030,613,083 | I_kwDODunzps49behb | 3,112 | OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.",
"fixed",
"Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:",
"Actually this is is a live bug, documented yet still live so reopening"
] | 1,634,667,701,000 | 1,634,669,549,000 | null | NONE | null | null | null | ## Describe the bug
Despite batches being way under 2 GB when running `datasets.map()`, and after the first batch is processed correctly without fuss, irrespective of `writer_batch_size` (2, 4, 8, 16, 32, 64 and 128 in my case), it returns the following error:
> OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
Note that I always run with `batch_size=writer_batch_size`:
## Steps to reproduce the bug
```python
datasets.map(lambda example : {"column_name" : function(arguments)}, batched=False, remove_columns = datasets.column_names, batch_size=batch_size, writer_batch_size=batch_size, disable_nullable=True, num_proc=None, desc="blablabla")
```
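The follow-up comments point to float arrays full of NaNs as the trigger for this message, so a hypothetical guard inside `function(arguments)` could surface the problem earlier (all names below are placeholders, not the original code):
```python
# Hypothetical NaN check before the mapped output is written to Arrow;
# compute_embeddings stands in for the real computation.
import numpy as np

def function(arguments):
    embeddings = compute_embeddings(arguments)
    if np.isnan(embeddings).any():
        raise ValueError("NaNs in mapped output; fix upstream before writing")
    return embeddings
```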
## Introspecting CUDA memory during the bug
I placed the following statement within `function(arguments)` to introspect memory usage; it reports only a little over a quarter of 2 GB allocated:
`print(torch.cuda.memory_summary(device=device, abbreviated=False))`
```
|===========================================================================|
| PyTorch CUDA memory summary, device ID 0 |
|---------------------------------------------------------------------------|
| CUDA OOMs: 0 | cudaMalloc retries: 0 |
|===========================================================================|
| Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed |
|---------------------------------------------------------------------------|
| Allocated memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| Active memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB |
| from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB |
| from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB |
|---------------------------------------------------------------------------|
| GPU reserved memory | 598016 KB | 598016 KB | 598016 KB | 0 B |
| from large pool | 595968 KB | 595968 KB | 595968 KB | 0 B |
| from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B |
|---------------------------------------------------------------------------|
| Non-releasable memory | 36117 KB | 52292 KB | 274275 KB | 238158 KB |
| from large pool | 34816 KB | 51537 KB | 261713 KB | 226897 KB |
| from small pool | 1301 KB | 2045 KB | 12562 KB | 11261 KB |
|---------------------------------------------------------------------------|
| Allocations | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| Active allocs | 198 | 224 | 478 | 280 |
| from large pool | 74 | 75 | 75 | 1 |
| from small pool | 124 | 150 | 403 | 279 |
|---------------------------------------------------------------------------|
| GPU reserved segments | 21 | 21 | 21 | 0 |
| from large pool | 20 | 20 | 20 | 0 |
| from small pool | 1 | 1 | 1 | 0 |
|---------------------------------------------------------------------------|
| Non-releasable allocs | 18 | 23 | 166 | 148 |
| from large pool | 17 | 18 | 19 | 2 |
| from small pool | 1 | 6 | 147 | 146 |
|===========================================================================|
```
## Expected results
Efficiently process the datasets and write them to disk.
## Actual results
```
--------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2390 else:
-> 2391 writer.write(example)
2392 else:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write(self, example, key, writer_batch_size)
367
--> 368 self.write_examples_on_file()
369
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
During handling of the above exception, another exception occurred:
OverflowError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_16268/2456940807.py in <module>
3 #tracker = OfflineEmissionsTracker(country_iso_code="FRA", project_name='xxx'+time_stamp,output_dir='./codecarbon')
4 #tracker.start()
----> 5 process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection=['wikipedia'], from_scratch=True,
6 clean_sentences=False, negative_sampling=False, translate=False, tokenize=False, generate_embeddings=True, concatenate_embeddings=False,
7 max_sample=10000, padding='do_not_pad', truncation=True, cpu_batch_size=1000, gpu_batch_size=2, cpu_writer_batch_size=1000, gpu_writer_batch_size=2, disable_nullable=True, num_proc=None) #
~\xxx\xxx.py in process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection, from_scratch, clean_sentences, translate, negative_sampling, tokenize, generate_embeddings, concatenate_embeddings, max_sample, padding, truncation, cpu_batch_size, gpu_batch_size, cpu_writer_batch_size, gpu_writer_batch_size, disable_nullable, num_proc)
481 for column in tqdm(dataset.column_names, desc=f'Processing column', leave=False):
482 if "xxx_" in column:
--> 483 dataset = dataset.map(lambda example :
484 {"embeddings_"+str(column).replace("translated_",""):function(input_ids=example[column],
485 token_type_ids=example[column.replace("input_ids","token_type_ids")],
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2034
2035 if num_proc is None or num_proc == 1:
-> 2036 return self._map_single(
2037 function=function,
2038 with_indices=with_indices,
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~\anaconda3\envs\xxx\lib\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2425 if update_data:
2426 if writer is not None:
-> 2427 writer.finalize()
2428 if tmp_file is not None:
2429 tmp_file.close()
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in finalize(self, close_stream)
440 # Re-intializing to empty list for next batch
441 self.hkey_record = []
--> 442 self.write_examples_on_file()
443 if self.pa_writer is None:
444 if self._schema is not None:
~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self)
315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:
316 if not isinstance(pa_array[0], pa.lib.FloatScalar):
--> 317 raise OverflowError(
318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format(
319 type(pa_array)
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
```
## Environment info
- `datasets` version: 1.13.3
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
##Next steps
Testing on Linux.
@albertvillanova
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3112/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3111/comments | https://api.github.com/repos/huggingface/datasets/issues/3111/events | https://github.com/huggingface/datasets/issues/3111 | 1,030,598,983 | I_kwDODunzps49bbFH | 3,111 | concatenate_datasets removes ClassLabel typing. | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1"
] | 1,634,666,731,000 | 1,634,827,821,000 | 1,634,827,821,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
When concatenating two datasets, the `ClassLabel` typing of columns is lost.
I can work on this if it is confirmed as a legitimate bug.
## Steps to reproduce the bug
```python
import datasets
from datasets import Dataset, ClassLabel, Value, concatenate_datasets
DS_LEN = 100
my_dataset = Dataset.from_dict(
{
"sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)],
"label": [i % 2 for i in range(DS_LEN)]
}
)
my_predictions = Dataset.from_dict(
{
"pred": [(i + 1) % 2 for i in range(DS_LEN)]
}
)
my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])}))
print("Original")
print(my_dataset)
print(my_dataset.features)
concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1)
print("Concatenated")
print(concat_ds)
print(concat_ds.features)
```
## Expected results
The features of `concat_ds` should retain the `ClassLabel` type for the `label` column.
## Actual results
On master, I get:
```
{'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)}
```
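Until this is fixed, a possible workaround is to re-cast the concatenated dataset so the `label` column regains its `ClassLabel` type (a sketch continuing the snippet above):
```python
# Re-apply the ClassLabel feature after concatenation (workaround sketch).
concat_ds = concat_ds.cast(
    datasets.Features(
        {
            "sentence": Value("string"),
            "label": ClassLabel(2, names=["POS", "NEG"]),
            "pred": Value("int64"),
        }
    )
)
print(concat_ds.features)  # "label" should be a ClassLabel again
```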
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3111/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3110/comments | https://api.github.com/repos/huggingface/datasets/issues/3110/events | https://github.com/huggingface/datasets/pull/3110 | 1,030,558,484 | PR_kwDODunzps4tZakS | 3,110 | Stream TAR-based dataset using iter_archive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first",
"The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR"
] | 1,634,663,784,000 | 1,636,134,529,000 | 1,636,134,528,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3110",
"html_url": "https://github.com/huggingface/datasets/pull/3110",
"diff_url": "https://github.com/huggingface/datasets/pull/3110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3110.patch",
"merged_at": 1636134528000
} | I converted all the datasets based on TAR archives to use `iter_archive` instead, so that they can be streamed.
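For context, the pattern looks roughly like this inside a loader's `_generate_examples` (a simplified sketch; `archive` comes from `dl_manager.iter_archive(...)` and the `.txt` filter is a placeholder):
```python
# Consume a TAR archive sequentially, which is what makes streaming possible.
def _generate_examples(self, archive):
    for key, (path, file) in enumerate(archive):
        if path.endswith(".txt"):  # placeholder filter for the files of interest
            yield key, {"text": file.read().decode("utf-8")}
```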
It means that around 80 datasets become streamable :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3110/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3109/comments | https://api.github.com/repos/huggingface/datasets/issues/3109/events | https://github.com/huggingface/datasets/pull/3109 | 1,030,543,284 | PR_kwDODunzps4tZXmC | 3,109 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,662,771,000 | 1,634,663,608,000 | 1,634,663,607,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3109",
"html_url": "https://github.com/huggingface/datasets/pull/3109",
"diff_url": "https://github.com/huggingface/datasets/pull/3109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3109.patch",
"merged_at": 1634663607000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3109/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3108/comments | https://api.github.com/repos/huggingface/datasets/issues/3108/events | https://github.com/huggingface/datasets/pull/3108 | 1,030,405,618 | PR_kwDODunzps4tY8ID | 3,108 | Add Google BLEU (aka GLEU) metric | {
"login": "slowwavesleep",
"id": 44175589,
"node_id": "MDQ6VXNlcjQ0MTc1NTg5",
"avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slowwavesleep",
"html_url": "https://github.com/slowwavesleep",
"followers_url": "https://api.github.com/users/slowwavesleep/followers",
"following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}",
"gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions",
"organizations_url": "https://api.github.com/users/slowwavesleep/orgs",
"repos_url": "https://api.github.com/users/slowwavesleep/repos",
"events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}",
"received_events_url": "https://api.github.com/users/slowwavesleep/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,654,918,000 | 1,635,170,824,000 | 1,635,170,824,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3108",
"html_url": "https://github.com/huggingface/datasets/pull/3108",
"diff_url": "https://github.com/huggingface/datasets/pull/3108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3108.patch",
"merged_at": 1635170824000
} | This PR adds the NLTK implementation of the Google BLEU metric. This is also part of an effort to resolve an unfortunate naming collision between GLEU for machine translation and GLEU for grammatical error correction.
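For a sense of the underlying scorer, here is a quick sketch of the NLTK call this wraps, assuming pre-tokenized inputs (not necessarily the final metric interface):
```python
# Google BLEU via NLTK: references is a list of tokenized reference sentences.
from nltk.translate.gleu_score import sentence_gleu

references = [["the", "cat", "sat", "on", "the", "mat"]]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]
print(sentence_gleu(references, hypothesis))  # a float in [0, 1]
```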
I used [this page](https://huggingface.co/docs/datasets/add_metric.html) for reference. Please point me in the right direction if I missed anything. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3108/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3107/comments | https://api.github.com/repos/huggingface/datasets/issues/3107/events | https://github.com/huggingface/datasets/pull/3107 | 1,030,357,527 | PR_kwDODunzps4tYyhF | 3,107 | Add paper BibTeX citation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,652,491,000 | 1,634,653,582,000 | 1,634,653,581,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3107",
"html_url": "https://github.com/huggingface/datasets/pull/3107",
"diff_url": "https://github.com/huggingface/datasets/pull/3107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3107.patch",
"merged_at": 1634653581000
} | Add the paper's BibTeX citation to the README file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3107/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3106/comments | https://api.github.com/repos/huggingface/datasets/issues/3106/events | https://github.com/huggingface/datasets/pull/3106 | 1,030,112,473 | PR_kwDODunzps4tYA6i | 3,106 | Fix URLs in blog_authorship_corpus dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,637,965,000 | 1,634,647,840,000 | 1,634,647,839,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3106",
"html_url": "https://github.com/huggingface/datasets/pull/3106",
"diff_url": "https://github.com/huggingface/datasets/pull/3106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3106.patch",
"merged_at": 1634647839000
} | After we contacted the authors of the paper "Effects of Age and Gender on Blogging", they confirmed:
- the old URLs are no longer valid
- there are alternative host URLs
Fix #3091. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3106/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3105/comments | https://api.github.com/repos/huggingface/datasets/issues/3105/events | https://github.com/huggingface/datasets/issues/3105 | 1,029,098,843 | I_kwDODunzps49Vs1b | 3,105 | download_mode=`force_redownload` does not work on removed datasets | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | null | [] | null | [] | 1,634,562,758,000 | 1,634,895,370,000 | null | CONTRIBUTOR | null | null | null | ## Describe the bug
If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead.
## Steps to reproduce the bug
_requires `wit` to already be in the cache_: see https://github.com/huggingface/datasets/pull/2981
```python
import datasets as ds
dataset = ds.load_dataset("wit", split="train", download_mode='force_redownload')
```
## Expected results
It should raise an exception, since the dataset does not exist anymore.
## Actual results
It uses the cached result
```
Using the latest cached version of the module from /home/slesage/.cache/huggingface/modules/datasets_modules/datasets/wit/107afbffd48e058b19101bddc47fbee25fa68eb6d50a733e262875f1285a5171 (last modified on Wed Sep 29 08:21:10 2021) since it couldn't be found locally at wit, or remotely on the Hugging Face Hub.
```
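In the meantime, a manual workaround is to delete the cached module directory shown in the log above (a sketch; the path is machine-specific):
```python
# Remove the cached loader module so it can no longer be resolved locally.
import shutil
from pathlib import Path

cache_dir = Path.home() / ".cache/huggingface/modules/datasets_modules/datasets/wit"
shutil.rmtree(cache_dir, ignore_errors=True)
```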
## Environment info
- `datasets` version: 1.13.4.dev0
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3105/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3104/comments | https://api.github.com/repos/huggingface/datasets/issues/3104/events | https://github.com/huggingface/datasets/issues/3104 | 1,029,080,412 | I_kwDODunzps49VoVc | 3,104 | Missing Zenodo 1.13.3 release | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150"
] | 1,634,561,838,000 | 1,634,908,945,000 | 1,634,908,944,000 | MEMBER | null | null | null | After the `datasets` 1.13.3 release, it does not appear among the Zenodo releases: https://zenodo.org/record/5570305
TODO:
- [x] Contact Zenodo support
- [x] Check it is fixed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3104/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3103/comments | https://api.github.com/repos/huggingface/datasets/issues/3103/events | https://github.com/huggingface/datasets/pull/3103 | 1,029,069,310 | PR_kwDODunzps4tUzJQ | 3,103 | Fix project description in PyPI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,561,249,000 | 1,634,561,997,000 | 1,634,561,996,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3103",
"html_url": "https://github.com/huggingface/datasets/pull/3103",
"diff_url": "https://github.com/huggingface/datasets/pull/3103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3103.patch",
"merged_at": 1634561996000
} | Fix the project description appearing on PyPI so that it contains the content of the README.md file (like transformers).
Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
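For reference, the usual setuptools fix looks like this (a sketch, not necessarily the exact diff in this PR):
```python
# Point long_description at the README so PyPI renders it as the project page.
from setuptools import setup

setup(
    name="datasets",
    long_description=open("README.md", encoding="utf-8").read(),
    long_description_content_type="text/markdown",
    # ... other arguments unchanged
)
```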
Fix #3102. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3103/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3102/comments | https://api.github.com/repos/huggingface/datasets/issues/3102/events | https://github.com/huggingface/datasets/issues/3102 | 1,029,067,062 | I_kwDODunzps49VlE2 | 3,102 | Unsuitable project description in PyPI | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,561,100,000 | 1,634,561,996,000 | 1,634,561,996,000 | MEMBER | null | null | null | Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3102/timeline | null | completed | false |
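For the PyPI description issue above, a hedged sketch of the usual fix — pointing `long_description` at the repo README instead of the release instructions (field values are assumptions, not the actual `setup.py`):
```python
# a minimal sketch, assuming the README is what should appear on PyPI
from setuptools import setup

setup(
    name="datasets",
    long_description=open("README.md", encoding="utf-8").read(),
    long_description_content_type="text/markdown",
)
```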
https://api.github.com/repos/huggingface/datasets/issues/3101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3101/comments | https://api.github.com/repos/huggingface/datasets/issues/3101/events | https://github.com/huggingface/datasets/pull/3101 | 1,028,966,968 | PR_kwDODunzps4tUelE | 3,101 | Update SUPERB to use Audio features | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you! Sorry I forgot this one @albertvillanova"
] | 1,634,555,118,000 | 1,634,560,434,000 | 1,634,558,806,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3101",
"html_url": "https://github.com/huggingface/datasets/pull/3101",
"diff_url": "https://github.com/huggingface/datasets/pull/3101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3101.patch",
"merged_at": 1634558806000
} | This is the same dataset refresh as the other Audio ones: https://github.com/huggingface/datasets/pull/3081
cc @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3101/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3101/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3100/comments | https://api.github.com/repos/huggingface/datasets/issues/3100/events | https://github.com/huggingface/datasets/pull/3100 | 1,028,738,180 | PR_kwDODunzps4tTwpn | 3,100 | Replace FSTimeoutError with parent TimeoutError | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,542,629,000 | 1,634,543,515,000 | 1,634,543,514,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3100",
"html_url": "https://github.com/huggingface/datasets/pull/3100",
"diff_url": "https://github.com/huggingface/datasets/pull/3100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3100.patch",
"merged_at": 1634543514000
} | PR #3050 introduced a dependency on `fsspec.FSTimeoutError`. Note that this error only exists from `fsspec` version `2021.06.0` (June 2021).
To fix #3097, there are 2 alternatives:
- Either pinning `fsspec` to versions newer or equal to `2021.06.0`
- Or replacing `fsspec.FSTimeoutError` with its parent `asyncio.TimeoutError`, which exists since Python 3.8.0 (Oct 2019).
This PR implements the second approach.
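A minimal sketch of the second approach (the helper below is illustrative, not the actual call site in `streaming_download_manager.py`):
```python
import asyncio

import fsspec

def fetch(url: str, path: str) -> None:
    """Illustrative download helper, not the actual datasets code."""
    try:
        with fsspec.open(url, "rb") as f:
            data = f.read()
    except asyncio.TimeoutError:
        # asyncio.TimeoutError is the parent of fsspec.FSTimeoutError, so old
        # fsspec releases (which lack fsspec.exceptions) are still supported
        raise ConnectionError(f"Server timed out while streaming {url}")
    with open(path, "wb") as out:
        out.write(data)
```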
Fix #3097. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3100/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3099/comments | https://api.github.com/repos/huggingface/datasets/issues/3099/events | https://github.com/huggingface/datasets/issues/3099 | 1,028,338,078 | I_kwDODunzps49SzGe | 3,099 | AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' | {
"login": "JTWang2000",
"id": 49268567,
"node_id": "MDQ6VXNlcjQ5MjY4NTY3",
"avatar_url": "https://avatars.githubusercontent.com/u/49268567?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JTWang2000",
"html_url": "https://github.com/JTWang2000",
"followers_url": "https://api.github.com/users/JTWang2000/followers",
"following_url": "https://api.github.com/users/JTWang2000/following{/other_user}",
"gists_url": "https://api.github.com/users/JTWang2000/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JTWang2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JTWang2000/subscriptions",
"organizations_url": "https://api.github.com/users/JTWang2000/orgs",
"repos_url": "https://api.github.com/users/JTWang2000/repos",
"events_url": "https://api.github.com/users/JTWang2000/events{/privacy}",
"received_events_url": "https://api.github.com/users/JTWang2000/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 8544\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 1101\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 2210\r\n })\r\n})\r\n```\r\n\r\nMaybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n```\r\npip install -U huggingface_hub\r\n```",
"Im facing the same issue. I did run the upgrade command but that doesnt seem to resolve the issue",
"Hi @aneeshjain, could you please specify which `huggingface_hub` version you are using?\r\n\r\nBesides that, please run `datasets-cli env` and copy-and-paste its output below.",
"The problem seems to be with the latest version of `datasets`. After running `pip install -U datasets huggingface_hub`, I get the following: \r\n\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/data_files.py\", line 122, in <module>\r\n allowed_extensions: Optional[list] = None,\r\nAttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'\r\n````\r\nNote that pip reports the latest `datasets` version as \r\n```bash\r\n pip show datasets\r\nName: datasets\r\nVersion: 1.14.0\r\n```\r\nHowever, if I downgrade datasets with `pip install datasets==1.11.0`, things now work\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\ndvers=1.11.0\r\n````",
"> Hi @JTWang2000, thanks for reporting.\r\n> \r\n> However, I cannot reproduce your reported bug:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> \r\n> >>> dataset = load_dataset(\"sst\", \"default\")\r\n> >>> dataset\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 8544\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 1101\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 2210\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Maybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n> \r\n> ```\r\n> pip install -U huggingface_hub\r\n> ```\r\n\r\nMy problem solved after updating huggingface hub command. Thanks a lot and sorry for the late reply. ",
"@tjruwase, please note that versions of `datsets` and `huggingface_hub` must be compatible one with each other:\r\n- In `datasets` version `1.11.0`, the requirement on `huggingface_hub` was: `huggingface_hub<0.1.0`\r\n https://github.com/huggingface/datasets/blob/2cc00f372a96133e701275eec4d6b26d15257289/setup.py#L90\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was compatible\r\n- In `datasets` version `1.12.0`, the requirement on `huggingface_hub` was: `huggingface_hub>=0.0.14,<0.1.0`\r\n https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/setup.py#L104\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was no longer compatible \r\n- Currently, in `datasets` version `1.15.1`, the requirement on `huggingface_hub` is: `huggingface_hub>=0.1.0,<1.0.0`\r\n https://github.com/huggingface/datasets/blob/018100679d21cf27136f0eccb1c50e3a9c968ce2/setup.py#L102\r\n\r\n@JTWang2000, thanks for your answer. I close this issue then."
] | 1,634,480,267,000 | 1,636,476,149,000 | 1,636,476,148,000 | NONE | null | null | null | ## Describe the bug
When using `pip install datasets` or `conda install -c huggingface -c conda-forge datasets`, importing `datasets` fails with the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("sst", "default")
```
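A quick, hedged sanity check of the two installed versions whose mismatch is discussed in this thread:
```python
import huggingface_hub
print(huggingface_hub.__version__)  # needs >=0.0.14 for datasets>=1.12.0, per setup.py

import datasets  # raises the AttributeError below when the version pin is violated
print(datasets.__version__)
```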
## Actual results
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-fbe7981e6e21> in <module>
1 import torch
2 import transformers
----> 3 from datasets import load_dataset
4
5 dataset = load_dataset("sst", "default")
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/__init__.py in <module>
35 from .arrow_reader import ArrowReader, ReadInstruction
36 from .arrow_writer import ArrowWriter
---> 37 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
38 from .combine import interleave_datasets
39 from .dataset_dict import DatasetDict, IterableDatasetDict
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/builder.py in <module>
42 )
43 from .arrow_writer import ArrowWriter, BeamWriter
---> 44 from .data_files import DataFilesDict, _sanitize_patterns
45 from .dataset_dict import DatasetDict, IterableDatasetDict
46 from .fingerprint import Hasher
~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/data_files.py in <module>
118
119 def _exec_patterns_in_dataset_repository(
--> 120 dataset_info: huggingface_hub.hf_api.DatasetInfo,
121 patterns: List[str],
122 allowed_extensions: Optional[list] = None,
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.3
- Platform: macOS-11.3.1-arm64-arm-64bit
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3099/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3099/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3098/comments | https://api.github.com/repos/huggingface/datasets/issues/3098/events | https://github.com/huggingface/datasets/pull/3098 | 1,028,210,790 | PR_kwDODunzps4tSRSZ | 3,098 | Push to hub capabilities for `Dataset` and `DatasetDict` | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's still waiting for #3027 to be addressed as the folder name will dictate the split name\r\n- The `self.split` name is set to `None` when the dataset dict is instantiated as follows:\r\n```py\r\nds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\nlocal_ds = DatasetDict({\"random\": ds})\r\n\r\nlocal_ds['random'].split # returns None\r\n```\r\nIn order to remove the `split=key` I would need to know of a different way to test here as it relies on the above as a surefire way of constructing a `DatasetDict`.\r\n- Finally, the `threading` parameter is flaky on moon-staging which results in many errors server side. I propose to leave it as an argument instead of having it having it set to `True` so that users may toggle it according to their wish. ",
"Currently it looks like it only saves the last split.\r\nIndeed when writing the data of one split, it deletes all the other files from the other splits\r\n```python\r\n>>> dataset.push_to_hub(\"lhoestq/squad_titles\", shard_size=50<<10) \r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 100%|█| 31/31 [00:22<00:00, 1.38\r\nPushing split validation to the Hub.\r\nThe repository already exists: the `private` keyword argument will be ignored.\r\nDeleting unused files from dataset repository: 100%|█| 31/31 [00:14<00:00, \r\nPushing dataset shards to the dataset hub: 100%|█| 4/4 [00:03<00:00, 1.18it\r\n```\r\nNote the \"Deleting\" part.",
"I think this PR should fix #3035, so feel free to link it. ",
"Thank you for your comments! I have rebased on `master` to have PR #3221. I've updated all tests to reflect the `-` instead of the `_` in the filenames.\r\n\r\n@lhoestq, I have fixed the issue with splits and added a corresponding test.\r\n\r\n@mariosasko I have not updated the `load_dataset` method to work differently, so I don't expect #3035 to be resolved with `push_to_hub`.\r\n\r\nOnly remaining issues before merging:\r\n- Take a good look at the `threading` and if that's something we want to keep.\r\n- As mentioned above:\r\n>The self.split name is set to None when the dataset dict is instantiated as follows:\r\n> ```\r\n> ds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\n> local_ds = DatasetDict({\"random\": ds})\r\n> \r\n> local_ds['random'].split # returns None\r\n> ```\r\nI need to understand how to build a `DatasetDict` from some `Dataset` objects to be able to leverage the `split` parameter in `DatasetDict.push_to_hub`",
"Cool thanks ! And indeed this won't solve https://github.com/huggingface/datasets/issues/3035 yet\r\n\r\n> I need to understand how to build a DatasetDict from some Dataset objects to be able to leverage the split parameter in DatasetDict.push_to_hub\r\n\r\nYou can use the key in the DatasetDict instead of the `split` attribute",
"What do you think about bumping the minimum version of pyarrow to 3.0.0 ? This is the minimum required version to write parquet files, which is needed for push_to_hub. That's why our pyarrow 1 CI is failing.\r\n\r\nI think it's fine since it's been available for a long time (january 2021) and it's also the version that is installed on Google Colab.",
"Pushing pyarrow to 3.0.0 is fine for me. I don’t think we need to keep a lot of backward support for pyarrow.",
"Hi.\r\nI published in the forum about my experience with `DatasetDict.push_to_hub()`: here is my [post.](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/4)\r\nOn my side, there is a problem as my train and validation `Datasets` are concatenated when I do a `load_dataset()` from the `DatasetDict` I pushed to the HF datasets hub.",
"Hi ! Let me respond here as well in case other people have the same issues and come here:\r\n\r\n`push_to_hub` was introduced in `datasets` 1.16, and to be able to properly load a dataset with separated splits you need to have `datasets>=1.16.0` as well. \r\n\r\nOld version of `datasets` used to concatenate everything in the `train` split."
] | 1,634,443,964,000 | 1,638,979,490,000 | 1,637,753,136,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3098",
"html_url": "https://github.com/huggingface/datasets/pull/3098",
"diff_url": "https://github.com/huggingface/datasets/pull/3098.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3098.patch",
"merged_at": 1637753136000
} | This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. This does not currently work for `IterableDatasetDict` or `IterableDataset`, as those are simple dicts; I would like your opinion on how you would like this implemented before going ahead and doing it.
This implementation needs to be used with the following `huggingface_hub` branch in order to work correctly: https://github.com/huggingface/huggingface_hub/pull/415
### Implementation
The `push_to_hub` API is entirely based on HTTP requests rather than a git-based workflow:
- This allows pushing changes without first cloning the repository, which cuts the time of the `push_to_hub` method in half.
- Collaboration, as well as the system of branches/merges/rebases, is IMO less straightforward than for models and spaces. Where such collaboration is needed, I would *heavily* advocate using the `Repository` helper of `huggingface_hub` instead of the `push_to_hub` method, which will always be, by design, limiting in that regard (even if based on a git workflow instead of HTTP requests).
In order to overcome the limit of 5GB files set by the HTTP requests, dataset sharding is used.
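As a hedged usage sketch (the repo id is illustrative; `shard_size` also appears in the review discussion below):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]})
# each uploaded shard stays under shard_size bytes, keeping every
# file below the 5GB HTTP upload limit mentioned above
ds.push_to_hub("username/my-dataset", private=True, shard_size=500 << 20)
```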
### Testing
The test suite implemented here makes use of the moon-staging instead of the production setup. As several repositories are created and deleted, it is better to use the staging.
It does not require setting an environment variable or any kind of special attention but introduces a new decorator `with_staging_testing` which patches global variables to use the staging endpoint instead of the production endpoint.
### Examples
The test suite covers a wide range of examples and behaviors. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3098/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3098/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3097/comments | https://api.github.com/repos/huggingface/datasets/issues/3097/events | https://github.com/huggingface/datasets/issues/3097 | 1,027,750,811 | I_kwDODunzps49Qjub | 3,097 | `ModuleNotFoundError: No module named 'fsspec.exceptions'` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it."
] | 1,634,326,478,000 | 1,634,543,514,000 | 1,634,543,514,000 | MEMBER | null | null | null | ## Describe the bug
I keep running into an fsspec `ModuleNotFoundError`.
## Steps to reproduce the bug
```python
>>> from datasets import get_dataset_infos
2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-10-15 15:25:37.863252: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/__init__.py", line 37, in <module>
from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 56, in <module>
from .utils.streaming_download_manager import StreamingDownloadManager
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 11, in <module>
from fsspec.exceptions import FSTimeoutError
ModuleNotFoundError: No module named 'fsspec.exceptions'
```
Yet, I do have `fsspec`:
```bash
hf@victor-scale:~/dev/promptsource$ pip show fsspec
Name: fsspec
Version: 2021.5.0
Summary: File-system specification
Home-page: http://github.com/intake/filesystem_spec
Author: None
Author-email: None
License: BSD
Location: /home/hf/dev/promptsource/.venv/lib/python3.7/site-packages
Requires:
Required-by: datasets
```
With the same version of `fsspec` and `datasets==1.9.0`, I don't see this problem.
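A plausible workaround in the meantime, assuming the missing `fsspec.exceptions` module only ships with newer releases (PR #3100 above notes `FSTimeoutError` appeared in `2021.06.0`):
```bash
pip install -U "fsspec>=2021.06.0"
```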
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
Actually, I can't even run `datasets-cli env`, but here's my env:
- `datasets` version: 1.13.3
- Platform: Ubuntu 18.04
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3097/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3097/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3096/comments | https://api.github.com/repos/huggingface/datasets/issues/3096/events | https://github.com/huggingface/datasets/pull/3096 | 1,027,535,685 | PR_kwDODunzps4tQblQ | 3,096 | Fix Audio feature mp3 resampling | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,310,319,000 | 1,634,312,310,000 | 1,634,312,310,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3096",
"html_url": "https://github.com/huggingface/datasets/pull/3096",
"diff_url": "https://github.com/huggingface/datasets/pull/3096.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3096.patch",
"merged_at": 1634312309000
} | Issue #3095 is related to mp3 resampling, not to `cast_column`.
This PR fixes Audio feature mp3 resampling.
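A minimal sketch of the corrected call pattern, assuming the mp3 path decodes with `torchaudio` (the `TypeError` in #3095 suggests the sampling rates were being passed to `forward` instead of the constructor):
```python
import torchaudio

def resample(waveform, orig_sr: int, target_sr: int = 16_000):
    # Resample takes the rates at construction time; the module itself
    # is then called with only the waveform
    resampler = torchaudio.transforms.Resample(orig_freq=orig_sr, new_freq=target_sr)
    return resampler(waveform)
```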
Fix #3095. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3096/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3096/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3095/comments | https://api.github.com/repos/huggingface/datasets/issues/3095/events | https://github.com/huggingface/datasets/issues/3095 | 1,027,453,146 | I_kwDODunzps49PbDa | 3,095 | `cast_column` makes audio decoding fail | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it."
] | 1,634,305,018,000 | 1,634,312,310,000 | 1,634,312,310,000 | MEMBER | null | null | null | ## Describe the bug
After changing the sampling rate, automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
## Expected results
no failure
## Actual results
The `TypeError` traceback shown in the snippet above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3095/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3094 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3094/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3094/comments | https://api.github.com/repos/huggingface/datasets/issues/3094/events | https://github.com/huggingface/datasets/issues/3094 | 1,027,328,633 | I_kwDODunzps49O8p5 | 3,094 | Support loading a dataset from SQLite files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] | open | false | null | [] | null | [
"for reference Kaggle has a good number of open source datasets stored in sqlite\r\n\r\nAlternatively a tutorial or tool on how to convert from sqlite to parquet would be cool too"
] | 1,634,295,521,000 | 1,655,730,716,000 | null | MEMBER | null | null | null | As requested by @julien-c, we could eventually support loading a dataset from SQLite files, as is already the case for JSON/CSV files (a workaround sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3094/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3094/timeline | null | null | false |
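In the meantime, a hedged workaround for the SQLite request above, going through pandas (file and table names are illustrative):
```python
import sqlite3

import pandas as pd
from datasets import Dataset

conn = sqlite3.connect("data.db")  # hypothetical SQLite file
df = pd.read_sql_query("SELECT * FROM my_table", conn)  # hypothetical table
ds = Dataset.from_pandas(df)
```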
https://api.github.com/repos/huggingface/datasets/issues/3093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3093/comments | https://api.github.com/repos/huggingface/datasets/issues/3093/events | https://github.com/huggingface/datasets/issues/3093 | 1,027,262,124 | I_kwDODunzps49Osas | 3,093 | Error loading json dataset with multiple splits if keys in nested dicts have a different order | {
"login": "dthulke",
"id": 8331189,
"node_id": "MDQ6VXNlcjgzMzExODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8331189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dthulke",
"html_url": "https://github.com/dthulke",
"followers_url": "https://api.github.com/users/dthulke/followers",
"following_url": "https://api.github.com/users/dthulke/following{/other_user}",
"gists_url": "https://api.github.com/users/dthulke/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dthulke/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dthulke/subscriptions",
"organizations_url": "https://api.github.com/users/dthulke/orgs",
"repos_url": "https://api.github.com/users/dthulke/repos",
"events_url": "https://api.github.com/users/dthulke/events{/privacy}",
"received_events_url": "https://api.github.com/users/dthulke/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf = pd.read_json(buffer, lines=True)\r\n\r\nprint(df.shape[0]) # 0\r\n```\r\n\r\nSo we can't even fall back to Pandas in such cases.\r\n\r\nIt seems the only option is a script that recursively re-orders fields to enforce deterministic order:\r\n```python\r\nwith open(\"train.json\", \"r\") as fin:\r\n with open(\"train_reordered.json\", \"w\") as fout:\r\n for line in fin:\r\n obj_jsonl = json.loads(line.strip())\r\n fout.write(json.dumps(obj_jsonl, sort_keys=True) + \"\\n\")\r\n```",
"Fixed in #3575, so I'm closing this issue."
] | 1,634,290,405,000 | 1,649,599,589,000 | 1,649,599,589,000 | NONE | null | null | null | ## Describe the bug
Loading a JSON dataset with multiple splits whose nested dicts have keys in a different order results in the error below.
If the keys in the nested dicts always have the same order, or if you load only a single split (even one whose nested dicts don't share a key order), everything works fine.
## Steps to reproduce the bug
Create two json files:
train.json
```
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
```
test.json
```
{"a": {"b": 1, "c": 2}}
{"a": {"b": 3, "c": 4}}
```
```python
from datasets import load_dataset
# Loading the files individually works (even though the keys in train.json don't have the same order)
load_dataset('json', data_files={"test": "test.json"})
load_dataset('json', data_files={"train": "train.json"})
# Loading both splits fails
load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
```
## Expected results
Loading both splits should not give an error, whether or not the nested dicts have the same key order.
## Actual results
```
>>> load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
Using custom data configuration default-f1bc76fd07398c4c
Downloading and preparing dataset json/default to /home/dthulke/.cache/huggingface/datasets/json/default-f1bc76fd07398c4c/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8839.42it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 477.82it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.15.0-147-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3093/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3093/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3092/comments | https://api.github.com/repos/huggingface/datasets/issues/3092/events | https://github.com/huggingface/datasets/pull/3092 | 1,027,260,383 | PR_kwDODunzps4tPj6e | 3,092 | Fix JNLBA dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fix #3089.",
"@albertvillanova all tests are passing now. Either you or @lhoestq can review it!"
] | 1,634,290,274,000 | 1,634,891,037,000 | 1,634,891,037,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3092",
"html_url": "https://github.com/huggingface/datasets/pull/3092",
"diff_url": "https://github.com/huggingface/datasets/pull/3092.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3092.patch",
"merged_at": 1634891037000
} | As mentioned in #3089, I've added more tags and also updated the link for the dataset, which previously pointed to Google Drive.
I'm having a problem generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` gives a `datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET!` error. I'll try to add the dummy data manually. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3092/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3092/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3091/comments | https://api.github.com/repos/huggingface/datasets/issues/3091/events | https://github.com/huggingface/datasets/issues/3091 | 1,027,251,530 | I_kwDODunzps49Op1K | 3,091 | `blog_authorship_corpus` is broken | {
"login": "fdtomasi",
"id": 12514317,
"node_id": "MDQ6VXNlcjEyNTE0MzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/12514317?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fdtomasi",
"html_url": "https://github.com/fdtomasi",
"followers_url": "https://api.github.com/users/fdtomasi/followers",
"following_url": "https://api.github.com/users/fdtomasi/following{/other_user}",
"gists_url": "https://api.github.com/users/fdtomasi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fdtomasi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fdtomasi/subscriptions",
"organizations_url": "https://api.github.com/users/fdtomasi/orgs",
"repos_url": "https://api.github.com/users/fdtomasi/repos",
"events_url": "https://api.github.com/users/fdtomasi/events{/privacy}",
"received_events_url": "https://api.github.com/users/fdtomasi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.",
"Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be accessible in our next release.\r\n\r\nIn the meantime, you can include the fix if you install the `datasets` library from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```",
"Awesome thank you so much for the quick fix!"
] | 1,634,289,640,000 | 1,634,648,770,000 | 1,634,647,839,000 | NONE | null | null | null | ## Describe the bug
The dataset `blog_authorship_corpus` is broken.
If the checksum checks are bypassed, loading returns no error, but the resulting dataset is empty.
I suspect this is because the data download URL is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip).
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("blog_authorship_corpus", split="train", download_mode='force_redownload')
```
## Expected results
No error.
## Actual results
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
/tmp/ipykernel_5237/1729238701.py in <module>
2 ds = load_dataset(
3 "blog_authorship_corpus", split="train",
----> 4 download_mode='force_redownload'
5 )
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
1115 ignore_verifications=ignore_verifications,
1116 try_from_hf_gcs=try_from_hf_gcs,
-> 1117 use_auth_token=use_auth_token,
1118 )
1119
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
635 if not downloaded_from_gcs:
636 self._download_and_prepare(
--> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
638 )
639 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
707 if verify_infos:
708 verify_checksums(
--> 709 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
710 )
711
/opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip']
```
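A possible interim workaround (a sketch only; as noted above, the data served from the broken mirror may come back empty, so inspect the result) is to skip verification while the upstream URL is broken:
```python
from datasets import load_dataset

# skip checksum/split verification; sanity-check the result before trusting it
ds = load_dataset("blog_authorship_corpus", split="train", ignore_verifications=True)
print(len(ds))  # a length of 0 reproduces the "empty dataset" symptom described above
```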
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11
- Python version: 3.7.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3091/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3091/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3090/comments | https://api.github.com/repos/huggingface/datasets/issues/3090/events | https://github.com/huggingface/datasets/pull/3090 | 1,027,100,371 | PR_kwDODunzps4tPEtH | 3,090 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,276,367,000 | 1,634,283,357,000 | 1,634,283,357,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3090",
"html_url": "https://github.com/huggingface/datasets/pull/3090",
"diff_url": "https://github.com/huggingface/datasets/pull/3090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3090.patch",
"merged_at": 1634283357000
} | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3090/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3090/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3089/comments | https://api.github.com/repos/huggingface/datasets/issues/3089/events | https://github.com/huggingface/datasets/issues/3089 | 1,026,973,360 | I_kwDODunzps49Nl6w | 3,089 | JNLPBA Dataset | {
"login": "sciarrilli",
"id": 10460111,
"node_id": "MDQ6VXNlcjEwNDYwMTEx",
"avatar_url": "https://avatars.githubusercontent.com/u/10460111?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sciarrilli",
"html_url": "https://github.com/sciarrilli",
"followers_url": "https://api.github.com/users/sciarrilli/followers",
"following_url": "https://api.github.com/users/sciarrilli/following{/other_user}",
"gists_url": "https://api.github.com/users/sciarrilli/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sciarrilli/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sciarrilli/subscriptions",
"organizations_url": "https://api.github.com/users/sciarrilli/orgs",
"repos_url": "https://api.github.com/users/sciarrilli/repos",
"events_url": "https://api.github.com/users/sciarrilli/events{/privacy}",
"received_events_url": "https://api.github.com/users/sciarrilli/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)\r\n```\r\n\r\n",
"Since I cannot create a branch here is the updated code:\r\n\r\n```python\r\n\r\n# coding=utf-8\r\n# Copyright 2020 HuggingFace Datasets Authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Lint as: python3\r\n\"\"\"Introduction to the Bio-Entity Recognition Task at JNLPBA\"\"\"\r\n\r\nimport os\r\n\r\nimport datasets\r\n\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n_CITATION = \"\"\"\\\r\n@inproceedings{kim2004introduction,\r\n title={Introduction to the bio-entity recognition task at JNLPBA},\r\n author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},\r\n booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications},\r\n pages={70--75},\r\n year={2004},\r\n organization={Citeseer}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThe data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search\r\non MEDLINE using the MeSH terms \u0018human\u0019, \u0018blood cells\u0019 and \u0018transcription factors\u0019. From this search 2,000 abstracts\r\nwere selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.\r\nAmong the classes, 36 terminal classes were used to annotate the GENIA corpus.\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004\"\r\n_TRAIN_URL = \"http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Train/Genia4ERtraining.tar.gz\"\r\n_VAL_URL = 'http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Evaluation/Genia4ERtest.tar.gz'\r\n\r\n\r\n_URLS = {\r\n \"train\": _TRAIN_URL,\r\n \"val\": _VAL_URL,\r\n}\r\n\r\n_TRAIN_DIRECTORY = \"Genia4ERtraining\"\r\n_VAL_DIRECTORY = \"Genia4ERtest\"\r\n\r\n_TRAIN_FILE = \"Genia4ERtask1.iob2\"\r\n_VAL_FILE = \"Genia4EReval1.iob2\"\r\n\r\n\r\nclass JNLPBAConfig(datasets.BuilderConfig):\r\n \"\"\"BuilderConfig for JNLPBA\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for JNLPBA.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(JNLPBAConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass JNLPBA(datasets.GeneratorBasedBuilder):\r\n \"\"\"JNLPBA dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n JNLPBAConfig(name=\"jnlpba\", version=datasets.Version(\"1.0.0\"), description=\"JNLPBA dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n 'O',\r\n 'B-DNA',\r\n 'I-DNA', \r\n 'B-RNA',\r\n 'I-RNA',\r\n 'B-cell_line',\r\n 'I-cell_line',\r\n 'B-cell_type',\r\n 'I-cell_type',\r\n 'B-protein',\r\n 'I-protein',\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n 
)\r\n\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n \r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['train'], _TRAIN_FILE)}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['val'], _VAL_FILE)})\r\n ]\r\n \r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"⏳ Generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n guid = 0\r\n tokens = []\r\n ner_tags = []\r\n for line in f:\r\n if line.startswith('###'):\r\n continue\r\n if line == '' or line == '\\n':\r\n if tokens:\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n guid += 1\r\n tokens = []\r\n ner_tags = []\r\n else:\r\n # tokens are tab separated\r\n splits = line.split(\"\\t\")\r\n #print(splits)\r\n #print(len(splits))\r\n if len(splits) < 2:\r\n splits = splits[0].split()\r\n tokens.append(splits[0])\r\n ner_tags.append(splits[1].rstrip())\r\n # last example\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n```"
] | 1,634,260,562,000 | 1,634,891,037,000 | 1,634,891,037,000 | NONE | null | null | null | ## Describe the bug
The loading script for the JNLPBA dataset defines an incorrect NER tag set (only `O`, `B`, `I`) and downloads modified data that lacks the original entity tags.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset("jnlpba")
dataset["train"].features["ner_tags"]
# returns: Sequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)
```
## Expected results
The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
## Actual results
The dataset loader script needs to include the following NER names:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags.
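For reference, here is a quick way to confirm the tag inventory against the original corpus. This is a sketch that assumes the IOB2 file named in the script above (`Genia4ERtask1.iob2`) has been downloaded and extracted locally:
```python
# count the distinct NER tags in the original tab-separated IOB2 file
from collections import Counter

tags = Counter()
with open("Genia4ERtask1.iob2", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip("\n").split("\t")
        if len(parts) == 2:
            tags[parts[1]] += 1

print(sorted(tags))  # should list the 11 B-/I-/O tags above, not just O/B/I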
## Environment info
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3089/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3089/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3088/comments | https://api.github.com/repos/huggingface/datasets/issues/3088/events | https://github.com/huggingface/datasets/pull/3088 | 1,026,920,369 | PR_kwDODunzps4tOhRx | 3,088 | Use template column_mapping to transmit_format instead of template features | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for fixing!"
] | 1,634,255,380,000 | 1,634,308,805,000 | 1,634,292,664,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3088",
"html_url": "https://github.com/huggingface/datasets/pull/3088",
"diff_url": "https://github.com/huggingface/datasets/pull/3088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3088.patch",
"merged_at": 1634292664000
} | Use `template.column_mapping` to check for modified columns, since `template.features` represents a generic template/column mapping.
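A minimal sketch of the idea (illustrative only, not the actual patch): a template survives a transform only if none of its mapped columns were modified.
```python
# template.column_mapping maps dataset column names to template column names
def template_still_applies(template, modified_columns):
    return not any(col in modified_columns for col in template.column_mapping)
```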
Fix #3087
TODO:
- [x] Add a test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3088/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3088/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3087/comments | https://api.github.com/repos/huggingface/datasets/issues/3087/events | https://github.com/huggingface/datasets/issues/3087 | 1,026,780,469 | I_kwDODunzps49M201 | 3,087 | Removing label column in a text classification dataset yields to errors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,634,242,370,000 | 1,634,292,664,000 | 1,634,292,664,000 | MEMBER | null | null | null | ## Describe the bug
This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error.
To reproduce:
```py
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
context_length = 128
def tokenize_pad_and_truncate(texts):
return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```
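A possible workaround until the fix lands, as an untested sketch that assumes the stale `TextClassification` template kept in `DatasetInfo` is the culprit:
```py
# clear the task templates so DatasetInfo.__post_init__ stops
# looking up the removed "label" column
for split_dataset in raw_datasets.values():
    split_dataset.info.task_templates = None

tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```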
Traceback:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-ba61bb32f786> in <module>
12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
13
---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2051 new_fingerprint=new_fingerprint,
2052 disable_tqdm=disable_tqdm,
-> 2053 desc=desc,
2054 )
2055 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2243 if os.path.exists(cache_file_name) and load_from_cache_file:
2244 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2245 info = self.info.copy()
2246 info.features = features
2247 info.task_templates = None
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3087/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3086/comments | https://api.github.com/repos/huggingface/datasets/issues/3086/events | https://github.com/huggingface/datasets/pull/3086 | 1,026,481,905 | PR_kwDODunzps4tNIvp | 3,086 | Remove _resampler from Audio fields | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,222,330,000 | 1,634,224,421,000 | 1,634,224,420,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3086",
"html_url": "https://github.com/huggingface/datasets/pull/3086",
"diff_url": "https://github.com/huggingface/datasets/pull/3086.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3086.patch",
"merged_at": 1634224420000
} | The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached.
This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`.
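For illustration, a sketch of the general dataclass technique (not necessarily the exact patch): attributes assigned in `__post_init__` are plain instance attributes, so they never appear in `fields()` or `asdict()`.
```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Audio:
    sampling_rate: Optional[int] = None
    mono: bool = True

    def __post_init__(self):
        # plain attribute, invisible to dataclasses.fields()/asdict()
        self._resampler = None

print([f.name for f in fields(Audio)])  # ['sampling_rate', 'mono']
```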
Fix #3083. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3086/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3086/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3085/comments | https://api.github.com/repos/huggingface/datasets/issues/3085/events | https://github.com/huggingface/datasets/pull/3085 | 1,026,467,384 | PR_kwDODunzps4tNFza | 3,085 | Fixes to `to_tf_dataset` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you give some details about why you need these changes ?",
"Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I can fix that issue, because I couldn't see an obvious fix for the Numpy formatter. If you can see a quick way to fix that, though, that might be even better!"
] | 1,634,221,556,000 | 1,634,828,729,000 | 1,634,828,728,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3085",
"html_url": "https://github.com/huggingface/datasets/pull/3085",
"diff_url": "https://github.com/huggingface/datasets/pull/3085.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3085.patch",
"merged_at": 1634828728000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3085/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3085/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3084/comments | https://api.github.com/repos/huggingface/datasets/issues/3084/events | https://github.com/huggingface/datasets/issues/3084 | 1,026,428,992 | I_kwDODunzps49LhBA | 3,084 | VisibleDeprecationWarning when using `set_format("numpy")` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)"
] | 1,634,219,581,000 | 1,634,918,654,000 | 1,634,918,654,000 | MEMBER | null | null | null | Code to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("glue", "mnli")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased')
def tokenize_function(dataset):
return tokenizer(dataset['premise'])
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features)
tokenized_datasets.set_format("numpy")
tokenized_datasets['train'][5:8]
```
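Padding every example to the same length sidesteps the ragged arrays that trigger the warning shown below. This is a sketch of a user-side workaround, not the eventual library fix (`max_length=128` is arbitrary here):
```
# equal-length lists convert to a clean 2-D numpy array, so no ragged-array warning
def tokenize_function(dataset):
    return tokenizer(dataset["premise"], padding="max_length", max_length=128, truncation=True)
```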
Outputs:
```
python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3084/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3083/comments | https://api.github.com/repos/huggingface/datasets/issues/3083/events | https://github.com/huggingface/datasets/issues/3083 | 1,026,397,062 | I_kwDODunzps49LZOG | 3,083 | Datasets with Audio feature raise error when loaded from cache due to _resampler parameter | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,217,833,000 | 1,634,224,420,000 | 1,634,224,420,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise a TypeError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: __init__() got an unexpected keyword argument '_resampler'
```
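Until the fix is merged, an untested sketch of a workaround is to bypass the corrupted cache entry (deleting the cached info file should have a similar effect):
```python
from datasets import load_dataset

# rebuild instead of reading the cached info that contains "_resampler"
ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy", "clean", download_mode="force_redownload"
)
```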
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3083/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3083/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3082/comments | https://api.github.com/repos/huggingface/datasets/issues/3082/events | https://github.com/huggingface/datasets/pull/3082 | 1,026,388,994 | PR_kwDODunzps4tM2BV | 3,082 | Fix error related to huggingface_hub timeout parameter | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,217,467,000 | 1,634,222,392,000 | 1,634,222,391,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3082",
"html_url": "https://github.com/huggingface/datasets/pull/3082",
"diff_url": "https://github.com/huggingface/datasets/pull/3082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3082.patch",
"merged_at": 1634222391000
} | The `huggingface_hub` package added the `timeout` parameter in version 0.0.19.
This PR bumps the minimal required version accordingly.
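Roughly, the dependency spec becomes something like the following (the exact pin is an assumption, not quoted from the diff):
```python
# in setup.py
install_requires = [
    "huggingface_hub>=0.0.19",
]
```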
Fix #3080. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3082/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3082/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3081 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3081/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3081/comments | https://api.github.com/repos/huggingface/datasets/issues/3081/events | https://github.com/huggingface/datasets/pull/3081 | 1,026,383,749 | PR_kwDODunzps4tM1Gy | 3,081 | [Audio datasets] Adapting all audio datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise",
"@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section",
"Hi @patrickvonplaten ,\r\n\r\nthe data preprocessing section is not defined as a valid section in the readme validation file. After this line:\r\nhttps://github.com/huggingface/datasets/blob/568db594d51110da9e23d224abded2a976b3c8c7/src/datasets/utils/resources/readme_structure.yaml#L20\r\nfeel free to insert (correctly indented of course):\r\n```python\r\n- name: \"Dataset Preprocessing\"\r\n allow_empty: true\r\n allow_empty_text: true\r\n subsections: null\r\n```\r\nand then the tests should pass.",
"Thanks a lot @albertvillanova - I've added the feature to all audio datasets and corrected the task of `covost2`"
] | 1,634,217,225,000 | 1,634,302,323,000 | 1,634,300,553,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3081",
"html_url": "https://github.com/huggingface/datasets/pull/3081",
"diff_url": "https://github.com/huggingface/datasets/pull/3081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3081.patch",
"merged_at": 1634300553000
} | This PR adds the new `Audio(...)` feature - see: https://github.com/huggingface/datasets/pull/2324 - to the most important audio datasets (a minimal usage sketch follows the list):
- Librispeech
- Timit
- Common Voice
- AMI
- ... (others I'm forgetting now)
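In each adapted script the change boils down to declaring the new feature. A minimal sketch, with an illustrative 16 kHz sampling rate that is not taken from this PR:
```python
from datasets import Audio, Features, Value

# "audio" decodes the file on access and resamples to the declared rate
features = Features(
    {
        "file": Value("string"),
        "audio": Audio(sampling_rate=16_000),
        "text": Value("string"),
    }
)
```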
The PR is currently blocked because the following leads to a problem:
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
As soon as it's unblocked, I'll adapt the other audio datasets as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3081/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3081/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3080/comments | https://api.github.com/repos/huggingface/datasets/issues/3080/events | https://github.com/huggingface/datasets/issues/3080 | 1,026,380,626 | I_kwDODunzps49LVNS | 3,080 | Error related to timeout keyword argument | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,217,058,000 | 1,634,222,391,000 | 1,634,222,391,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: dataset_info() got an unexpected keyword argument 'timeout'
```
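As a stopgap, and assuming per the linked fix that `timeout` arrived in `huggingface_hub` 0.0.19, upgrading the hub client directly should clear the error:
```
pip install -U "huggingface_hub>=0.0.19"
```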
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3080/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3077/comments | https://api.github.com/repos/huggingface/datasets/issues/3077/events | https://github.com/huggingface/datasets/pull/3077 | 1,026,150,362 | PR_kwDODunzps4tMFPG | 3,077 | Fix loading a metric with internal import | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,202,418,000 | 1,634,202,896,000 | 1,634,202,895,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3077",
"html_url": "https://github.com/huggingface/datasets/pull/3077",
"diff_url": "https://github.com/huggingface/datasets/pull/3077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3077.patch",
"merged_at": 1634202895000
} | After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports.
This PR adds a new test case and fixes this bug.
Fix #3076.
CC: @sgugger @merveenoyan | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3077/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3077/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3076/comments | https://api.github.com/repos/huggingface/datasets/issues/3076/events | https://github.com/huggingface/datasets/issues/3076 | 1,026,113,484 | I_kwDODunzps49KT_M | 3,076 | Error when loading a metric | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,200,167,000 | 1,634,202,895,000 | 1,634,202,895,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @sgugger, after the last release, an exception is thrown when loading a metric.
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("squad_v2")
```
## Actual results
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-1-e612a8cab787> in <module>
1 from datasets import load_metric
----> 2 metric = load_metric("squad_v2")
d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs)
1336 )
1337 revision = script_version
-> 1338 metric_module = metric_module_factory(
1339 path, revision=revision, download_config=download_config, download_mode=download_mode
1340 ).module_path
d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs)
1237 if not isinstance(e1, FileNotFoundError):
1238 raise e1 from None
-> 1239 raise FileNotFoundError(
1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. "
1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either."
FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either.
```
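Until a patch release ships, a standard stopgap is installing `datasets` from the master branch once the fix (#3077) lands:
```
pip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets
```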
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3076/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3075/comments | https://api.github.com/repos/huggingface/datasets/issues/3075/events | https://github.com/huggingface/datasets/pull/3075 | 1,026,103,388 | PR_kwDODunzps4tL75E | 3,075 | Updates LexGLUE and MultiEURLEX README.md files | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,199,556,000 | 1,634,552,020,000 | 1,634,552,020,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3075",
"html_url": "https://github.com/huggingface/datasets/pull/3075",
"diff_url": "https://github.com/huggingface/datasets/pull/3075.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3075.patch",
"merged_at": 1634552020000
} | Updates LexGLUE and MultiEURLEX README.md files
- Fix leaderboard in LexGLUE.
- Fix an error in the CaseHOLD data example.
- Turn the MultiEURLEX dataset statistics table into HTML so that it renders nicely on the HF website. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3075/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3075/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3074/comments | https://api.github.com/repos/huggingface/datasets/issues/3074/events | https://github.com/huggingface/datasets/pull/3074 | 1,025,940,085 | PR_kwDODunzps4tLbe- | 3,074 | add XCSR dataset | {
"login": "yangxqiao",
"id": 42788901,
"node_id": "MDQ6VXNlcjQyNzg4OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/42788901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangxqiao",
"html_url": "https://github.com/yangxqiao",
"followers_url": "https://api.github.com/users/yangxqiao/followers",
"following_url": "https://api.github.com/users/yangxqiao/following{/other_user}",
"gists_url": "https://api.github.com/users/yangxqiao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yangxqiao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yangxqiao/subscriptions",
"organizations_url": "https://api.github.com/users/yangxqiao/orgs",
"repos_url": "https://api.github.com/users/yangxqiao/repos",
"events_url": "https://api.github.com/users/yangxqiao/events{/privacy}",
"received_events_url": "https://api.github.com/users/yangxqiao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!",
"Hi @lhoestq, just a gentle ping on this PR. :D "
] | 1,634,186,399,000 | 1,636,379,556,000 | 1,636,379,556,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3074",
"html_url": "https://github.com/huggingface/datasets/pull/3074",
"diff_url": "https://github.com/huggingface/datasets/pull/3074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3074.patch",
"merged_at": 1636379556000
} | Hi,
I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :)
I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. I look forward to hearing from you and can't wait to add XCSR to huggingface :D | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3074/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3073 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3073/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3073/comments | https://api.github.com/repos/huggingface/datasets/issues/3073/events | https://github.com/huggingface/datasets/issues/3073 | 1,025,718,469 | I_kwDODunzps49IzjF | 3,073 | Import error installing with ppc64le | {
"login": "gcervantes8",
"id": 21228908,
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gcervantes8",
"html_url": "https://github.com/gcervantes8",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n"
] | 1,634,161,043,000 | 1,634,229,346,000 | 1,634,229,208,000 | NONE | null | null | null | ## Describe the bug
Installing the datasets library on a machine running ppc64le seems to cause an issue when importing it.
```
python
Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Illegal instruction (core dumped)
```
Error when importing
`Illegal instruction (core dumped)`
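To narrow this down, here is a minimal isolation check (my assumption is that the crash happens inside pyarrow's import rather than in `datasets` itself):
```python
# if this alone dies with "Illegal instruction", the problem is the
# pyarrow build for ppc64le, not the datasets library
import pyarrow

print(pyarrow.__version__)
```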
## Steps to reproduce the bug
I get this error when installing the library using conda. I believe I can't install with pip because pyarrow only provides ppc64le builds on conda-forge.
```
conda create --name transformers_py36_v2 python=3.6
conda activate transformers_py36_v2
conda install datasets
```
## Tracebacks
conda create --name transformers_py36_v2 python=3.6
```
Collecting package metadata (current_repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- python=3.6
The following NEW packages will be INSTALLED:
_libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge
_openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu
ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0
certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0
ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2
libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4
libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11
libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11
libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11
libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013
ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4
openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0
pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0
python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython
readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0
setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0
sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2
tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1
wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1
xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1
zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
#
# To activate this environment, use
#
# $ conda activate transformers_py36_v2
#
# To deactivate an active environment, use
#
# $ conda deactivate
```
conda activate transformers_py36_v2
conda install datasets
```
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.9.2
latest version: 4.10.3
Please update conda by running
$ conda update -n base -c defaults conda
## Package Plan ##
environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2
added / updated specs:
- datasets
The following NEW packages will be INSTALLED:
abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0
aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0
arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu
async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000
attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0
aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0
aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0
aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13
aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0
aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7
aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3
brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001
bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4
c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0
cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1
chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1
colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0
cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0
dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2
datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1
dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0
filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0
fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0
gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004
glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0
grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2
huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0
idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0
idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0
importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0
importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0
krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2
libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas
libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5
libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5
libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5
libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas
libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1
libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2
libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1
libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4
libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11
libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11
liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas
libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1
libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1
libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0
libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2
libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1
libutf8proc conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0
lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1
multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0
multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0
numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1
orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0
packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0
pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0
parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2
pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu
pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2
pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0
pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0
pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3
python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0
python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0
python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m
pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0
pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1
re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0
requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0
s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0
six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0
snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3
tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0
typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0
urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0
xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3
yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0
yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2
zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0
zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0
The following packages will be UPDATED:
certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0
Proceed ([y]/n)? y
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Red Hat Enterprise Linux 8.2 (Ootpa)
- Python version: 3.6
- PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge
Any help would be appreciated! I've been struggling to install datasets on this machine.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3073/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3073/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3072/comments | https://api.github.com/repos/huggingface/datasets/issues/3072/events | https://github.com/huggingface/datasets/pull/3072 | 1,025,233,152 | PR_kwDODunzps4tJNnD | 3,072 | Fix pathlib patches for streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,130,675,000 | 1,634,131,865,000 | 1,634,131,865,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3072",
"html_url": "https://github.com/huggingface/datasets/pull/3072",
"diff_url": "https://github.com/huggingface/datasets/pull/3072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3072.patch",
"merged_at": 1634131865000
} | Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time)
`counter` now works in both streaming and non-streaming mode.
The `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of `Path.open` is fixed as well.
Note: the patches should only affect the `datasets` module, not the user's own code! That's why we should probably use something other than `patch.object` to patch the `Path` class's methods.
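For reference, a minimal sketch of how a class-level patch leaks outside `datasets` (the `xopen` body below is a placeholder, not the actual implementation):
```python
from pathlib import Path
from unittest.mock import patch

calls = []

def xopen(self, *args, **kwargs):
    # stand-in for a streaming-aware open that records every caller
    calls.append(str(self))
    return open(str(self), *args, **kwargs)

with patch.object(Path, "open", xopen):
    # any Path instance anywhere, including in user code, now goes through xopen
    try:
        Path("user_file.txt").open()
    except FileNotFoundError:
        pass

print(calls)  # ['user_file.txt'] -> the patch reached "user" code too
```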
cc @severo @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3072/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3072/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3071/comments | https://api.github.com/repos/huggingface/datasets/issues/3071/events | https://github.com/huggingface/datasets/issues/3071 | 1,024,893,493 | I_kwDODunzps49FqI1 | 3,071 | Custom plain text dataset, plain json dataset and plain csv dataset are removed from datasets template folder | {
"login": "zixiliuUSC",
"id": 49173327,
"node_id": "MDQ6VXNlcjQ5MTczMzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zixiliuUSC",
"html_url": "https://github.com/zixiliuUSC",
"followers_url": "https://api.github.com/users/zixiliuUSC/followers",
"following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}",
"gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions",
"organizations_url": "https://api.github.com/users/zixiliuUSC/orgs",
"repos_url": "https://api.github.com/users/zixiliuUSC/repos",
"events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}",
"received_events_url": "https://api.github.com/users/zixiliuUSC/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```"
] | 1,634,110,330,000 | 1,634,113,624,000 | 1,634,113,623,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** text, json, csv
- **Description:** I am developing a customized dataset loading script. The problem is mainly that my custom dataset is separated into many files, and the only dataset loading template I can find that handles my case is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch? (See the sketch below for the kind of multi-file loading I need.)
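For example, what I need is something like the following for a dataset separated into many files (the file names below are placeholders):
```python
from datasets import load_dataset

# the packaged "json" loader accepts several files per split
data_files = {
    "train": ["train_part1.json", "train_part2.json"],
    "validation": ["valid.json"],
}
ds = load_dataset("json", data_files=data_files)
```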
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3071/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3070/comments | https://api.github.com/repos/huggingface/datasets/issues/3070/events | https://github.com/huggingface/datasets/pull/3070 | 1,024,856,745 | PR_kwDODunzps4tIBRp | 3,070 | Fix Windows CI with FileNotFoundError when setting up s3_base fixture | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! Sorry for the inconvenience ^^' "
] | 1,634,107,741,000 | 1,634,115,313,000 | 1,634,107,788,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3070",
"html_url": "https://github.com/huggingface/datasets/pull/3070",
"diff_url": "https://github.com/huggingface/datasets/pull/3070.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3070.patch",
"merged_at": 1634107788000
} | Fix #3069. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3070/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3070/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3069 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3069/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3069/comments | https://api.github.com/repos/huggingface/datasets/issues/3069/events | https://github.com/huggingface/datasets/issues/3069 | 1,024,818,680 | I_kwDODunzps49FX34 | 3,069 | CI fails on Windows with FileNotFoundError when setting up s3_base fixture | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,634,104,346,000 | 1,634,112,349,000 | 1,634,107,788,000 | MEMBER | null | null | null | ## Describe the bug
After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up the s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321
Error summary:
```
ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - FileNotF...
ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - FileNotFo...
```
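One possible cause, sketched below (this is my assumption: on Windows, `Popen` cannot resolve the `moto_server` console script from PATH, and launching the server through the current interpreter may avoid that; `moto.server` being runnable as a module is also an assumption):
```python
import subprocess
import sys

s3_port = 5555

# hypothetical fix: start moto's S3 server via the current Python interpreter
# instead of relying on a PATH lookup of the "moto_server" script
proc = subprocess.Popen([sys.executable, "-m", "moto.server", "s3", "-p", str(s3_port)])
```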
Stack trace:
```
______________ ERROR at setup of test_dummy_dataset_serialize_s3 ______________
[gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe
@pytest.fixture()
def s3_base():
# writable local S3 system
import shlex
import subprocess
# Mocked AWS Credentials for moto.
old_environ = os.environ.copy()
os.environ.update(S3_FAKE_ENV_VARS)
> proc = subprocess.Popen(shlex.split("moto_server s3 -p %s" % s3_port))
tests\s3_fixtures.py:32:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\tools\miniconda3\lib\subprocess.py:729: in __init__
restore_signals, start_new_session)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <subprocess.Popen object at 0x0000012BB8A4B908>
args = 'moto_server s3 -p 5555', executable = None, preexec_fn = None
close_fds = True, pass_fds = (), cwd = None, env = None
startupinfo = <subprocess.STARTUPINFO object at 0x0000012BB8177630>
creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = -1
c2pwrite = -1, errread = -1, errwrite = -1, unused_restore_signals = True
unused_start_new_session = False
def _execute_child(self, args, executable, preexec_fn, close_fds,
pass_fds, cwd, env,
startupinfo, creationflags, shell,
p2cread, p2cwrite,
c2pread, c2pwrite,
errread, errwrite,
unused_restore_signals, unused_start_new_session):
"""Execute program (MS Windows version)"""
assert not pass_fds, "pass_fds not supported on Windows."
if not isinstance(args, str):
args = list2cmdline(args)
# Process startup details
if startupinfo is None:
startupinfo = STARTUPINFO()
if -1 not in (p2cread, c2pwrite, errwrite):
startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES
startupinfo.hStdInput = p2cread
startupinfo.hStdOutput = c2pwrite
startupinfo.hStdError = errwrite
if shell:
startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW
startupinfo.wShowWindow = _winapi.SW_HIDE
comspec = os.environ.get("COMSPEC", "cmd.exe")
args = '{} /c "{}"'.format (comspec, args)
# Start the process
try:
hp, ht, pid, tid = _winapi.CreateProcess(executable, args,
# no special security
None, None,
int(not close_fds),
creationflags,
env,
os.fspath(cwd) if cwd is not None else None,
> startupinfo)
E FileNotFoundError: [WinError 2] The system cannot find the file specified
C:\tools\miniconda3\lib\subprocess.py:1017: FileNotFoundError
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3069/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3068/comments | https://api.github.com/repos/huggingface/datasets/issues/3068/events | https://github.com/huggingface/datasets/pull/3068 | 1,024,681,264 | PR_kwDODunzps4tHhOC | 3,068 | feat: increase streaming retry config | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much."
] | 1,634,090,450,000 | 1,634,117,156,000 | 1,634,117,154,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3068",
"html_url": "https://github.com/huggingface/datasets/pull/3068",
"diff_url": "https://github.com/huggingface/datasets/pull/3068.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3068.patch",
"merged_at": 1634117154000
} | Increase streaming config parameters:
* retry interval set to 5 seconds
* max retries set to 20 (so 1 min 40 s in total) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3068/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3067/comments | https://api.github.com/repos/huggingface/datasets/issues/3067/events | https://github.com/huggingface/datasets/pull/3067 | 1,024,023,185 | PR_kwDODunzps4tFSCy | 3,067 | add story_cloze | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```",
"@lhoestq can't fix the last test fails.",
"> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ",
"Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation."
] | 1,634,056,613,000 | 1,634,132,893,000 | 1,634,132,893,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3067",
"html_url": "https://github.com/huggingface/datasets/pull/3067",
"diff_url": "https://github.com/huggingface/datasets/pull/3067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3067.patch",
"merged_at": 1634132893000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3067/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3066/comments | https://api.github.com/repos/huggingface/datasets/issues/3066/events | https://github.com/huggingface/datasets/pull/3066 | 1,024,005,311 | PR_kwDODunzps4tFObl | 3,066 | Add iter_archive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,055,436,000 | 1,634,548,367,000 | 1,634,548,366,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3066",
"html_url": "https://github.com/huggingface/datasets/pull/3066",
"diff_url": "https://github.com/huggingface/datasets/pull/3066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3066.patch",
"merged_at": 1634548366000
} | Added the `iter_archive` method for the StreamingDownloadManager.
It was already implemented in the regular DownloadManager.
Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829
I also updated the `food101` dataset as an example.
Any image/audio dataset using TAR archives can be updated to use `iter_archive` in order to be streamable :)
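A minimal sketch of the pattern inside a dataset script (the class name, URL, and feature names are illustrative, not the actual `food101` code; `_info()` is omitted for brevity):
```python
import datasets

class MyArchiveDataset(datasets.GeneratorBasedBuilder):
    # _info() with the feature schema is omitted for brevity

    def _split_generators(self, dl_manager):
        # download() returns the archive path without extracting it,
        # which also works lazily in streaming mode
        archive = dl_manager.download("https://example.com/images.tar")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs sequentially
        for idx, (path, f) in enumerate(files):
            yield idx, {"file_path": path, "bytes": f.read()}
```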
cc @severo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3066/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3065 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3065/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3065/comments | https://api.github.com/repos/huggingface/datasets/issues/3065/events | https://github.com/huggingface/datasets/pull/3065 | 1,023,951,322 | PR_kwDODunzps4tFDjk | 3,065 | Fix test command after refac | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,634,052,210,000 | 1,634,052,527,000 | 1,634,052,526,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3065",
"html_url": "https://github.com/huggingface/datasets/pull/3065",
"diff_url": "https://github.com/huggingface/datasets/pull/3065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3065.patch",
"merged_at": 1634052526000
} | Fix the `datasets-cli` test command after the `prepare_module` change in #2986 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3065/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3064/comments | https://api.github.com/repos/huggingface/datasets/issues/3064/events | https://github.com/huggingface/datasets/issues/3064 | 1,023,900,075 | I_kwDODunzps49B3mr | 3,064 | Make `interleave_datasets` more robust | {
"login": "sbmaruf",
"id": 32699797,
"node_id": "MDQ6VXNlcjMyNjk5Nzk3",
"avatar_url": "https://avatars.githubusercontent.com/u/32699797?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbmaruf",
"html_url": "https://github.com/sbmaruf",
"followers_url": "https://api.github.com/users/sbmaruf/followers",
"following_url": "https://api.github.com/users/sbmaruf/following{/other_user}",
"gists_url": "https://api.github.com/users/sbmaruf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbmaruf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbmaruf/subscriptions",
"organizations_url": "https://api.github.com/users/sbmaruf/orgs",
"repos_url": "https://api.github.com/users/sbmaruf/repos",
"events_url": "https://api.github.com/users/sbmaruf/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbmaruf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @lhoestq Any response on this issue?",
"Hi ! Sorry for the late response\r\n\r\nI agree `interleave_datasets` would benefit a lot from having more flexibility. If I understand correctly it would be nice to be able to define stopping strategies like `stop=\"first_exhausted\"` (default) or `stop=\"all_exhausted\"`. If you'd like to contribute this feature I'd be happy to give you some pointers :)\r\n\r\nAlso one can already set the max number of iterations per dataset by doing `dataset.take(n)` on the dataset that should only have `n` samples.\r\n\r\nRegarding the `iter_cnt` counter, I think this requires a bit more thoughts, since we might have to be able to backpropagate the the counter if `map` or other transforms have been applied after `interleave_datasets`. "
] | 1,634,049,293,000 | 1,643,212,307,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
Right now there are a few hiccups when using `interleave_datasets`. An interleaved dataset iterates until the smallest dataset exhausts its iterator, so the larger datasets may never complete a full epoch of iteration.
This also creates a new problem for epoch accounting, since there is no way to track how many epochs each dataset in `interleave_datasets` has completed.
**Describe the solution you'd like**
For the `interleave_datasets` module,
- [ ] Add a boolean argument `--stop-iter` to `interleave_datasets` that controls whether iteration can go on indefinitely. That means it should not raise a `StopIteration` exception when `--stop-iter=False`.
- [ ] Add an internal list variable `iter_cnt` that tracks how many times (in steps/epochs) each dataset has been iterated at a given point.
- [ ] Add an argument `--max-iter` (list type) that specifies the maximum number of times each dataset can iterate. After one dataset completes its `--max-iter`, the other datasets should continue sampling, and only when all the datasets have finished their respective `--max-iter` should `StopIteration` be raised.
Note: I'm new to the `datasets` API. Maybe these features are already there in `datasets`.
Since multitask training is one of the latest trends, I believe this feature would make the `datasets` API more popular.
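To make the first point concrete, a small sketch of the current behavior (the exact number of returned rows depends on the seed):
```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13, 14]})

# sampling stops as soon as the smallest dataset (d1) is exhausted,
# so d2 never gets through a full epoch, and nothing tracks per-dataset epochs
mixed = interleave_datasets([d1, d2], probabilities=[0.5, 0.5], seed=42)
print(len(mixed), mixed["a"])
```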
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3064/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3063/comments | https://api.github.com/repos/huggingface/datasets/issues/3063/events | https://github.com/huggingface/datasets/issues/3063 | 1,023,588,297 | I_kwDODunzps49ArfJ | 3,063 | Windows CI is unable to test streaming properly because of SSL issues | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTPFileSystem, https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattribu\r\n```",
"No I'm still having this issue on my windows, and so does the CI"
] | 1,634,031,220,000 | 1,634,663,512,000 | null | MEMBER | null | null | null | In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443
The issue appears only on Windows with asyncio. It works on Linux, it works with requests, and it also works against the production environment huggingface.co.
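A quick way to confirm the diagnosis (a hedged sketch: `ssl=False` skips certificate verification and is only meant to show that the failure sits in aiohttp's SSL handling, not as a fix):
```python
import asyncio
import aiohttp

url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"

async def check():
    async with aiohttp.ClientSession() as session:
        # ssl=False bypasses certificate verification for this request only
        async with session.get(url, ssl=False) as resp:
            print(resp.status)  # expected to succeed once verification is skipped

asyncio.get_event_loop().run_until_complete(check())
```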
To reproduce on Windows:
```python
import fsspec
# use any URL to a file in a dataset repo
url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes"
fsspec.open(url).open()
```
raises
```python
FileNotFoundError: https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes
```
because of
```python
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3063/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3062 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3062/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3062/comments | https://api.github.com/repos/huggingface/datasets/issues/3062/events | https://github.com/huggingface/datasets/pull/3062 | 1,023,209,592 | PR_kwDODunzps4tCxfK | 3,062 | Update summary on PyPi beyond NLP | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,994,866,000 | 1,634,115,354,000 | 1,634,115,354,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3062",
"html_url": "https://github.com/huggingface/datasets/pull/3062",
"diff_url": "https://github.com/huggingface/datasets/pull/3062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3062.patch",
"merged_at": 1634115353000
} | More than just NLP now | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3062/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3062/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3061/comments | https://api.github.com/repos/huggingface/datasets/issues/3061/events | https://github.com/huggingface/datasets/issues/3061 | 1,023,103,119 | I_kwDODunzps48-1CP | 3,061 | Feature request: add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?) | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.",
"Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as an actual parameters instead of kwargs to make it clearer"
] | 1,633,985,389,000 | 1,634,895,250,000 | null | NONE | null | null | null | **A clear and concise description of what you want to happen.**
It would be so nice to be able to nest HuggingFace `Dataset.map()` progress bars inside outer progress bars, and whilst we're at it, why not other functions too?
**Describe alternatives you've considered**
By the way, is there not a way to interact directly with the underlying tqdm module, `**kwargs`-ish?
**Additional context**
Furthering tqdm integration: #2374, and huggingface/transformers#11797, solved by huggingface/transformers#12226, which provided the tqdm description as `desc=`.
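Roughly what I have in mind (the `leave`/`**tqdm_kwargs` passthrough is the hypothetical part; `desc=` already exists on `map`):
```python
from datasets import Dataset
from tqdm.auto import tqdm

dataset = Dataset.from_dict({"text": ["a", "b", "c"]})

for epoch in tqdm(range(3), desc="epochs"):
    # today the inner map() bar is cleared once done; a leave=True passthrough
    # would keep each inner bar nested under the outer "epochs" bar
    dataset = dataset.map(lambda x: x, desc=f"epoch {epoch}")
```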
@sgugger @bhavitvyamalik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3061/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3060/comments | https://api.github.com/repos/huggingface/datasets/issues/3060/events | https://github.com/huggingface/datasets/issues/3060 | 1,022,936,396 | I_kwDODunzps48-MVM | 3,060 | load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" | {
"login": "RylanSchaeffer",
"id": 8942987,
"node_id": "MDQ6VXNlcjg5NDI5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RylanSchaeffer",
"html_url": "https://github.com/RylanSchaeffer",
"followers_url": "https://api.github.com/users/RylanSchaeffer/followers",
"following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}",
"gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions",
"organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs",
"repos_url": "https://api.github.com/users/RylanSchaeffer/repos",
"events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.",
"I close this issue for the moment. Feel free to re-open it again if the problem persists."
] | 1,633,971,927,000 | 1,635,400,341,000 | 1,635,400,341,000 | NONE | null | null | null | ## Describe the bug
When I try `load_dataset('openwebtext')`, I receive an "EOFError: Compressed file ended before the end-of-stream marker was reached" error.
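As suggested in the comments, this error usually means the cached archive was only partially downloaded; a sketch of the suggested remedy:

```python
from datasets import load_dataset

# Discard the (likely truncated) cached archive and fetch it again
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```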
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('openwebtext')
```
## Expected results
I expect the `dataset` variable to be properly constructed.
## Actual results
```
File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset
dataset_str,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset
use_auth_token=use_auth_token,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract
partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path
output_path, force_extract=download_config.force_extract
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract
self.extractor.extract(input_path, output_path, extractor=extractor)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract
return extractor.extract(input_path, output_path)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract
tar_file.extractall(output_path)
File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
buf = src.read(bufsize)
File "/usr/lib/python3.6/lzma.py", line 200, in read
return self._buffer.read(size)
File "/usr/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.6/_compression.py", line 99, in read
raise EOFError("Compressed file ended before the "
python-BaseException
EOFError: Compressed file ended before the end-of-stream marker was reached
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3060/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3059/comments | https://api.github.com/repos/huggingface/datasets/issues/3059/events | https://github.com/huggingface/datasets/pull/3059 | 1,022,620,057 | PR_kwDODunzps4tA54w | 3,059 | Fix task reloading from cache | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,953,784,000 | 1,633,955,019,000 | 1,633,955,019,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3059",
"html_url": "https://github.com/huggingface/datasets/pull/3059",
"diff_url": "https://github.com/huggingface/datasets/pull/3059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3059.patch",
"merged_at": 1633955018000
} | When reloading a dataset from the cache while doing `map`, the task templates were kept instead of being updated to reflect the output of the `map` function. This is an issue because we drop the task templates that are no longer compatible after `map`, for example if a column used by a template was removed.
This PR fixes this and for convenience introduces a decorator `@transmit_tasks` that takes care of doing this verification, similar to the `@transmit_format` decorator.
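For illustration, a minimal sketch of the idea (assumed shape, not the actual implementation):

```python
import functools

def transmit_tasks(func):
    """Sketch: after the wrapped method builds a new dataset, drop any task
    template whose columns no longer exist in the dataset's schema."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        dataset = func(self, *args, **kwargs)
        if dataset.info.task_templates is not None:
            dataset.info.task_templates = [
                template
                for template in dataset.info.task_templates
                if all(col in dataset.column_names for col in template.column_mapping)
            ]
        return dataset
    return wrapper
```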
This should fix issue https://github.com/huggingface/datasets/issues/3047 cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3059/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3059/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3058/comments | https://api.github.com/repos/huggingface/datasets/issues/3058/events | https://github.com/huggingface/datasets/issues/3058 | 1,022,612,664 | I_kwDODunzps4889S4 | 3,058 | Dataset wikipedia and Bookcorpusopen cannot be fetched from dataloader. | {
"login": "hobbitlzy",
"id": 35392624,
"node_id": "MDQ6VXNlcjM1MzkyNjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/35392624?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hobbitlzy",
"html_url": "https://github.com/hobbitlzy",
"followers_url": "https://api.github.com/users/hobbitlzy/followers",
"following_url": "https://api.github.com/users/hobbitlzy/following{/other_user}",
"gists_url": "https://api.github.com/users/hobbitlzy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hobbitlzy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hobbitlzy/subscriptions",
"organizations_url": "https://api.github.com/users/hobbitlzy/orgs",
"repos_url": "https://api.github.com/users/hobbitlzy/repos",
"events_url": "https://api.github.com/users/hobbitlzy/events{/privacy}",
"received_events_url": "https://api.github.com/users/hobbitlzy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column. After calling `load_dataset`, can you try doing `dataset = dataset.remove_columns(\"title\")` ?",
"Removing the \"title\" column works! Thanks for your advice.\r\n\r\nMaybe I should still create an issue to `transformers' to mark this solution?"
] | 1,633,953,299,000 | 1,642,601,029,000 | 1,642,601,029,000 | NONE | null | null | null | ## Describe the bug
I was using previous versions of `transformers` and `datasets`, and the `wikipedia` dataset could be used successfully. Recently, I upgraded them to the newest versions and found that this raises errors. I also tried other datasets: `wikitext` works, while `bookcorpusopen` raises the same errors as `wikipedia`.
## Steps to reproduce the bug
Run `run_mlm_no_trainer.py` with the script given at this [link](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling). Change the dataset from wikitext to wikipedia or bookcorpusopen. BTW, the transformers library is version 4.11.3.
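For reference, a sketch of the workaround suggested in the comments (wikipedia and bookcorpusopen carry an extra `title` column, unlike wikitext, which the MLM data collator cannot turn into tensors):

```python
from datasets import load_dataset

dataset = load_dataset("wikipedia", "20200501.en", split="train")
# Drop the extra string column before tokenization so the collator
# only ever sees fields it can pad into tensors
dataset = dataset.remove_columns("title")
```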
## Expected results
The data batches are fetched from the data loader and training proceeds.
## Actual results
Fetching the first data batch raises an error.
```
Traceback (most recent call last):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
tensor = as_tensor(value)
ValueError: too many dimensions 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "src/original_run_mlm_no_trainer.py", line 528, in <module>
main()
File "src/original_run_mlm_no_trainer.py", line 488, in main
for step, batch in enumerate(train_dataloader):
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/accelerate/data_loader.py", line 303, in __iter__
for batch in super().__iter__():
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 41, in __call__
return self.torch_call(features)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 671, in torch_call
batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2774, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 210, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 722, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.8.0-59-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.6
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3058/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3058/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3057/comments | https://api.github.com/repos/huggingface/datasets/issues/3057/events | https://github.com/huggingface/datasets/issues/3057 | 1,022,508,315 | I_kwDODunzps488j0b | 3,057 | Error in per class precision computation | {
"login": "tidhamecha2",
"id": 38906722,
"node_id": "MDQ6VXNlcjM4OTA2NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/38906722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tidhamecha2",
"html_url": "https://github.com/tidhamecha2",
"followers_url": "https://api.github.com/users/tidhamecha2/followers",
"following_url": "https://api.github.com/users/tidhamecha2/following{/other_user}",
"gists_url": "https://api.github.com/users/tidhamecha2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tidhamecha2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tidhamecha2/subscriptions",
"organizations_url": "https://api.github.com/users/tidhamecha2/orgs",
"repos_url": "https://api.github.com/users/tidhamecha2/repos",
"events_url": "https://api.github.com/users/tidhamecha2/events{/privacy}",
"received_events_url": "https://api.github.com/users/tidhamecha2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```"
] | 1,633,946,719,000 | 1,633,947,464,000 | 1,633,947,376,000 | NONE | null | null | null | ## Describe the bug
When trying to get the per-class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar`.
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
precision_metric = load_metric("precision")
predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]
results = precision_metric.compute(predictions=predictions, references=references, average=None)
```
## Expected results
` {'precision': array([0.66666667, 0. , 0. ])}`
as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py
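As a sanity check, what scikit-learn itself returns here (a minimal sketch; note that `precision_score` takes the references, i.e. y_true, first):

```python
from sklearn.metrics import precision_score

references = [0, 1, 2, 0, 1, 2]   # y_true
predictions = [0, 2, 1, 0, 0, 1]  # y_pred
print(precision_score(references, predictions, average=None))
# -> array([0.66666667, 0.        , 0.        ])
```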
## Actual results
```
output = self._compute(predictions=predictions, references=references, **kwargs)
File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute
sample_weight=sample_weight,
ValueError: can only convert an array of size 1 to a Python scalar
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: linux
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3057/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3057/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3056/comments | https://api.github.com/repos/huggingface/datasets/issues/3056/events | https://github.com/huggingface/datasets/pull/3056 | 1,022,345,564 | PR_kwDODunzps4tAB9h | 3,056 | Fix meteor metric for version >= 3.6.4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,936,304,000 | 1,633,937,360,000 | 1,633,937,359,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3056",
"html_url": "https://github.com/huggingface/datasets/pull/3056",
"diff_url": "https://github.com/huggingface/datasets/pull/3056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3056.patch",
"merged_at": 1633937359000
} | After the `nltk` update, the meteor metric expects pre-tokenized inputs (breaking change).
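For illustration, a sketch of the shape such a fix can take (an assumption, not the exact patch):

```python
# Requires the nltk "punkt" and "wordnet" data packages to run.
from nltk import word_tokenize, __version__ as NLTK_VERSION
from nltk.translate import meteor_score
from packaging import version

reference = "the cat sat on the mat"
prediction = "the cat was sitting on the mat"

if version.parse(NLTK_VERSION) >= version.parse("3.6.4"):
    # Newer NLTK: pass pre-tokenized token lists
    score = meteor_score.single_meteor_score(word_tokenize(reference), word_tokenize(prediction))
else:
    # Older NLTK: raw strings are tokenized internally
    score = meteor_score.single_meteor_score(reference, prediction)
print(score)
```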
This PR fixes this issue, while maintaining compatibility with older versions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3056/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3055/comments | https://api.github.com/repos/huggingface/datasets/issues/3055/events | https://github.com/huggingface/datasets/issues/3055 | 1,022,319,238 | I_kwDODunzps4871qG | 3,055 | CI test suite fails after meteor metric update | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,633,934,232,000 | 1,633,937,431,000 | 1,633,937,431,000 | MEMBER | null | null | null | ## Describe the bug
CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010
Stack trace:
```
___________________ LocalMetricTest.test_load_metric_meteor ____________________
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor>
metric_name = 'meteor'
def test_load_metric(self, metric_name):
doctest.ELLIPSIS_MARKER = "[...]"
metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0])
metric = datasets.load.import_main_class(metric_module.__name__, dataset=False)
# check parameters
parameters = inspect.signature(metric._compute).parameters
self.assertTrue("predictions" in parameters)
self.assertTrue("references" in parameters)
self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs
# run doctest
with self.patch_intensive_calls(metric_name, metric_module.__name__):
with self.use_local_metrics():
> results = doctest.testmod(metric_module, verbose=True, raise_on_error=True)
tests/test_metric_common.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod
runner.run(test)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run
r = DocTestRunner.run(self, test, compileflags, out, False)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run
return self.__run(test, compileflags, out)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run
exception)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <doctest.DebugRunner object at 0x7f4c26bd3da0>
out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0>
test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
example = <doctest.Example object at 0x7f4c26bd3eb8>
exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>)
def report_unexpected_exception(self, out, test, example, exc_info):
> raise UnexpectedException(test, example, exc_info)
E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3055/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3054/comments | https://api.github.com/repos/huggingface/datasets/issues/3054/events | https://github.com/huggingface/datasets/pull/3054 | 1,022,108,186 | PR_kwDODunzps4s_TmE | 3,054 | Update Biosses | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,904,712,000 | 1,634,115,867,000 | 1,634,115,867,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3054",
"html_url": "https://github.com/huggingface/datasets/pull/3054",
"diff_url": "https://github.com/huggingface/datasets/pull/3054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3054.patch",
"merged_at": 1634115867000
} | Fix variable naming | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3054/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3053/comments | https://api.github.com/repos/huggingface/datasets/issues/3053/events | https://github.com/huggingface/datasets/issues/3053 | 1,022,076,905 | I_kwDODunzps4866fp | 3,053 | load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type | {
"login": "davidbau",
"id": 3458792,
"node_id": "MDQ6VXNlcjM0NTg3OTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3458792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davidbau",
"html_url": "https://github.com/davidbau",
"followers_url": "https://api.github.com/users/davidbau/followers",
"following_url": "https://api.github.com/users/davidbau/following{/other_user}",
"gists_url": "https://api.github.com/users/davidbau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davidbau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davidbau/subscriptions",
"organizations_url": "https://api.github.com/users/davidbau/orgs",
"repos_url": "https://api.github.com/users/davidbau/repos",
"events_url": "https://api.github.com/users/davidbau/events{/privacy}",
"received_events_url": "https://api.github.com/users/davidbau/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I encountered the same bug using different datasets.\r\nany suggestions?",
"+1, can reproduce here!",
"I get the same error\r\nPlatform: Windows 10\r\nPython: python 3.8.8\r\nPyArrow: 5.0"
] | 1,633,895,721,000 | 1,642,601,052,000 | null | NONE | null | null | null | ## Describe the bug
When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type`
## Steps to reproduce the bug
```python
import datasets
ds = datasets.load_dataset('the_pile_openwebtext2')
```
## Expected results
Should download the dataset, convert it to an arrow file, and return a working Dataset object.
## Actual results
The download works, but conversion to the arrow file fails as follows:
```
>>> ds = datasets.load_dataset('the_pile_openwebtext2')
Downloading and preparing dataset openwebtext2/plain_text (download: 27.33 GiB, generated: 63.86 GiB
, post-processed: Unknown size, total: 91.19 GiB) to /home/davidbau/.cache/huggingface/datasets/open
webtext2/plain_text/1.0.0/c48ec73ba3483bac673463f48f67e9a4fd8cb49a9d6ec4fb957f0b424b97cf25...
Traceback (most recent call last):
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/builder.py", line 1133,
in _prepare_split
writer.write(example, key)
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
366, in write
self.write_examples_on_file()
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
311, in write_examples_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line
115, in __arrow_array__
out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)
File "pyarrow/array.pxi", line 305, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 20.04
- Python version: python 3.9
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3053/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3053/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3052/comments | https://api.github.com/repos/huggingface/datasets/issues/3052/events | https://github.com/huggingface/datasets/issues/3052 | 1,021,944,435 | I_kwDODunzps486aJz | 3,052 | load_dataset cannot download the data and hangs on forever if cache dir specified | {
"login": "BenoitDalFerro",
"id": 69694610,
"node_id": "MDQ6VXNlcjY5Njk0NjEw",
"avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenoitDalFerro",
"html_url": "https://github.com/BenoitDalFerro",
"followers_url": "https://api.github.com/users/BenoitDalFerro/followers",
"following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}",
"gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions",
"organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs",
"repos_url": "https://api.github.com/users/BenoitDalFerro/repos",
"events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> "
] | 1,633,861,896,000 | 1,633,949,829,000 | 1,633,949,796,000 | NONE | null | null | null | ## Describe the bug
After updating datasets, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine results in the data download hanging forever. The same call without cache_dir works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux docker instance running in the cloud.
Unfortunately, I updated Windows at the same time, and I can't remember which version of datasets was running in my conda environment prior to the update; otherwise I would have tried both to check this out. :(
## Steps to reproduce the bug
```python
from datasets import load_dataset

# Sample code to reproduce the bug
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir)
```
Note that the exact same code without the _cache_dir_ argument works perfectly fine.
```python
cache_dir = 'c:/data/datasets'
dataset = load_dataset('wikipedia', '20200501.en', split='train')
```
## Expected results
The dataset is downloaded and the cache is handled in the _cache_dir_ directory.
## Actual results
Data download keeps hanging on forever, **NO TRACEBACK**!
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.8.11
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3052/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3051/comments | https://api.github.com/repos/huggingface/datasets/issues/3051/events | https://github.com/huggingface/datasets/issues/3051 | 1,021,852,234 | I_kwDODunzps486DpK | 3,051 | Non-Matching Checksum Error with crd3 dataset | {
"login": "RylanSchaeffer",
"id": 8942987,
"node_id": "MDQ6VXNlcjg5NDI5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RylanSchaeffer",
"html_url": "https://github.com/RylanSchaeffer",
"followers_url": "https://api.github.com/users/RylanSchaeffer/followers",
"following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}",
"gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions",
"organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs",
"repos_url": "https://api.github.com/users/RylanSchaeffer/repos",
"events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}",
"received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']\r\n```",
"I'm seeing the same issue as @RylanSchaeffer:\r\nPython 3.7.11, macOs 11.4\r\ndatasets==1.14.0\r\n\r\nfails on:\r\n```python\r\ndataset = datasets.load_dataset(\"multi_woz_v22\")\r\n```"
] | 1,633,829,563,000 | 1,647,359,666,000 | 1,647,359,666,000 | NONE | null | null | null | ## Describe the bug
When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown.
## Steps to reproduce the bug
```python
dataset = load_dataset('crd3', split='train')
```
## Expected results
I expect no error to be thrown.
## Actual results
A non-matching checksum error is thrown.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip']
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0
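A possible interim workaround (a sketch; appropriate only if the upstream GitHub archive was legitimately regenerated rather than the download being corrupted):

```python
from datasets import load_dataset

# Skip the recorded-checksum verification for this dataset
dataset = load_dataset("crd3", split="train", ignore_verifications=True)
```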
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3051/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3050/comments | https://api.github.com/repos/huggingface/datasets/issues/3050/events | https://github.com/huggingface/datasets/pull/3050 | 1,021,772,622 | PR_kwDODunzps4s-anK | 3,050 | Fix streaming: catch Timeout error | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm running a large test.\r\nLet's see if I get any error within a few days.",
"This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 1027, in <module> \r\n main() \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 991, in main \r\n for batch in tqdm( \r\n File \"/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__ \r\n for obj in iterable: \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 376, in data_loader_streaming\r\n for item in dataset:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in __iter__\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in <listcomp>\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 176, in __iter__\r\n for key, example in iterator:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 225, in __iter__\r\n for x in self.ex_iterable:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 99, in __iter__\r\n for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 287, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/koush/datasets/src/datasets/packaged_modules/json/json.py\", line 107, in _generate_tables\r\n batch = f.read(self.config.chunksize)\r\n File \"/home/koush/datasets/src/datasets/utils/streaming_download_manager.py\", line 136, in read_with_retries\r\n raise ConnectionError(\"Server Disconnected\")\r\nConnectionError: Server Disconnected\r\n```\r\n\r\nRight before this error, the warnings were correctly raised:\r\n\r\n```\r\n10/10/2021 06:02:26 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [1/3]\r\n10/10/2021 06:02:27 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [2/3] \r\n10/10/2021 06:02:28 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [3/3\r\n```\r\n\r\nI'm going to see what happens if I change the max retries to 20 and the interval to 5.",
"Also maybe we can raise the Server Disconnected error with more info about what kind of error caused it (client error, time out, etc.)",
"I have 2 runs:\r\n* [run 1](https://wandb.ai/dalle-mini/dalle-mini/runs/1nj161cl?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded) that I will remove soon because I now use the 2nd one\r\n* [run 2](https://wandb.ai/dalle-mini/dalle-mini/runs/he9rrc3q?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded-vqgan_imagenet_f16_16384)\r\n* `load_dataset(dataset_repo, data_files={'train':'data/train/*.jsonl', 'validation':'data/valid/*.jsonl'}, streaming=True)`\r\n\r\nThey have now been running by a bit more than a day for one run and 15h for the other.\r\n\r\nThe error logs are not shown in wandb because the script use `pylogging` (not sure why, I should change it) but basically so far with the new settings I had one timeout in each with successful reconnect afterwards.\r\n\r\nSo I think it's a good idea to have:\r\n* `STREAMING_READ_RETRY_INTERVAL = 5` since before my runs would get 3 errors in a row (with the default 1 second pause)\r\n* `STREAMING_READ_MAX_RETRIES` should also be increased. Since this type of error does not happen a lot, I would still have a large number (at least 10) because a stopped training run may be a big issue if checkpointing/restart is not well implemented which is not always trivial",
"I agree ! Feel free to open a PR to increase both values"
] | 1,633,803,560,000 | 1,634,052,498,000 | 1,633,944,938,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3050",
"html_url": "https://github.com/huggingface/datasets/pull/3050",
"diff_url": "https://github.com/huggingface/datasets/pull/3050.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3050.patch",
"merged_at": 1633944938000
} | Catches Timeout error during streaming.
fix #3049 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3050/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3049/comments | https://api.github.com/repos/huggingface/datasets/issues/3049/events | https://github.com/huggingface/datasets/issues/3049 | 1,021,770,008 | I_kwDODunzps485vkY | 3,049 | TimeoutError during streaming | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,633,802,811,000 | 1,633,944,938,000 | 1,633,944,938,000 | CONTRIBUTOR | null | null | null | ## Describe the bug
I got a TimeoutError after streaming for about 10h.
## Steps to reproduce the bug
The code is very long, but one could write a test that streams data indefinitely, though the error may take a while to appear.
## Expected results
This error was not expected by the code, which handles only `ClientError` but not `TimeoutError`.
See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129).
Based on the traceback, it looks like the `TimeoutError` was not captured.
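For illustration, a minimal sketch (assuming the retry structure of the linked code) of how the read loop could catch timeouts as well as client errors:

```python
import asyncio
import logging
import time

from aiohttp.client_exceptions import ClientError

logger = logging.getLogger(__name__)

def read_with_retries(read, *args, max_retries=3, **kwargs):
    # Retry on timeouts in addition to client errors, instead of only ClientError
    for retry in range(1, max_retries + 1):
        try:
            return read(*args, **kwargs)
        except (ClientError, asyncio.TimeoutError):
            logger.warning(f"Got disconnected from remote data host. Retrying in 1sec [{retry}/{max_retries}]")
            time.sleep(1)
    raise ConnectionError("Server Disconnected")
```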
## Actual results
```
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range
out = await r.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read
self._body = await self.content.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read
block = await self.readany()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany
await self._wait("readany")
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait
await waiter
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module>
main()
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main
for batch in tqdm(
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
for obj in iterable:
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming
for item in dataset:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp>
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__
for key, example in iterator:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__
for x in self.ex_iterable:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__
for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables
batch = f.read(self.config.chunksize)
File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries
out = read(*args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read
return super().read(length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read
out = self.cache._fetch(self.loc, self.loc + length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch
self.cache = self.fetcher(start, bend)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync
raise FSTimeoutError from return_result
fsspec.exceptions.FSTimeoutError
```
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3049/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3048/comments | https://api.github.com/repos/huggingface/datasets/issues/3048/events | https://github.com/huggingface/datasets/issues/3048 | 1,021,765,661 | I_kwDODunzps485ugd | 3,048 | Identify which shard data belongs to | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch."
] | 1,633,801,595,000 | 1,633,811,057,000 | null | CONTRIBUTOR | null | null | null | **Is your feature request related to a problem? Please describe.**
I'm training on a large dataset made of multiple sub-datasets.
During training I can observe some jumps in loss which may correspond to different shards.
![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-060b6bf86a1c.png)
My suspicion is that either:
* some of the sub-datasets are harder for the model than others
* some of the sub-datasets are not formatted properly
I'd like to identify which shards correspond to those jumps.
**Describe the solution you'd like**
It would be nice to have a key associated with each data sample or data batch containing details on where the data comes from (shard idx + item idx within the shard).
This should be supported both in local and streaming mode.
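For illustration, here is a hedged sketch of how such origin metadata could be attached manually today in the local (non-streaming) case; the shard file paths and column names are hypothetical:

```python
from datasets import concatenate_datasets, load_dataset

# Hypothetical shard files; in practice these would be the dataset's own shards.
shard_files = ["data/train/part-000.jsonl", "data/train/part-001.jsonl"]

tagged_shards = []
for shard_idx, path in enumerate(shard_files):
    shard = load_dataset("json", data_files=path, split="train")
    # Record where each sample comes from: shard index + index within the shard.
    shard = shard.map(
        lambda example, idx, shard_idx=shard_idx: {"shard_idx": shard_idx, "item_idx": idx},
        with_indices=True,
    )
    tagged_shards.append(shard)

train_dataset = concatenate_datasets(tagged_shards)
```

The drawbacks of this manual approach are discussed in the alternatives below.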
**Describe alternatives you've considered**
A workaround would be for me to add the details (shard id, sample id) myself as part of each data sample, as sketched above.
The inconvenience is that it requires users to reprocess/re-upload every dataset when they need this feature. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3048/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3047/comments | https://api.github.com/repos/huggingface/datasets/issues/3047/events | https://github.com/huggingface/datasets/issues/3047 | 1,021,360,616 | I_kwDODunzps484Lno | 3,047 | Loading from cache a dataset for LM built from a text classification dataset sometimes errors | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"This has been fixed in 1.15, let me know if you still have this issue"
] | 1,633,717,391,000 | 1,635,959,588,000 | 1,635,959,588,000 | MEMBER | null | null | null | ## Describe the bug
Yes, I know, that description sucks. The problem arises in the course, when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle):
Create a dataset for masked language modeling from the IMDB dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
imdb_dataset = load_dataset("imdb", split="train")
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
chunk_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len.
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Until now, all is well. The problem comes when you re-execute that code, more specifically:
```python
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-357a56ee3d53> in <module>
----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True)
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1947 new_fingerprint=new_fingerprint,
1948 disable_tqdm=disable_tqdm,
-> 1949 desc=desc,
1950 )
1951 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
424 }
425 # apply actual function
--> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
428 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2138 if os.path.exists(cache_file_name) and load_from_cache_file:
2139 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2140 info = self.info.copy()
2141 info.features = features
2142 return Dataset.from_file(cache_file_name, info=info, split=self.split)
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
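One workaround, as a hedged and untested sketch (it assumes `DatasetInfo.task_templates` can simply be cleared), is to drop the task templates inherited from the original dataset before mapping, which would sidestep the template lookup analyzed below:

```python
# Untested sketch: clear the TextClassification task template carried over
# from the IMDB dataset so that copying the DatasetInfo no longer looks up
# the removed "label" column when the cache is loaded.
imdb_dataset.info.task_templates = []
tokenized_dataset = imdb_dataset.map(
    tokenize_function, batched=True, remove_columns=["text", "label"]
)
```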
It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and looks up a key that has since been removed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3047/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3046/comments | https://api.github.com/repos/huggingface/datasets/issues/3046/events | https://github.com/huggingface/datasets/pull/3046 | 1,021,021,368 | PR_kwDODunzps4s8MjS | 3,046 | Fix MedDialog metadata JSON | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,694,680,000 | 1,633,938,403,000 | 1,633,938,402,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3046",
"html_url": "https://github.com/huggingface/datasets/pull/3046",
"diff_url": "https://github.com/huggingface/datasets/pull/3046.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3046.patch",
"merged_at": 1633938402000
} | Fix #2969. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3046/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3045/comments | https://api.github.com/repos/huggingface/datasets/issues/3045/events | https://github.com/huggingface/datasets/pull/3045 | 1,020,968,704 | PR_kwDODunzps4s8B2b | 3,045 | Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044 | {
"login": "vlievin",
"id": 9859840,
"node_id": "MDQ6VXNlcjk4NTk4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vlievin",
"html_url": "https://github.com/vlievin",
"followers_url": "https://api.github.com/users/vlievin/followers",
"following_url": "https://api.github.com/users/vlievin/following{/other_user}",
"gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vlievin/subscriptions",
"organizations_url": "https://api.github.com/users/vlievin/orgs",
"repos_url": "https://api.github.com/users/vlievin/repos",
"events_url": "https://api.github.com/users/vlievin/events{/privacy}",
"received_events_url": "https://api.github.com/users/vlievin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for noticing this inconsistence and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\r\nIdeally we want the result after `map` to have this fingerprint. The result after `map` is the concatenation of all the processed shards. In this case what we can do is add the `fingerprint` parameter to `concatenate_datasets` to overwrite the fingerprint here if needed:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L3588-L3590\r\n\r\nthen you can pass the fingerprint to `concatenate_datasets` here:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L2044-L2044",
"Hi @lhoestq, thanks for the pointers! Not having a unique fingerprint per shard was indeed was indeed a problem. \r\n\r\nLet me look into this. I'll be back with a fix soon.",
"Alright, to clarify about my problem. I using am using `datasets` with large datasets, and want to cache a heavy and non-deterministically fingerprintable function (using `datasets.fingerprint.Hasher`). Using `Dataset.map()` as it is would cause generating a random fingerprint. To circumvent this, I am generating custom deterministic fingerprints, which I pass as an argument to `Dataset.map()`. In that way, a deterministic fingerprint is set, and caching can be used. \r\n\r\nThis approach works well when using `num_proc==1`, but not so well when using `num_proc>1`. In both cases, `dataset._fingerprint` is effectively set to `new_fingerprint` at the end of the `.map()` call. However, caching is not used when `num_proc>1`, a non deterministically fingerprintable function and `new_fingerprint != null. The reason is that caching operates within `Dataset._map_single` and `new_fingerprint` is not passed here. \r\n\r\nThis pull request implements a quick fix (+unit test) by passing `new_fingerprint=f\"{new_fingerprint}-part{rank+1}-{num_proc}\"` to each `_map_single` call. Using a separate name for each call makes sure that each worker uses a different cache file (as you mentioned above).\r\n\r\nHowever, this solution still means that using a different value for `num_proc` will require computing new partial cache files. In the long run, performing the caching within `map()` instead of within `_map_single()` would be a cleaner solution.",
"Hi @vlievin,\r\n\r\nIf I understand your example correctly, you are trying to use the `new_fingerprint` param to have a deterministic fingerprint of the transform, which is not hashable due to randomness. Any particular reason why you are not using the `cache_file_name` param instead? I did run your example with the `cache_file_name` specified, and it behaves as expected based on the logs. Internally, `new_fingerprint` is needed to inject the calculated fingerprint into a method by the `fingerprint_transform` decorator, which is then used to compute the cache file name in `Dataset._get_cache_file_path` if the user hasn't specified one. ",
"Hi @lhoestq, I have cleaned up the unit test (incl. styling). It should be ready to merge as such. I am using this branch in my project and everything works fine. \r\n\r\nHi @mariosasko, the argument `new_fingerprint` allowed me to deterministically cache my transformation when using `num_proc=1`, so I assumed that was the right way to go. But maybe I have misinterpreted how `new_fingerprint` should be used.\r\n\r\nBut in any case, `map()` should perform consistently with regards to `num_proc`. In my opinion, the behaviour of `Dataset.map()` should perform the same, and this without requiring the user to input `cache_file_name` when `num_proc>1` is set.\r\nBut maybe there is a more elegant way to fix this using `cache_file_name` internally for each `_single_map()` call.\r\n\r\nSo, I think this is a more high level design decision and I will leave it to the maintainers :) ",
"Hi @vlievin,\r\n\r\nI appreciate your effort, but `new_fingerprint` behaves as described in the `Dataset.map` docs, and we don't have to follow some artificial consistency in regards to `num_proc`:\r\nhttps://github.com/huggingface/datasets/blob/adc5cec58dd15ee672016086fefdea34b3143e4f/src/datasets/arrow_dataset.py#L1962-L1963\r\n\r\nAdditionally, to compute the cache file name, you are using a private method (`dset._get_cache_file_path(new_fingerprint)`); prefixed with `_`), so this is a sign you may be doing something wrong because you are relying on the internals. I suggest you use cache_file_name instead and follow the suffix template docs, which explain how to compute file paths of the created cache files when `num_proc > 1`.",
"Hi @mariosasko, thanks for the pointer regarding the use of the private method in then unit tests. \r\n\r\nYes, `new_fingerprint` behaves as documented. If you don't think this is an issue, feel free to close this pull request. \r\n",
"Allowing the users to pass the fingerprint themselves for functions that can't be hashed would be a nice improvements. However I agree that as @mariosasko mentioned this is currently not how we want the API to behave for now - since it has to do with the internals of the library.\r\n\r\nThough we can discuss what could be the right way of doing it in https://github.com/huggingface/datasets/issues/3044 if you don't mind !"
] | 1,633,690,761,000 | 1,634,835,512,000 | 1,634,826,164,000 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3045",
"html_url": "https://github.com/huggingface/datasets/pull/3045",
"diff_url": "https://github.com/huggingface/datasets/pull/3045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3045.patch",
"merged_at": null
} | Fix #3044
1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but it's just there to draft the idea.
2. A one-liner fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3045/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3044 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3044/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3044/comments | https://api.github.com/repos/huggingface/datasets/issues/3044/events | https://github.com/huggingface/datasets/issues/3044 | 1,020,869,778 | I_kwDODunzps482TyS | 3,044 | Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1` | {
"login": "vlievin",
"id": 9859840,
"node_id": "MDQ6VXNlcjk4NTk4NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vlievin",
"html_url": "https://github.com/vlievin",
"followers_url": "https://api.github.com/users/vlievin/followers",
"following_url": "https://api.github.com/users/vlievin/following{/other_user}",
"gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vlievin/subscriptions",
"organizations_url": "https://api.github.com/users/vlievin/orgs",
"repos_url": "https://api.github.com/users/vlievin/repos",
"events_url": "https://api.github.com/users/vlievin/events{/privacy}",
"received_events_url": "https://api.github.com/users/vlievin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementing a custom `__getstate__` method for example.\r\n\r\nHowever it sounds pretty complicated for a simple thing. Maybe one idea would be to have something similar to streamlit: they allow users to register the hashing of their own objects.\r\n\r\nSee the documentation about their `hash_funcs` here: https://docs.streamlit.io/library/advanced-features/caching#the-hash_funcs-parameter\r\n\r\nHere is the example they give:\r\n\r\n```python\r\nclass FileReference:\r\n def __init__(self, filename):\r\n self.filename = filename\r\n\r\ndef hash_file_reference(file_reference):\r\n filename = file_reference.filename\r\n return (filename, os.path.getmtime(filename))\r\n\r\n@st.cache(hash_funcs={FileReference: hash_file_reference})\r\ndef func(file_reference):\r\n ...\r\n```",
"My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky.\r\n\r\n@lhoestq, this approach is very neat, this would make the whole caching mechanic more explicit. I don't have so much time to look into this right now, but I might give it a try in the future. "
] | 1,633,684,030,000 | 1,635,324,058,000 | null | NONE | null | null | null | ## Describe the bug
Caching does not work when using `Dataset.map()` with:
1. a function that cannot be deterministically fingerprinted
2. `num_proc>1`
3. a custom fingerprint set with the `new_fingerprint` argument.
This means that the dataset will be mapped with the function for each and every call, which does not happen if `num_proc==1`. In that case (`num_proc==1`) subsequent calls will load the transformed dataset from the cache, which is the expected behaviour. The example can easily be translated into a unit test.
I have a fix and will submit a pull request asap.
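For context, here is a hedged sketch of the direction of that fix; the per-worker fingerprint scheme mirrors the one described in the associated pull request (#3045), and the function name is illustrative:

```python
# Inside map(), before dispatching shards to workers: derive a distinct but
# deterministic fingerprint for each worker from the user-provided one, so
# that each shard's cache file is stable across runs.
def shard_fingerprint(new_fingerprint: str, rank: int, num_proc: int) -> str:
    return f"{new_fingerprint}-part{rank + 1}-{num_proc}"
```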
## Steps to reproduce the bug
```python
import hashlib
import json
import os
from typing import Dict, Any
import numpy as np
from datasets import load_dataset, Dataset
Batch = Dict[str, Any]
filename = 'example.json'
class Transformation():
"""A transformation with a random state that cannot be fingerprinted"""
def __init__(self):
self.state = np.random.random()
def __call__(self, batch: Batch) -> Batch:
batch['x'] = [np.random.random() for _ in batch['x']]
return batch
def generate_dataset():
"""generate a simple dataset"""
rgn = np.random.RandomState(24)
data = {
'data': [{'x': float(y), 'y': -float(y)} for y in
rgn.random(size=(1000,))]}
if not os.path.exists(filename):
with open(filename, 'w') as f:
f.write(json.dumps(data))
return filename
def process_dataset_with_cache(num_proc=1, remove_cache=False,
cache_expected_to_exist=False):
# load the generated dataset
dset: Dataset = next(
iter(load_dataset('json', data_files=filename, field='data').values()))
new_fingerprint = hashlib.md5("static-id".encode("utf8")).hexdigest()
# get the expected cached path
cache_path = dset._get_cache_file_path(new_fingerprint)
if remove_cache and os.path.exists(cache_path):
os.remove(cache_path)
# check that the cache exists, and print a statement
# if was actually expected to exist
cache_exist = os.path.exists(cache_path)
print(f"> cache file exists={cache_exist}")
if cache_expected_to_exist and not cache_exist:
print("=== Cache does not exist! ====")
# apply the transformation with the new fingerprint
dset = dset.map(
Transformation(),
batched=True,
num_proc=num_proc,
new_fingerprint=new_fingerprint,
desc="mapping dataset with transformation")
generate_dataset()
for num_proc in [1, 2]:
print(f"# num_proc={num_proc}, first pass")
# first pass to generate the cache (always create a new cache here)
process_dataset_with_cache(remove_cache=True,
num_proc=num_proc,
cache_expected_to_exist=False)
print(f"# num_proc={num_proc}, second pass")
# second pass, expects the cache to exist
process_dataset_with_cache(remove_cache=False,
num_proc=num_proc,
cache_expected_to_exist=True)
os.remove(filename)
```
## Expected results
In the above Python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` ("=== Cache does not exist! ====" should not be printed).
When the cache is successfully created, `map()` is called only once.
## Actual results
In the above Python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing "=== Cache does not exist! ====").
Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache.
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.8
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3044/timeline | null | null | false |