url (string, 58-61 chars) | repository_url (string, 1 value) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.26B) | node_id (string, 18-32 chars) | number (int64, 1-4.44k) | title (string, 1-276 chars) | user (dict) | labels (list) | state (string, 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B-1,654B) | updated_at (int64, 1,587B-1,654B) | closed_at (int64, 1,587B-1,654B, nullable) | author_association (string, 3 values) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 1 value) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3328/comments | https://api.github.com/repos/huggingface/datasets/issues/3328/events | https://github.com/huggingface/datasets/pull/3328 | 1,065,015,262 | PR_kwDODunzps4vFTpW | 3,328 | Quick fix error formatting | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,638,013,668,000 | 1,638,192,762,000 | 1,638,192,762,000 | MEMBER | null | While working on a dataset, I got the error
```
TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`.
```
This PR should fix the formatting of this error | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3328/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3328/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3328",
"html_url": "https://github.com/huggingface/datasets/pull/3328",
"diff_url": "https://github.com/huggingface/datasets/pull/3328.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3328.patch",
"merged_at": 1638192762000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3327/comments | https://api.github.com/repos/huggingface/datasets/issues/3327/events | https://github.com/huggingface/datasets/issues/3327 | 1,064,675,888 | I_kwDODunzps4_daow | 3,327 | "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" | {
"login": "eliasws",
"id": 19492473,
"node_id": "MDQ6VXNlcjE5NDkyNDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasws",
"html_url": "https://github.com/eliasws",
"followers_url": "https://api.github.com/users/eliasws/followers",
"following_url": "https://api.github.com/users/eliasws/following{/other_user}",
"gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliasws/subscriptions",
"organizations_url": "https://api.github.com/users/eliasws/orgs",
"repos_url": "https://api.github.com/users/eliasws/repos",
"events_url": "https://api.github.com/users/eliasws/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliasws/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"#3323 "
] | 1,637,943,996,000 | 1,637,945,051,000 | 1,637,945,051,000 | CONTRIBUTOR | null | ## Describe the bug
Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
Probably the reason for this is a wrongly converted assertion.
1.15.1:
`assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)`
1.16.1:
```
if len(query.shape) != 1 or (len(query.shape) == 2 and query.shape[0] != 1):
raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)")
```
## Steps to reproduce the bug
follow the steps described here: https://huggingface.co/course/chapter5/6?fw=tf
```python
question_embedding.shape # (1, 768)
scores, samples = embeddings_dataset.get_nearest_examples(
"embeddings", question_embedding, k=5 # Error
)
# "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
```
## Expected results
Should work without exception
## Actual results
Throws exception
## Environment info
- `datasets` version: 1.15.1
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.12
- PyArrow version: 6.0.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3327/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3327/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3326/comments | https://api.github.com/repos/huggingface/datasets/issues/3326/events | https://github.com/huggingface/datasets/pull/3326 | 1,064,664,479 | PR_kwDODunzps4vEaYG | 3,326 | Fix import `datasets` on python 3.10 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,943,000,000 | 1,637,944,283,000 | 1,637,944,283,000 | MEMBER | null | In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`.
To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators
Fix #3324 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3326/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3326/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3326",
"html_url": "https://github.com/huggingface/datasets/pull/3326",
"diff_url": "https://github.com/huggingface/datasets/pull/3326.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3326.patch",
"merged_at": 1637944283000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3325/comments | https://api.github.com/repos/huggingface/datasets/issues/3325/events | https://github.com/huggingface/datasets/pull/3325 | 1,064,663,075 | PR_kwDODunzps4vEaGO | 3,325 | Update conda dependencies | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,942,887,000 | 1,637,943,637,000 | 1,637,943,636,000 | MEMBER | null | Some dependencies minimum versions were outdated. For example `pyarrow` and `huggingface_hub` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3325/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3325/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3325",
"html_url": "https://github.com/huggingface/datasets/pull/3325",
"diff_url": "https://github.com/huggingface/datasets/pull/3325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3325.patch",
"merged_at": 1637943636000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3324/comments | https://api.github.com/repos/huggingface/datasets/issues/3324/events | https://github.com/huggingface/datasets/issues/3324 | 1,064,661,212 | I_kwDODunzps4_dXDc | 3,324 | Can't import `datasets` in python 3.10 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,637,942,774,000 | 1,637,944,283,000 | 1,637,944,283,000 | MEMBER | null | When importing `datasets` I'm getting this error in python 3.10:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module>
from .arrow_reader import ArrowReader
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module>
from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module>
class InMemoryTable(TableBlock):
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable
def from_pandas(cls, *args, **kwargs):
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper
out = wraps(arrow_table_method)(method)
File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper
wrapper.__wrapped__ = wrapped
AttributeError: readonly attribute
```
This makes the conda build fail.
I'm opening a PR to fix this and do a patch release 1.16.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3324/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3323/comments | https://api.github.com/repos/huggingface/datasets/issues/3323/events | https://github.com/huggingface/datasets/pull/3323 | 1,064,660,452 | PR_kwDODunzps4vEZwq | 3,323 | Fix wrongly converted assert | {
"login": "eliasws",
"id": 19492473,
"node_id": "MDQ6VXNlcjE5NDkyNDcz",
"avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliasws",
"html_url": "https://github.com/eliasws",
"followers_url": "https://api.github.com/users/eliasws/followers",
"following_url": "https://api.github.com/users/eliasws/following{/other_user}",
"gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliasws/subscriptions",
"organizations_url": "https://api.github.com/users/eliasws/orgs",
"repos_url": "https://api.github.com/users/eliasws/repos",
"events_url": "https://api.github.com/users/eliasws/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliasws/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closes #3327 "
] | 1,637,942,739,000 | 1,637,945,052,000 | 1,637,945,051,000 | CONTRIBUTOR | null | Seems like this assertion was replaced by an exception but the condition got wrongly converted. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3323/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3323",
"html_url": "https://github.com/huggingface/datasets/pull/3323",
"diff_url": "https://github.com/huggingface/datasets/pull/3323.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3323.patch",
"merged_at": 1637945051000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3322/comments | https://api.github.com/repos/huggingface/datasets/issues/3322/events | https://github.com/huggingface/datasets/pull/3322 | 1,064,429,705 | PR_kwDODunzps4vD1Ct | 3,322 | Add missing tags to XTREME | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,930,225,000 | 1,638,193,207,000 | 1,638,193,206,000 | CONTRIBUTOR | null | Add missing tags to the XTREME benchmark for better discoverability. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3322/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3322/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3322",
"html_url": "https://github.com/huggingface/datasets/pull/3322",
"diff_url": "https://github.com/huggingface/datasets/pull/3322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3322.patch",
"merged_at": 1638193206000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3321/comments | https://api.github.com/repos/huggingface/datasets/issues/3321/events | https://github.com/huggingface/datasets/pull/3321 | 1,063,858,386 | PR_kwDODunzps4vCBeI | 3,321 | Update URL of tatoeba subset of xtreme | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<s>To be more precise: `os.path.join` is replaced on-the-fly by `xjoin` anyway with patching, to extend it to remote files</s>",
"Oh actually just ignore what I said: they were used to concatenate URLs, which is not recommended. Let me fix that again by appending using `+`"
] | 1,637,865,751,000 | 1,637,922,630,000 | 1,637,922,630,000 | CONTRIBUTOR | null | Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows.
Fix #3320 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3321/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3321/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3321",
"html_url": "https://github.com/huggingface/datasets/pull/3321",
"diff_url": "https://github.com/huggingface/datasets/pull/3321.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3321.patch",
"merged_at": 1637922629000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3320/comments | https://api.github.com/repos/huggingface/datasets/issues/3320/events | https://github.com/huggingface/datasets/issues/3320 | 1,063,531,992 | I_kwDODunzps4_ZDXY | 3,320 | Can't get tatoeba.rus dataset | {
"login": "mmg10",
"id": 65535131,
"node_id": "MDQ6VXNlcjY1NTM1MTMx",
"avatar_url": "https://avatars.githubusercontent.com/u/65535131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mmg10",
"html_url": "https://github.com/mmg10",
"followers_url": "https://api.github.com/users/mmg10/followers",
"following_url": "https://api.github.com/users/mmg10/following{/other_user}",
"gists_url": "https://api.github.com/users/mmg10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mmg10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mmg10/subscriptions",
"organizations_url": "https://api.github.com/users/mmg10/orgs",
"repos_url": "https://api.github.com/users/mmg10/repos",
"events_url": "https://api.github.com/users/mmg10/events{/privacy}",
"received_events_url": "https://api.github.com/users/mmg10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,637,843,471,000 | 1,637,922,629,000 | 1,637,922,629,000 | NONE | null | ## Describe the bug
It gives an error.
> FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus
## Steps to reproduce the bug
```python
data=load_dataset("xtreme","tatoeba.rus", split="validation")
```
## Solution
The library tries to access the **master** branch. In the github repo of facebookresearch, it is in the **main** branch. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3320/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3319/comments | https://api.github.com/repos/huggingface/datasets/issues/3319/events | https://github.com/huggingface/datasets/pull/3319 | 1,062,749,654 | PR_kwDODunzps4u-xdv | 3,319 | Add push_to_hub docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks good to me! :)\r\n\r\nMaybe we can mention that users can also set the `private` argument if they want to keep their dataset private? It would lead nicely into the next section on Privacy.",
"Thanks for your comments, I fixed the capitalization for consistency and added an passage to mention the `private` parameter and to have a nice transition to the Privacy section :)\r\n\r\nI also added the login instruction that was missing before the user can actually upload a dataset."
] | 1,637,778,071,000 | 1,637,851,666,000 | 1,637,851,666,000 | MEMBER | null | Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method.
I just added a section in the "Upload a dataset to the Hub" tutorial.
I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3319/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3319/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3319",
"html_url": "https://github.com/huggingface/datasets/pull/3319",
"diff_url": "https://github.com/huggingface/datasets/pull/3319.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3319.patch",
"merged_at": 1637851666000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3318/comments | https://api.github.com/repos/huggingface/datasets/issues/3318/events | https://github.com/huggingface/datasets/pull/3318 | 1,062,369,717 | PR_kwDODunzps4u9m-k | 3,318 | Finish transition to PyArrow 3.0.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,757,014,000 | 1,637,768,105,000 | 1,637,768,104,000 | CONTRIBUTOR | null | Finish transition to PyArrow 3.0.0 that was started in #3098. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3318/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3318/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3318",
"html_url": "https://github.com/huggingface/datasets/pull/3318",
"diff_url": "https://github.com/huggingface/datasets/pull/3318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3318.patch",
"merged_at": 1637768104000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3317/comments | https://api.github.com/repos/huggingface/datasets/issues/3317/events | https://github.com/huggingface/datasets/issues/3317 | 1,062,284,447 | I_kwDODunzps4_USyf | 3,317 | Add desc parameter to Dataset filter method | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\n`Dataset.map` allows more generic transforms compared to `Dataset.filter`, which purpose is very specific (to filter examples based on a condition). That's why I don't think we need the `desc` parameter there for consistency. #3196 has added descriptions to the `Dataset` methods that call `.map` internally, but not for the `filter` method, so we should do that.\r\n\r\nDo you have a description in mind? Maybe `\"Filtering the dataset\"` or `\"Filtering the indices\"`? If yes, feel free to open a PR.",
"I'm personally ok with adding the `desc` parameter actually. Let's say you have different filters, it can be nice to differentiate between the different filters when they're running no ?",
"@mariosasko the use case is filtering of a dataset prior to tokenization and subsequent training. As the dataset is huge it's just a matter of giving a user (model trainer) some feedback on what's going on. Otherwise, feedback is given for all steps in training preparation and not for filtering and the filtering in my use case lasts about 4-5 minutes. And yes, if there are more filtering stages, as @lhoestq pointed out, it would be nice to give some feedback. I thought desc is there already and got confused when I got the script error. ",
"I don't have a strong opinion on that, so having `desc` as a parameter is also OK."
] | 1,637,751,696,000 | 1,641,407,484,000 | 1,641,407,484,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to users during long operations on Datasets?
**Describe the solution you'd like**
Add desc parameter to Dataset filter method
**Describe alternatives you've considered**
N/A
**Additional context**
N/A
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3317/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3317/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3316/comments | https://api.github.com/repos/huggingface/datasets/issues/3316/events | https://github.com/huggingface/datasets/issues/3316 | 1,062,185,822 | I_kwDODunzps4_T6te | 3,316 | Add RedCaps dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,637,745,782,000 | 1,641,996,795,000 | 1,641,996,795,000 | MEMBER | null | ## Adding a Dataset
- **Name:** RedCaps
- **Description:** Web-curated image-text data created by the people, for the people
- **Paper:** https://arxiv.org/abs/2111.11431
- **Data:** https://redcaps.xyz/
- **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Proposed by @patil-suraj | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3316/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3316/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3315/comments | https://api.github.com/repos/huggingface/datasets/issues/3315/events | https://github.com/huggingface/datasets/pull/3315 | 1,061,678,452 | PR_kwDODunzps4u7WpU | 3,315 | Removing query params for dynamic URL caching | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"IMO it makes more sense to have `ignore_url_params` as an attribute of `DownloadConfig` to avoid defining a new argument in `DownloadManger`'s methods.",
"@mariosasko that would make sense to me too, but it seems like `DownloadConfig` wasn't intended to be modified from a dataset loading script. @lhoestq wdyt?",
"We can expose `DownloadConfig` as a property of `DownloadManager`, and then in the script before the download call we could do: `dl_manager.download_config.ignore_url_params = True`. But yes, let's hear what Quentin thinks.",
"Oh indeed that's a great idea. This parameter is similar to others like `download_config.use_etag` that defines the behavior of the download and caching, so it's better if we have it there, and expose the `download_config`",
"Implemented it via `dl_manager.download_config.ignore_url_params` now, and also added a usage example above :) "
] | 1,637,699,052,000 | 1,637,851,472,000 | 1,637,851,471,000 | MEMBER | null | The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic.
Usage example:
```python
import datasets
class CommonVoice(datasets.GeneratorBasedBuilder):
def _info(self):
return datasets.DatasetInfo()
def _split_generators(self, dl_manager):
dl_manager.download_config.ignore_url_params = True
HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host"
dl_path = dl_manager.download_and_extract(HUGE_URL)
print(dl_path)
HUGE_URL += "&some_new_or_changed_param=12345"
dl_path = dl_manager.download_and_extract(HUGE_URL)
print(dl_path)
dl_manager = datasets.DownloadManager(dataset_name="common_voice")
CommonVoice()._split_generators(dl_manager)
```
Output:
```
/home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6
/home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3315/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3315/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3315",
"html_url": "https://github.com/huggingface/datasets/pull/3315",
"diff_url": "https://github.com/huggingface/datasets/pull/3315.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3315.patch",
"merged_at": 1637851471000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3314/comments | https://api.github.com/repos/huggingface/datasets/issues/3314/events | https://github.com/huggingface/datasets/pull/3314 | 1,061,448,227 | PR_kwDODunzps4u6mdX | 3,314 | Adding arg to pass process rank to `map` | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?"
] | 1,637,682,921,000 | 1,637,754,853,000 | 1,637,754,853,000 | MEMBER | null | This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3314/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3314",
"html_url": "https://github.com/huggingface/datasets/pull/3314",
"diff_url": "https://github.com/huggingface/datasets/pull/3314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3314.patch",
"merged_at": 1637754853000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3313/comments | https://api.github.com/repos/huggingface/datasets/issues/3313/events | https://github.com/huggingface/datasets/issues/3313 | 1,060,933,392 | I_kwDODunzps4_PI8Q | 3,313 | TriviaQA License Mismatch | {
"login": "akhilkedia",
"id": 16665267,
"node_id": "MDQ6VXNlcjE2NjY1MjY3",
"avatar_url": "https://avatars.githubusercontent.com/u/16665267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akhilkedia",
"html_url": "https://github.com/akhilkedia",
"followers_url": "https://api.github.com/users/akhilkedia/followers",
"following_url": "https://api.github.com/users/akhilkedia/following{/other_user}",
"gists_url": "https://api.github.com/users/akhilkedia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akhilkedia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akhilkedia/subscriptions",
"organizations_url": "https://api.github.com/users/akhilkedia/orgs",
"repos_url": "https://api.github.com/users/akhilkedia/repos",
"events_url": "https://api.github.com/users/akhilkedia/events{/privacy}",
"received_events_url": "https://api.github.com/users/akhilkedia/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! You're completely right, this must be mentioned in the dataset card.\r\nIf you're interesting in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the \"Licensing Information\" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md"
] | 1,637,654,415,000 | 1,638,185,061,000 | 1,638,185,061,000 | NONE | null | ## Describe the bug
TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License
Is the License Information on HuggingFace correct? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3313/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3313/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3312/comments | https://api.github.com/repos/huggingface/datasets/issues/3312/events | https://github.com/huggingface/datasets/pull/3312 | 1,060,440,346 | PR_kwDODunzps4u3duV | 3,312 | add bl books genre dataset | {
"login": "davanstrien",
"id": 8995957,
"node_id": "MDQ6VXNlcjg5OTU5NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/davanstrien",
"html_url": "https://github.com/davanstrien",
"followers_url": "https://api.github.com/users/davanstrien/followers",
"following_url": "https://api.github.com/users/davanstrien/following{/other_user}",
"gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}",
"starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions",
"organizations_url": "https://api.github.com/users/davanstrien/orgs",
"repos_url": "https://api.github.com/users/davanstrien/repos",
"events_url": "https://api.github.com/users/davanstrien/events{/privacy}",
"received_events_url": "https://api.github.com/users/davanstrien/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"To fix the CI, feel free to run the `make style` command to format the code.\r\n\r\nThen it also looks like the dummy_data.zip archives are all empty, which makes the tests fail. Can you try regenerating them ? They should have one file inside which is a dummy version of the file at https://bl.iro.bl.uk/downloads/36c7cd20-c8a7-4495-acbe-469b9132c6b1?locale=en",
"@lhoestq, thanks for that feedback. \r\n\r\nI should have made most of these changes now. The `--auto_generate` flag wasn't working because the file wasn't downloaded with a `.csv` extension. I used `--match_text_files \"*\"` to get around this. Because there is a lot of data that isn't annotated using the default line number for the dummy data causes the `annotated_raw` and the `title_genre_classifiction` configs to fail because they don't generate any examples — bumping the line numbers to `250` fixes this. This does make the dummy data a bit bigger, though. \r\n\r\nThe total directory size for the dataset is now `150kb`. Is this okay, or do you want me to generate the dummy data manually instead? ",
"Hi ! yes 150kB is fine :)\r\nFeel free to push your new dummy_data.zip files (I think the current one are still the empty ones)",
"@lhoestq I've pushed those dummy files now and added your other suggestions.",
"The CI failure is unrelated to this PR, merging :)",
"@lhoestq, thanks for all your help with this pull request 😀"
] | 1,637,603,690,000 | 1,638,461,429,000 | 1,638,461,267,000 | CONTRIBUTOR | null | First of all thanks for the fantastic library/collection of datasets 🤗
This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data.
I have tried to create three configurations that provide both an 'easy' version of the dataset if you want to use it for training a genre classification model and a more 'raw' version of the data for other potential use cases for the data. I am open to suggestions if this doesn't make sense.
Similarly, for some of the arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows but I may have missed a more elegant way of dealing with it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3312/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3312/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3312",
"html_url": "https://github.com/huggingface/datasets/pull/3312",
"diff_url": "https://github.com/huggingface/datasets/pull/3312.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3312.patch",
"merged_at": 1638461267000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3311/comments | https://api.github.com/repos/huggingface/datasets/issues/3311/events | https://github.com/huggingface/datasets/issues/3311 | 1,060,387,957 | I_kwDODunzps4_NDx1 | 3,311 | Add WebSRC | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,637,600,313,000 | 1,637,600,313,000 | null | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** WebSRC
- **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata.
- **Paper:** https://arxiv.org/abs/2101.09465
- **Data:** https://x-lance.github.io/WebSRC/dashboard.html#
- **Motivation:** Currently adding MarkupLM to HuggingFace Transformers, which achieves SOTA on this dataset.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3311/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3311/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3310/comments | https://api.github.com/repos/huggingface/datasets/issues/3310/events | https://github.com/huggingface/datasets/issues/3310 | 1,060,098,104 | I_kwDODunzps4_L9A4 | 3,310 | Fatal error condition occurred in aws-c-io | {
"login": "Crabzmatic",
"id": 31850219,
"node_id": "MDQ6VXNlcjMxODUwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/31850219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Crabzmatic",
"html_url": "https://github.com/Crabzmatic",
"followers_url": "https://api.github.com/users/Crabzmatic/followers",
"following_url": "https://api.github.com/users/Crabzmatic/following{/other_user}",
"gists_url": "https://api.github.com/users/Crabzmatic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Crabzmatic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Crabzmatic/subscriptions",
"organizations_url": "https://api.github.com/users/Crabzmatic/orgs",
"repos_url": "https://api.github.com/users/Crabzmatic/repos",
"events_url": "https://api.github.com/users/Crabzmatic/events{/privacy}",
"received_events_url": "https://api.github.com/users/Crabzmatic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Are you having this issue only with this specific dataset, or it also happens with other ones like `squad` ?",
"@lhoestq It happens also on `squad`. It successfully downloads the whole dataset and then crashes on: \r\n\r\n```\r\nFatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n```\r\n\r\nI tested it on Ubuntu and its working OK. Didn't test on non-preview version of Windows 11, `Windows-10-10.0.22504-SP0` is a preview version, not sure if this is causing it.",
"I see the same error in Windows-10.0.19042 as of a few days ago:\r\n\r\n`Fatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS`\r\n\r\npython 3.8.12 h7840368_2_cpython conda-forge\r\nboto3 1.20.11 pyhd8ed1ab_0 conda-forge\r\nbotocore 1.23.11 pyhd8ed1ab_0 conda-forge\r\n\r\n...but I am not using `datasets` (although I might take a look now that I know about it!)\r\n\r\nThe error has occurred a few times over the last two days, but not consistently enough for me to get it with DEBUG. If there is any interest I can report back here, but it seems not unique to `datasets`.",
"I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?",
"> I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?\r\n\r\nAgreed, this issue is not likely a bug in datasets, since I get the identical error without datasets installed.",
"Will close this issue. Bug in `aws-c-io` shouldn't be in `datasets` repo. Nevertheless, it can be useful to know that it happens. Thanks @leehaust @lhoestq ",
"I have also had this issue since a few days, when running scripts using PyCharm in particular, but it does not seem to affect the script from running, only reporting this error at the end of the run.",
"I also get this issue, It appears after my script has finished running. I get the following error message\r\n```\r\nFatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_backtrace_print+0x59) [0x2aabe0479579]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_fatal_assert+0x48) [0x2aabe04696c8]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x13ad3) [0x2aabe0624ad3]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x113ca) [0x2aabe06223ca]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-crt-cpp.so(_ZN3Aws3Crt2Io15ClientBootstrapD1Ev+0x3a) [0x2aabe041cf5a]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././libaws-cpp-sdk-core.so(+0x5f570) [0x2aabe00eb570]\r\n/lib64/libc.so.6(+0x39ce9) [0x2aaaab835ce9]\r\n/lib64/libc.so.6(+0x39d37) [0x2aaaab835d37]\r\n/lib64/libc.so.6(__libc_start_main+0xfc) [0x2aaaab81e55c]\r\npython(+0x1c721d) [0x55555571b21d]\r\nAborted\r\n```\r\nI don't get this issue when running my code in a container, and it seems more relevant to PyArrow but thought a more complete stack trace might be helpful to someone\r\n",
"I created an issue on JIRA:\r\nhttps://issues.apache.org/jira/browse/ARROW-15141",
"@CallumMcMahon Do you have a small reproducer for this problem on Linux? I can reproduce this on Windows but sadly not with linux.",
"Any updates on this issue? I started receiving the same error a few days ago on the amazon reviews"
] | 1,637,584,074,000 | 1,653,406,497,000 | 1,638,224,557,000 | NONE | null | ## Describe the bug
Fatal error when using the library
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('wikiann', 'en')
```
## Expected results
No fatal errors
## Actual results
```
Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
```
## Environment info
- `datasets` version: 1.15.2.dev0
- Platform: Windows-10-10.0.22504-SP0
- Python version: 3.8.12
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3310/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3310/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3309/comments | https://api.github.com/repos/huggingface/datasets/issues/3309/events | https://github.com/huggingface/datasets/pull/3309 | 1,059,496,154 | PR_kwDODunzps4u0Xgm | 3,309 | fix: files counted twice in inferred structure | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I see it creates some errors in the tests.\r\n\r\nAnother solution if needed is to add something like `data_files = list(set(data_files))` after [this line](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/data_files.py#L511)",
"Hi ! Thanks for the correction :)\r\n\r\nYour change seems right, let me look at the errors and try to fix this",
"Not sure if it's due to this change but I tested `load_dataset('dalle-mini/encoded-vqgan_imagenet_f16_16384', streaming=True)` and the `validation` set is empty.",
"So indeed there was an issue with the patterns `*` and `**/*` that would return some files twice. This issue came from the fact that we were not using the right `glob`.\r\n\r\nIndeed we were using `Path.rglob` for local files and `Path.match` for remote files. Since these two methods don't have the same behavior for such patterns, I decided to change that.\r\n\r\nIn particular, we now use `glob.glob` (same as `fsspec` glob) as a reference for data files resolution from patterns. This is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to `glob.glob` that are different from Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level files\r\n- '**/*' matches only at least second level files\r\n\r\nThis way we have a consistent behavior with respect to other python data libraries and there's no overlap anymore between the two patterns.\r\n\r\nSome implementations details:\r\n\r\nTo ensure that we have the same behavior for local files and for files in a remote dataset repository, I decided to use `fsspec` glob for both. This was made possible by implementing the `HfFileSystem` class as a `fsspec` filesystem.\r\n\r\nI pushed those changes directly to your PR - I hope you don't mind. I'm still fixing the remaining tests.\r\nPlease let me know if that solves your problem, and then we can merge !",
"There's still an issue with fsspec's glob - I'll take a look this afternoon",
"I just found out that actually glob.glob and fsspec glob are different haha\r\nglob.glob needs `**/*` and recursive=True to look into deep subdirectories, while fsspec only requires `**`\r\n\r\nI think we can go with fsspec glob for consistency with dask and since it's our main tool for filesystems management",
"To recap:\r\n```\r\nWe use fsspec glob as a reference for data files resolution from patterns.\r\nThis is the same as dask for example.\r\n\r\n/!\\ Here are some behaviors specific to fsspec glob that are different from glob.glob, Path.glob, Path.match or fnmatch:\r\n- '*' matches only first level items\r\n- '**' matches all items\r\n- '**/*' matches all at least second level items\r\n\r\nMore generally:\r\n- `*`` matches any character except a forward-slash (to match just the file or directory name)\r\n- `**`` matches any character including a forward-slash /\r\n```",
"lol Windows… Maybe `Pathlib` for the tests?\r\n\r\nI tested streaming a repo and it worked perfectly now!"
] | 1,637,531,438,000 | 1,637,686,858,000 | 1,637,686,858,000 | CONTRIBUTOR | null | Files were counted twice in a structure like:
```
my_dataset_local_path/
├── README.md
└── data/
├── train/
│ ├── shard_0.csv
│ ├── shard_1.csv
│ ├── shard_2.csv
│ └── shard_3.csv
└── valid/
├── shard_0.csv
└── shard_1.csv
```
The reason is that they were matching both `*train*/*` and `*train*/**/*`.
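As noted in the review discussion, local files were resolved with `Path.rglob`, where this overlap shows up directly. A rough sketch (not the library's actual resolution code, and assuming the layout above exists on disk):
```python
from pathlib import Path

root = Path("my_dataset_local_path")
# rglob prepends "**/", and "**" can also match zero directories,
# so both train patterns resolve the very same shard files:
matched = list(root.rglob("*train*/*")) + list(root.rglob("*train*/**/*"))
print(len(matched))  # each shard under data/train appears twice without deduplication
```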
This PR fixes it. @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3309/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3309/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3309",
"html_url": "https://github.com/huggingface/datasets/pull/3309",
"diff_url": "https://github.com/huggingface/datasets/pull/3309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3309.patch",
"merged_at": 1637686858000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3308/comments | https://api.github.com/repos/huggingface/datasets/issues/3308/events | https://github.com/huggingface/datasets/issues/3308 | 1,059,255,705 | I_kwDODunzps4_IvWZ | 3,308 | "dataset_infos.json" missing for chr_en and mc4 | {
"login": "amitness",
"id": 8587189,
"node_id": "MDQ6VXNlcjg1ODcxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/8587189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amitness",
"html_url": "https://github.com/amitness",
"followers_url": "https://api.github.com/users/amitness/followers",
"following_url": "https://api.github.com/users/amitness/following{/other_user}",
"gists_url": "https://api.github.com/users/amitness/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amitness/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amitness/subscriptions",
"organizations_url": "https://api.github.com/users/amitness/orgs",
"repos_url": "https://api.github.com/users/amitness/repos",
"events_url": "https://api.github.com/users/amitness/events{/privacy}",
"received_events_url": "https://api.github.com/users/amitness/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting :) \r\nWe can easily add the metadata for `chr_en` IMO, but for mC4 it will take more time, since it requires to count the number of examples in each language",
"No problem. I am trying to do some analysis on the metadata of all available datasets. Is reading `metadata_infos.json` for each dataset the correct way to go? \r\n\r\nI noticed that the same information is also available as special variables inside .py file of each dataset. So, I was wondering if `metadata_infos.json` has been deprecated?\r\n\r\n![image](https://user-images.githubusercontent.com/8587189/142914413-a95a1abf-6f3e-4fbe-96e5-16d3ca39c831.png)\r\n",
"The `dataset_infos.json` files have more information and are made to be used to analyze the datasets without having to run/parse the python scripts. Moreover some datasets on the Hugging face don't even have a python script, and for those ones we'll make tools to generate the JSON file automatically :)"
] | 1,637,453,242,000 | 1,642,600,532,000 | null | NONE | null | ## Describe the bug
In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`.
## Steps to reproduce the bug
Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/huggingface/datasets/tree/master/datasets/mc4) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3308/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3308/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3307/comments | https://api.github.com/repos/huggingface/datasets/issues/3307/events | https://github.com/huggingface/datasets/pull/3307 | 1,059,226,297 | PR_kwDODunzps4uzlWa | 3,307 | Add IndoNLI dataset | {
"login": "afaji",
"id": 6201626,
"node_id": "MDQ6VXNlcjYyMDE2MjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/6201626?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afaji",
"html_url": "https://github.com/afaji",
"followers_url": "https://api.github.com/users/afaji/followers",
"following_url": "https://api.github.com/users/afaji/following{/other_user}",
"gists_url": "https://api.github.com/users/afaji/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afaji/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afaji/subscriptions",
"organizations_url": "https://api.github.com/users/afaji/orgs",
"repos_url": "https://api.github.com/users/afaji/repos",
"events_url": "https://api.github.com/users/afaji/events{/privacy}",
"received_events_url": "https://api.github.com/users/afaji/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq thanks for the review! I've modified the labels to follow other NLI datasets.\r\nPlease review my change and let me know if I miss anything."
] | 1,637,441,163,000 | 1,637,851,908,000 | 1,637,851,908,000 | CONTRIBUTOR | null | This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3307/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3307/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3307",
"html_url": "https://github.com/huggingface/datasets/pull/3307",
"diff_url": "https://github.com/huggingface/datasets/pull/3307.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3307.patch",
"merged_at": 1637851908000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3306/comments | https://api.github.com/repos/huggingface/datasets/issues/3306/events | https://github.com/huggingface/datasets/issues/3306 | 1,059,185,860 | I_kwDODunzps4_IeTE | 3,306 | nested sequence feature won't encode example if the first item of the outside sequence is an empty list | {
"login": "function2-llx",
"id": 38486514,
"node_id": "MDQ6VXNlcjM4NDg2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/38486514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/function2-llx",
"html_url": "https://github.com/function2-llx",
"followers_url": "https://api.github.com/users/function2-llx/followers",
"following_url": "https://api.github.com/users/function2-llx/following{/other_user}",
"gists_url": "https://api.github.com/users/function2-llx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/function2-llx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/function2-llx/subscriptions",
"organizations_url": "https://api.github.com/users/function2-llx/orgs",
"repos_url": "https://api.github.com/users/function2-llx/repos",
"events_url": "https://api.github.com/users/function2-llx/events{/privacy}",
"received_events_url": "https://api.github.com/users/function2-llx/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"knock knock",
"Hi, thanks for reporting! I've linked a PR that should fix the issue.",
"I've checked the PR and it looks great, thanks a lot!"
] | 1,637,427,474,000 | 1,638,968,535,000 | 1,638,968,535,000 | NONE | null | ## Describe the bug
As the title says, a nested sequence feature won't encode an example if the first item of the outside sequence is an empty list.
## Steps to reproduce the bug
```python
from datasets import Features, Sequence, ClassLabel
features = Features({
'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))),
})
print(features.encode_batch({
'x': [
[['a'], ['b']],
[[], ['b']],
]
}))
```
## Expected results
It should print `{'x': [[[0], [1]], [[], [1]]]}`
## Actual results
It prints `{'x': [[[0], [1]], [[], ['b']]]}`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.0
## Additional information
I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
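To make the failure mode concrete, here is a simplified, hypothetical version of that logic (not the actual `datasets` source): deciding how to encode based only on the first sub-list silently skips encoding when that sub-list is empty.
```python
# hypothetical illustration only, not the real encode_nested_example implementation
def encode_nested(names, nested):
    first = nested[0]
    if first and isinstance(first[0], str):  # decision taken from the first sub-list alone
        return [[names.index(v) for v in sub] for sub in nested]
    return nested  # an empty first sub-list means nothing gets encoded

names = ['a', 'b']
print(encode_nested(names, [['a'], ['b']]))  # [[0], [1]]   -> encoded
print(encode_nested(names, [[], ['b']]))     # [[], ['b']]  -> strings leak through
```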
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3306/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3306/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3305/comments | https://api.github.com/repos/huggingface/datasets/issues/3305/events | https://github.com/huggingface/datasets/pull/3305 | 1,059,161,000 | PR_kwDODunzps4uzZWv | 3,305 | asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py`` | {
"login": "Ishan-Kumar2",
"id": 46553104,
"node_id": "MDQ6VXNlcjQ2NTUzMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishan-Kumar2",
"html_url": "https://github.com/Ishan-Kumar2",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions",
"organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs",
"repos_url": "https://api.github.com/users/Ishan-Kumar2/repos",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,419,883,000 | 1,637,605,472,000 | 1,637,600,893,000 | CONTRIBUTOR | null | Addresses #3171
Replaces asserts with exceptions in ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``, and modifies the corresponding tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3305/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3305/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3305",
"html_url": "https://github.com/huggingface/datasets/pull/3305",
"diff_url": "https://github.com/huggingface/datasets/pull/3305.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3305.patch",
"merged_at": 1637600893000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3304/comments | https://api.github.com/repos/huggingface/datasets/issues/3304/events | https://github.com/huggingface/datasets/issues/3304 | 1,059,130,494 | I_kwDODunzps4_IQx- | 3,304 | Dataset object has no attribute `to_tf_dataset` | {
"login": "RajkumarGalaxy",
"id": 59993678,
"node_id": "MDQ6VXNlcjU5OTkzNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RajkumarGalaxy",
"html_url": "https://github.com/RajkumarGalaxy",
"followers_url": "https://api.github.com/users/RajkumarGalaxy/followers",
"following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}",
"gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions",
"organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs",
"repos_url": "https://api.github.com/users/RajkumarGalaxy/repos",
"events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}",
"received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"The issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n```\r\n# upgrade transformers and datasets to latest versions\r\n!pip install --upgrade transformers\r\n!pip install --upgrade datasets\r\n```\r\n\r\nRegards!"
] | 1,637,409,839,000 | 1,637,478,445,000 | 1,637,478,445,000 | NONE | null | I am following HuggingFace Course. I am at Fine-tuning a model.
Link: https://huggingface.co/course/chapter3/2?fw=tf
I use tokenize_function and `map` as mentioned in the course to process data.
```python
# define a tokenize function
def Tokenize_function(example):
    return tokenizer(example['sentence'], truncation=True)

# tokenize entire data
tokenized_data = raw_data.map(Tokenize_function, batched=True)
```
I get a `Dataset` object at this point. When I try converting this to a TF dataset object as mentioned in the course, it throws the following error.
```python
# convert to TF dataset
train_data = tokenized_data["train"].to_tf_dataset(
    columns = ['attention_mask', 'input_ids', 'token_type_ids'],
    label_cols = ['label'],
    shuffle = True,
    collate_fn = data_collator,
    batch_size = 8
)
```
Output:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_42/103099799.py in <module>
 1 # convert to TF dataset
----> 2 train_data = tokenized_data["train"].to_tf_dataset( \
 3 columns = ['attention_mask', 'input_ids', 'token_type_ids'], \
 4 label_cols = ['label'], \
 5 shuffle = True, \
AttributeError: 'Dataset' object has no attribute 'to_tf_dataset'
```
When I inspect `dir(tokenized_data["train"])`, there is no method or attribute named `to_tf_dataset`.
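As the comment in this thread notes, this comes from an outdated `datasets` installation; `to_tf_dataset` only exists in more recent releases. A quick sketch to confirm the installed version before upgrading:
```python
import datasets
print(datasets.__version__)  # if this is an old release, upgrading adds Dataset.to_tf_dataset
```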
Why do I get this error? And how to clear this?
Please help me. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3304/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3304/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3303/comments | https://api.github.com/repos/huggingface/datasets/issues/3303/events | https://github.com/huggingface/datasets/issues/3303 | 1,059,129,732 | I_kwDODunzps4_IQmE | 3,303 | DataCollatorWithPadding: TypeError | {
"login": "RajkumarGalaxy",
"id": 59993678,
"node_id": "MDQ6VXNlcjU5OTkzNjc4",
"avatar_url": "https://avatars.githubusercontent.com/u/59993678?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/RajkumarGalaxy",
"html_url": "https://github.com/RajkumarGalaxy",
"followers_url": "https://api.github.com/users/RajkumarGalaxy/followers",
"following_url": "https://api.github.com/users/RajkumarGalaxy/following{/other_user}",
"gists_url": "https://api.github.com/users/RajkumarGalaxy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/RajkumarGalaxy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RajkumarGalaxy/subscriptions",
"organizations_url": "https://api.github.com/users/RajkumarGalaxy/orgs",
"repos_url": "https://api.github.com/users/RajkumarGalaxy/repos",
"events_url": "https://api.github.com/users/RajkumarGalaxy/events{/privacy}",
"received_events_url": "https://api.github.com/users/RajkumarGalaxy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"\r\n> \r\n> Input:\r\n> \r\n> ```\r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"tf\")\r\n> ```\r\n> \r\n> Output:\r\n> \r\n> ```\r\n> TypeError Traceback (most recent call last)\r\n> /tmp/ipykernel_42/1563280798.py in <module>\r\n> 1 checkpoint = 'bert-base-uncased'\r\n> 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\r\n> TypeError: __init__() got an unexpected keyword argument 'return_tensors'\r\n> ```\r\n> \r\n\r\nThe issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n`# upgrade transformers and datasets to latest versions`\r\n`!pip install --upgrade transformers`\r\n`!pip install --upgrade datasets`\r\n\r\nCheers!"
] | 1,637,409,595,000 | 1,637,478,337,000 | 1,637,478,337,000 | NONE | null | Hi,
I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a CPU-only-device or a GPU-device.
Input:
```python
checkpoint = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```
Output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_42/1563280798.py in <module>
1 checkpoint = 'bert-base-uncased'
2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
```
When I call `help` method, it too confirms that there is no argument `return_tensors`.
Input:
```
help(DataCollatorWithPadding.__init__)
```
Output:
```
Help on function __init__ in module transformers.data.data_collator:
__init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) -> None
```
But, the source file *[Data Collator - docs](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorwithpadding)* says that there is such an argument. By default, it returns Pytorch tensors while I need TF tensors.
What am I missing?
Please help me. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3303/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3303/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3302/comments | https://api.github.com/repos/huggingface/datasets/issues/3302/events | https://github.com/huggingface/datasets/pull/3302 | 1,058,907,168 | PR_kwDODunzps4uynjc | 3,302 | fix old_val typo in f-string | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,355,068,000 | 1,637,878,483,000 | 1,637,600,659,000 | CONTRIBUTOR | null |
This PR is to correct a typo in #3277 that @Carlosbogo revealed in a comment.
Related closed issue : #3257
Sorry about that 😅. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3302/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3302/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3302",
"html_url": "https://github.com/huggingface/datasets/pull/3302",
"diff_url": "https://github.com/huggingface/datasets/pull/3302.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3302.patch",
"merged_at": 1637600659000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3301/comments | https://api.github.com/repos/huggingface/datasets/issues/3301/events | https://github.com/huggingface/datasets/pull/3301 | 1,058,718,957 | PR_kwDODunzps4uyA9o | 3,301 | Add wikipedia tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,339,965,000 | 1,637,340,570,000 | 1,637,340,569,000 | MEMBER | null | Add the missing tags to the wikipedia dataset card.
I also added the missing language codes to our list of language codes.
This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3301/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3301/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3301",
"html_url": "https://github.com/huggingface/datasets/pull/3301",
"diff_url": "https://github.com/huggingface/datasets/pull/3301.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3301.patch",
"merged_at": 1637340569000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3300/comments | https://api.github.com/repos/huggingface/datasets/issues/3300/events | https://github.com/huggingface/datasets/issues/3300 | 1,058,644,459 | I_kwDODunzps4_GaHr | 3,300 | ❓ Dataset loading script from Hugging Face Hub | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! In the next version of `datasets`, your train and test splits will be correctly separated (changes from #3027) if you create a dataset repository with only your CSV files.\r\n\r\nAlso it seems that you overwrite the `data_files` and `data_dir` arguments in your code, when you instantiate the AGNewsConfig objects. Those parameters are not necessary since you already know which files you want to load.\r\n\r\nYou can find an example on how to specify which file the dataset has to download in this [example script](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107):\r\n```python\r\n_URLS = {\r\n \"train\": \"train-v1.1.json\", # you can use a URL or a relative path from the python script to your file in the repository\r\n \"dev\": \"dev-v1.1.json\",\r\n}\r\n```\r\n```python\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n\r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": downloaded_files[\"train\"]}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={\"filepath\": downloaded_files[\"dev\"]}),\r\n ]\r\n```",
"Also I think the viewer will be updated when you fix the dataset script, let me know if it doesn't",
"Hi @lhoestq,\r\n\r\nThanks a lot for the super quick answer!\r\n\r\nYour suggestion solves my issue. I am now able to load the dataset properly 🚀 \r\nHowever, the dataviewer is not working yet.\r\n\r\nReally, thanks a lot for your help and consideration!\r\n\r\nBest,\r\nPietro",
"Great ! We'll take a look at the viewer to fix it",
"@lhoestq I think I am having a related problem.\r\nMy call to load_dataset() looks like this:\r\n\r\n```\r\n datasets = load_dataset(\r\n os.path.abspath(layoutlmft.data.datasets.xfun.__file__),\r\n f\"xfun.{data_args.lang}\",\r\n additional_langs=data_args.additional_langs,\r\n keep_in_memory=True,\r\n )\r\n\r\n```\r\n\r\nMy _split_generation code is:\r\n\r\n```\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n downloaded_file = dl_manager.download_and_extract(\"https://guillaumejaume.github.io/FUNSD/dataset.zip\")\r\n return [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/training_data/\"}\r\n ),\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TEST, gen_kwargs={\"filepath\": f\"{downloaded_file}/dataset/testing_data/\"}\r\n ),\r\n ]\r\n\r\n```\r\nHowever I get the error \"TypeError: _generate_examples() got an unexpected keyword argument 'filepath'\"\r\nThe path looks right and I see the data in the path so I think the only problem I have is that it doesn't like the key \"filepath\". However, the documentation (example [here](https://huggingface.co/datasets/lhoestq/custom_squad/blob/main/custom_squad.py#L101-L107)) seems to show that this is the correct parameter. \r\n\r\nHere is the full stack trace:\r\n\r\n```\r\nDownloading and preparing dataset xfun/xfun.en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/caseygre/.cache/huggingface/datasets/xfun/xfun.en/0.0.0/96b8cb7c57f6f822f0ab37ae3be7b82d84ac57062e774c9361ccf0a4b9ef61cc...\r\nTraceback (most recent call last):\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 574, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 652, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/Users/caseygre/PycharmProjects/aegis-ml-new/unilm/venv-LayoutLM/lib/python3.9/site-packages/datasets/builder.py\", line 975, in _prepare_split\r\n generator = self._generate_examples(**split_generator.gen_kwargs)\r\nTypeError: _generate_examples() got an unexpected keyword argument 'filepath'\r\npython-BaseException\r\n```",
"Hi ! The `gen_kwargs` dictionary is passed to `_generate_examples`, so in your case it must be defined this way:\r\n```python\r\ndef _generate_examples(self, filepath):\r\n ...\r\n```\r\n\r\nAnd here is an additional tip: you can use `os.path.join(downloaded_file, \"dataset/testing_data\")` instead of `f\"downloaded_file}/dataset/testing_data/\"` to get compatibility with Windows and streaming.\r\n\r\nIndeed Windows uses a backslash separator, not a slash, and streaming uses chained URLs (like `zip://dataset/testing_data::https://https://guillaumejaume.github.io/FUNSD/dataset.zip` for example)",
"Thanks for you quick reply @lhoestq and so sorry for my very delayed response.\r\nWe have gotten around the error another way but I will try to duplicate this when I can. We may have had \"filepaths\" instead of \"filepath\" in our def of _generate_examples() and not noticed the difference. If I find a more useful answer for others I will add to this ticket so they know what the issue was.\r\nNote: we do have our own _generate_examples() defined with the same def as Quentin has. (But one version does have \"filepaths\".)\r\n",
"Fixed in the viewer: https://huggingface.co/datasets/pietrolesci/ag_news"
] | 1,637,335,252,000 | 1,640,170,676,000 | 1,640,170,676,000 | NONE | null | Hi there,
I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do so I have encountered certain problems as detailed below.
Issues I have encountered:
- Without a loading script, the train and test files are loaded together into a single `dataset.Dataset` -> so I wrote a loading script. Also, I need a loading script, otherwise I cannot specify multiple configurations
- Once my loading script is working locally, I do not manage to make it work on the hub. In particular, I would like to be able to load the dataset like this
```python
load_dataset("pietrolesci/ag_news", name="my_configuration")
```
Apparently, `load_dataset` is able to pick up the loading script from the hub and run it. However, it errors because it is unable to find the files. The structure of my hub repo is the following:
```
ag_news.py
train.csv
test.csv
```
and in the loading script I specify `data_dir=Path(__file__).parent` and `data_files=DataFilesDict({"train": "train.csv", "test": "test.csv"})`. In the documentation I could not find info regarding loading a dataset from the hub using a loading script present on the hub.
Any suggestion is very much appreciated.
Best,
Pietro
Link to the hub repo: https://huggingface.co/datasets/pietrolesci/ag_news
BONUS: how can I make the data viewer work in this specific case? :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3300/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3300/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3299/comments | https://api.github.com/repos/huggingface/datasets/issues/3299/events | https://github.com/huggingface/datasets/issues/3299 | 1,058,518,213 | I_kwDODunzps4_F7TF | 3,299 | Add option to find unique elements in nested sequences when calling `Dataset.unique` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @mariosasko!\r\n\r\nHas this been patched into any of the releases?",
"Hi! Not yet, would you be interested in contributing a PR? I can give you some pointers if needed. "
] | 1,637,327,766,000 | 1,653,391,917,000 | null | CONTRIBUTOR | null | It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3299/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3299/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3298/comments | https://api.github.com/repos/huggingface/datasets/issues/3298/events | https://github.com/huggingface/datasets/issues/3298 | 1,058,420,201 | I_kwDODunzps4_FjXp | 3,298 | Agnews dataset viewer is not working | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/users/pietrolesci/followers",
"following_url": "https://api.github.com/users/pietrolesci/following{/other_user}",
"gists_url": "https://api.github.com/users/pietrolesci/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pietrolesci/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pietrolesci/subscriptions",
"organizations_url": "https://api.github.com/users/pietrolesci/orgs",
"repos_url": "https://api.github.com/users/pietrolesci/repos",
"events_url": "https://api.github.com/users/pietrolesci/events{/privacy}",
"received_events_url": "https://api.github.com/users/pietrolesci/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting\r\nWe've already fixed the code that generates the preview for this dataset, we'll release the fix soon :)",
"Hi @lhoestq, thanks for your feedback!",
"Fixed in the viewer.\r\n\r\nhttps://huggingface.co/datasets/ag_news"
] | 1,637,320,739,000 | 1,640,103,845,000 | 1,640,103,845,000 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** https://huggingface.co/datasets/ag_news
Hi there, the `ag_news` dataset viewer is not working.
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3298/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3298/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3297/comments | https://api.github.com/repos/huggingface/datasets/issues/3297/events | https://github.com/huggingface/datasets/issues/3297 | 1,058,263,859 | I_kwDODunzps4_E9Mz | 3,297 | .map() cache is wrongfully reused - only happens when the mapping function is imported | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting. Indeed this is a current limitation of the usage we have of `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module not installed with pip should be dumped completely, rather than only taking their locations into account",
"I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.\r\nIn the meanwhile I think that adding a warning and the workaround somewhere in the documentation can be helpful."
] | 1,637,309,916,000 | 1,638,834,340,000 | null | CONTRIBUTOR | null | ## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason for this is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411).
I guess it is not a widespread case, but it can still lead to unwanted results that go unnoticed.
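A quick way to observe this by-reference behavior (a sketch; it assumes `dill` is installed and that the `a.py` from the reproduction below is importable):
```python
import dill
from a import mapping_func  # defined in module "a", not in __main__

# Like pickle, dill stores importable functions by reference: the payload only
# records the module and qualified name ("a", "mapping_func"), not the function
# body, so editing a.py does not change these bytes or the resulting fingerprint.
print(dill.dumps(mapping_func))
```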
## Steps to reproduce the bug
Create files `a.py` and `b.py`:
```python
# a.py
from datasets import load_dataset
def main():
    squad = load_dataset("squad")
    squad.map(mapping_func, batched=True)

def mapping_func(examples):
    ID_LENGTH = 4
    examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]]
    return examples

if __name__ == "__main__":
    main()
```
```python
# b.py
from datasets import load_dataset
from a import mapping_func
def main():
    squad = load_dataset("squad")
    squad.map(mapping_func, batched=True)

if __name__ == "__main__":
    main()
```
Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...".
Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads the result of the previous mapping function from the cache.
## Expected results
Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...".
Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result.
## Workaround
Put the mapping function inside a dummy class as a static method:
```python
# a.py
class MappingFuncClass:
    @staticmethod
    def mapping_func(examples):
        ID_LENGTH = 4
        examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]]
        return examples
```
```python
# b.py
from datasets import load_dataset
from a import MappingFuncClass
def main():
    squad = load_dataset("squad")
    squad.map(MappingFuncClass.mapping_func, batched=True)

if __name__ == "__main__":
    main()
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3297/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3297/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3296/comments | https://api.github.com/repos/huggingface/datasets/issues/3296/events | https://github.com/huggingface/datasets/pull/3296 | 1,057,970,638 | PR_kwDODunzps4uvlQz | 3,296 | Fix temporary dataset_path creation for URIs related to remote fs | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for the fix :) \r\n\r\nI think this should be `extract_path_from_uri` 's responsibility to strip the extra `/` from a badly formatted path like `hdfs:///absolute/path` (or raise an error). Do you think you can simply do the changes in `extract_path_from_uri` ? This way this fix will be available for all the other parts of the lib that need to extract the inner path from an URI of a remote filesystem\r\n\r\nThen we can also keep your test cases but simply apply them to `extract_path_from_uri` instead",
"Hi @lhoestq! No problem! Thanks for your interest! :)\r\n\r\nI think stripping the 3rd `/` in `hdfs:///absolute/path` inside `extract_path_from_uri` is not the solution. When I provide `hdfs:///absolute/path` to `extract_path_from_uri` we want `/absolute/path` to be returned, as it does now (at least in the case of URIs with `hdfs` schemas, for `s3` is different as it should start with a bucket name).\r\n\r\nThe problem comes in line 1041 in the original code below:\r\n\r\nhttps://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042\r\n\r\nLets assume the following parameters for line 1041 after `extract_path_from_uri` has removed the `hdfs` schema part and the `://` from `hdfs:///absolute/path`, and `get_temporary_cache_files_directory()` returns `/tmp/a1b2b3c4`, as it is shown below: \r\n\r\n```python\r\nsrc_dataset_path = '/absolute/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nAfter passing those paths to the `Path` object, `dataset_path` contains only `/absolute/path`; that is, it has lost the temporary directory path. This is because, when two (or more) absolute paths are passed to the `Path` function, only the last one is taken. However, if the contents of those variables are:\r\n\r\n```python\r\nsrc_dataset_path = 'relative/path'\r\ntmp_dir = '/tmp/a1b2b3c4'\r\ndataset_path = Path(tmp_dir, src_dataset_path)\r\n```\r\n\r\nthen `dataset_path` contains `/tmp/a1b2b3c4/relative/path` as expected.\r\n\r\nAbsolute paths are allowed in hdfs URIs, so that's why I added the extra function `build_local_temp_path` in the PR; so in case the second argument is an absolute path, it still will create the correct absolute path by concatenating the temp dir and the path passed by converting it to a relative path (and it also works for windows paths too.) It also allows to add the tests, checking that the main combinations are ok.\r\n\r\nI've checked all the places where the result of `extract_path_from_uri` is used, and as far as I've seen this is the only place where it is concatenated with another possible absolute path, so no need to add `build_local_temp_path` anywhere else. \r\n"
] | 1,637,278,365,000 | 1,638,787,504,000 | 1,638,787,504,000 | CONTRIBUTOR | null | This aims to close #3295 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3296/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3296/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3296",
"html_url": "https://github.com/huggingface/datasets/pull/3296",
"diff_url": "https://github.com/huggingface/datasets/pull/3296.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3296.patch",
"merged_at": 1638787503000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3295/comments | https://api.github.com/repos/huggingface/datasets/issues/3295/events | https://github.com/huggingface/datasets/issues/3295 | 1,057,954,892 | I_kwDODunzps4_DxxM | 3,295 | Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}",
"gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions",
"organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs",
"repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Good catch and thanks for opening a PR :)\r\n\r\nI just responded in your PR"
] | 1,637,277,842,000 | 1,638,787,504,000 | 1,638,787,504,000 | CONTRIBUTOR | null | ## Describe the bug
When trying to build a temporary dataset path from a remote URI in this block of code:
https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042
the result is not the expected one when passing an absolute path in a URI like `hdfs:///absolute/path`.
## Steps to reproduce the bug
```python
dataset_path = "hdfs:///absolute/path"
src_dataset_path = extract_path_from_uri(dataset_path)
tmp_dir = get_temporary_cache_files_directory()
dataset_path = Path(tmp_dir, src_dataset_path)
print(dataset_path)
```
## Expected results
With the code above, we would expect a value in `dataset_path` similar to:
`/tmp/tmpnwxyvao5/absolute/path`
## Actual results
However, we get a `dataset_path` value like:
`/absolute/path`
This is because this line here: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1041
returns the last absolute path when two absolute paths (the one in `tmp_dir` and the one extracted from the URI in `src_dataset_path`) are passed as arguments.
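A minimal sketch of the kind of helper that avoids this (the fix PR #3296 mentions a `build_local_temp_path` function; the implementation below is an assumption, not the merged code):
```python
from pathlib import Path, PurePosixPath

def build_local_temp_path(tmp_dir: str, src_dataset_path: str) -> Path:
    # Strip the leading "/" so that joining keeps the temporary directory as the prefix.
    relative = (
        PurePosixPath(src_dataset_path).relative_to("/")
        if src_dataset_path.startswith("/")
        else src_dataset_path
    )
    return Path(tmp_dir, relative)

print(build_local_temp_path("/tmp/tmpnwxyvao5", "/absolute/path"))
# /tmp/tmpnwxyvao5/absolute/path
```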
## Environment info
- `datasets` version: 1.13.3
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3295/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3295/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3294/comments | https://api.github.com/repos/huggingface/datasets/issues/3294/events | https://github.com/huggingface/datasets/issues/3294 | 1,057,495,473 | I_kwDODunzps4_CBmx | 3,294 | Add Natural Adversarial Objects dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [] | 1,637,249,684,000 | 1,638,964,802,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** Natural Adversarial Objects (NAO)
- **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence.
- **Paper:** https://arxiv.org/abs/2111.04204v1
- **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8
- **Motivation:** interesting object detection dataset, useful for studying misclassifications
cc @NielsRogge
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3294/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3294/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3293/comments | https://api.github.com/repos/huggingface/datasets/issues/3293/events | https://github.com/huggingface/datasets/pull/3293 | 1,057,004,431 | PR_kwDODunzps4uslLN | 3,293 | Pin version exclusion for Markdown | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,218,561,000 | 1,637,231,285,000 | 1,637,231,284,000 | MEMBER | null | As Markdown version 3.3.5 has a bug, it is better to exclude it in case the users have it previously installed in their environment.
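A sketch of what such an exclusion could look like (the variable name and the file it lives in are assumptions; only the `!=` specifier matters):
```python
# hypothetical entry in setup.py's docs/tests requirements
DOCS_REQUIRE = [
    "markdown!=3.3.5",  # exclude only the buggy release, keep all other versions allowed
]
```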
Related to #3289, #3286. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3293/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3293/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3293",
"html_url": "https://github.com/huggingface/datasets/pull/3293",
"diff_url": "https://github.com/huggingface/datasets/pull/3293.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3293.patch",
"merged_at": 1637231284000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3292/comments | https://api.github.com/repos/huggingface/datasets/issues/3292/events | https://github.com/huggingface/datasets/issues/3292 | 1,056,962,554 | I_kwDODunzps4-__f6 | 3,292 | Not able to load 'wikipedia' dataset | {
"login": "abhibisht89",
"id": 13541524,
"node_id": "MDQ6VXNlcjEzNTQxNTI0",
"avatar_url": "https://avatars.githubusercontent.com/u/13541524?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhibisht89",
"html_url": "https://github.com/abhibisht89",
"followers_url": "https://api.github.com/users/abhibisht89/followers",
"following_url": "https://api.github.com/users/abhibisht89/following{/other_user}",
"gists_url": "https://api.github.com/users/abhibisht89/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhibisht89/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhibisht89/subscriptions",
"organizations_url": "https://api.github.com/users/abhibisht89/orgs",
"repos_url": "https://api.github.com/users/abhibisht89/repos",
"events_url": "https://api.github.com/users/abhibisht89/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhibisht89/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Indeed it looks like the code snippet on the Hugging face Hub doesn't show the second parameter\r\n\r\n![image](https://user-images.githubusercontent.com/42851186/142649237-45ba55c5-1a64-4c30-8692-2c8120572f92.png)\r\n\r\nThanks for reporting, I'm taking a look\r\n"
] | 1,637,214,078,000 | 1,637,340,569,000 | 1,637,340,569,000 | NONE | null | ## Describe the bug
I am following the instructions for loading the wikipedia dataset using `datasets`. However, I am getting the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia")
```
## Expected results
A clear and concise description of the expected results.
## Actual results
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
339 "Config name is missing."
340 "\nPlease pick one among the available configs: %s" % list(self.builder_configs.keys())
--> 341 + "\nExample of usage:\n\t`{}`".format(example_of_usage)
342 )
343 builder_config = self.BUILDER_CONFIGS[0]
ValueError: Config name is missing.
Please pick one among the available configs: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', 
'20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
Example of usage:
`load_dataset('wikipedia', '20200501.aa')`
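For reference, passing one of the listed config names makes the call go through (a sketch; `20200501.en` is simply taken from the list of available configs above):
```python
from datasets import load_dataset

# the second positional argument selects the snapshot/language config
dataset = load_dataset("wikipedia", "20200501.en")
```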
I think the config name argument is missing from the `load_dataset` call shown in the instructions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3292/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3292/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3291/comments | https://api.github.com/repos/huggingface/datasets/issues/3291/events | https://github.com/huggingface/datasets/pull/3291 | 1,056,689,876 | PR_kwDODunzps4urikR | 3,291 | Use f-strings in the dataset scripts | {
"login": "Carlosbogo",
"id": 84228424,
"node_id": "MDQ6VXNlcjg0MjI4NDI0",
"avatar_url": "https://avatars.githubusercontent.com/u/84228424?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Carlosbogo",
"html_url": "https://github.com/Carlosbogo",
"followers_url": "https://api.github.com/users/Carlosbogo/followers",
"following_url": "https://api.github.com/users/Carlosbogo/following{/other_user}",
"gists_url": "https://api.github.com/users/Carlosbogo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Carlosbogo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Carlosbogo/subscriptions",
"organizations_url": "https://api.github.com/users/Carlosbogo/orgs",
"repos_url": "https://api.github.com/users/Carlosbogo/repos",
"events_url": "https://api.github.com/users/Carlosbogo/events{/privacy}",
"received_events_url": "https://api.github.com/users/Carlosbogo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,187,619,000 | 1,637,599,216,000 | 1,637,599,216,000 | CONTRIBUTOR | null | Uses f-strings to format the .py files in the dataset folder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3291/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3291/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3291",
"html_url": "https://github.com/huggingface/datasets/pull/3291",
"diff_url": "https://github.com/huggingface/datasets/pull/3291.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3291.patch",
"merged_at": 1637599216000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3290/comments | https://api.github.com/repos/huggingface/datasets/issues/3290/events | https://github.com/huggingface/datasets/pull/3290 | 1,056,414,856 | PR_kwDODunzps4uqzcv | 3,290 | Make several audio datasets streamable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Reading FLAC (for `librispeech_asr`) works OK for me (`soundfile` version: `0.10.3`):\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/librispeech_asr/librispeech_asr.py\", \"clean\", streaming=True, split=\"train.100\")\r\n\r\nIn [3]: item = next(iter(ds))\r\n\r\nIn [4]: item.keys()\r\nOut[4]: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\n\r\nIn [5]: item[\"file\"]\r\nOut[5]: '374-180298-0000.flac'\r\n\r\nIn [6]: item[\"audio\"].keys()\r\nOut[6]: dict_keys(['path', 'array', 'sampling_rate'])\r\n\r\nIn [7]: item[\"audio\"][\"sampling_rate\"]\r\nOut[7]: 16000\r\n\r\nIn [8]: item[\"audio\"][\"path\"]\r\nOut[8]: '374-180298-0000.flac'\r\n\r\nIn [9]: item[\"audio\"][\"array\"].shape\r\nOut[9]: (232480,)\r\n```",
"Oh cool ! I think this might have come from an issue with my local `soundfile` installation then",
"I'll do `multilingual_librispeech` in a separate PR since it requires the data to be in another format (in particular separate the train/dev/test splits in different files)",
"@lhoestq @albertvillanova - think it would have been nice to have added a big message at the top stating that this is a breaking change and ping `transformers` people a bit more here."
] | 1,637,171,021,000 | 1,643,749,252,000 | 1,637,334,537,000 | MEMBER | null | <s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s>
Make those audio datasets streamable:
- [x] common_voice
- [x] openslr
- [x] vivos
- [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok*
- [ ] <s>multilingual_librispeech (yet to be converted)</s> *TODO in a separate PR* | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3290/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3290/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3290",
"html_url": "https://github.com/huggingface/datasets/pull/3290",
"diff_url": "https://github.com/huggingface/datasets/pull/3290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3290.patch",
"merged_at": 1637334537000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3289/comments | https://api.github.com/repos/huggingface/datasets/issues/3289/events | https://github.com/huggingface/datasets/pull/3289 | 1,056,323,715 | PR_kwDODunzps4uqf79 | 3,289 | Unpin markdown for build_docs now that it's fixed | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,166,173,000 | 1,637,166,189,000 | 1,637,166,188,000 | MEMBER | null | `markdown`'s bug has been fixed, so this PR reverts #3286 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3289/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3289",
"html_url": "https://github.com/huggingface/datasets/pull/3289",
"diff_url": "https://github.com/huggingface/datasets/pull/3289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3289.patch",
"merged_at": 1637166188000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3288/comments | https://api.github.com/repos/huggingface/datasets/issues/3288/events | https://github.com/huggingface/datasets/pull/3288 | 1,056,145,703 | PR_kwDODunzps4up6S5 | 3,288 | Allow datasets with indices table when concatenating along axis=1 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,156,488,000 | 1,637,163,672,000 | 1,637,163,671,000 | CONTRIBUTOR | null | Calls `flatten_indices` on the datasets with indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`.
cc @lhoestq: I decided to flatten all the datasets instead of flattening all the datasets except the largest one in the end. The latter approach fails on the following example:
```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"a": [10, 20, 30, 40]})
b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]}) # largest dataset
a = a.select([1, 2, 3])
b = b.select([1, 2, 3])
concatenate_datasets([a, b], axis=1) # fails at line concat_tables(...) because the real length of b's data is 6 and a's length is 3 after flattening (was 4 before flattening)
```
Also, it requires additional re-ordering of indices to prepare them for working with the indices table of the largest dataset. IMO it is not worth it when we save only one `flatten_indices` call. (Feel free to check the code of that approach at https://github.com/huggingface/datasets/commit/6acd10481c70950dcfdbfd2bab0bf0c74ad80bcb if you are interested.)
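For clarity, the chosen approach boils down to something like the following (a simplified sketch, not the actual diff):
```python
# inside concatenate_datasets(..., axis=1): materialize any indices mapping first,
# so every table has its real length before the columns are glued together
dsets = [d.flatten_indices() if d._indices is not None else d for d in dsets]
```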
Fixes #3273
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3288/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3288/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3288",
"html_url": "https://github.com/huggingface/datasets/pull/3288",
"diff_url": "https://github.com/huggingface/datasets/pull/3288.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3288.patch",
"merged_at": 1637163671000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3287/comments | https://api.github.com/repos/huggingface/datasets/issues/3287/events | https://github.com/huggingface/datasets/pull/3287 | 1,056,079,724 | PR_kwDODunzps4upsWR | 3,287 | Add The Pile dataset and PubMed Central subset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,152,558,000 | 1,638,372,548,000 | 1,638,372,547,000 | MEMBER | null | Add:
- The complete final version of The Pile dataset: "all" config
- PubMed Central subset of The Pile: "pubmed_central" config
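A usage sketch for the new configs (assuming the script is registered as `the_pile`; the config names come from the list above, and streaming is only suggested to avoid downloading the full corpus):
```python
from datasets import load_dataset

# "pubmed_central" is the new subset config; use "all" for the complete Pile
pile_pubmed = load_dataset("the_pile", "pubmed_central", split="train", streaming=True)
print(next(iter(pile_pubmed)))
```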
Close #1675, close bigscience-workshop/data_tooling#74.
CC: @StellaAthena, @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3287/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3287/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3287",
"html_url": "https://github.com/huggingface/datasets/pull/3287",
"diff_url": "https://github.com/huggingface/datasets/pull/3287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3287.patch",
"merged_at": 1638372546000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3286/comments | https://api.github.com/repos/huggingface/datasets/issues/3286/events | https://github.com/huggingface/datasets/pull/3286 | 1,056,008,586 | PR_kwDODunzps4updTK | 3,286 | Fix build_docs CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,147,936,000 | 1,637,147,960,000 | 1,637,147,959,000 | MEMBER | null | Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3286/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3286",
"html_url": "https://github.com/huggingface/datasets/pull/3286",
"diff_url": "https://github.com/huggingface/datasets/pull/3286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3286.patch",
"merged_at": 1637147959000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3285/comments | https://api.github.com/repos/huggingface/datasets/issues/3285/events | https://github.com/huggingface/datasets/issues/3285 | 1,055,506,730 | I_kwDODunzps4-6cEq | 3,285 | Add IEMOCAP dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | null | [] | null | [
"The IEMOCAP dataset is private and available only on request.\r\n```\r\nTo obtain the IEMOCAP data you just need to fill out an electronic release form below.\r\n```\r\n\r\n- [Request form](https://sail.usc.edu/iemocap/release_form.php)\r\n- [License ](https://sail.usc.edu/iemocap/Data_Release_Form_IEMOCAP.pdf)\r\n\r\n\r\n> We do not share the dataset for commercial purposes due to privacy concerns surrounding the participants of the research. The login details will only be emailed to the given academic email address.\r\n\r\nI think it won't be possible to add this dataset to 🤗 datasets.",
"Hi @dnaveenr ! We can contact the authors to see if they are interested in hosting the dataset on the Hub. In the meantime, feel free to work on a script with manual download.",
"Hi @mariosasko . Thanks for your response. Sure, I will mail them and find out if they're open to this.\r\n\r\nWork on a script with manual download ? This is new to me, any guidelines would be helpful here.\r\n",
"> Thanks for your response. Sure, I will mail them and find out if they're open to this.\r\n\r\nIt's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.\r\n\r\n> Work on a script with manual download ? This is new to me, any guidelines would be helpful here.\r\n\r\nFor instance, this is one of the scripts with manual download: https://huggingface.co/datasets/arxiv_dataset. Compared to the standard dataset, it has the `manual_download_instructions` attribute and uses `dl_manager.manual_dir` (derived from `load_dataset(..., data_dir=\"path/to/data\")`) to access the dataset's data files.",
"> It's best to leave this part to us because we have to explain how login would work and (potentially) set up a custom verification for the dataset.\r\n\r\nYes. That would be perfect. Thanks.\r\n\r\n----\r\nOkay. Thanks for giving a reference. This is helpful. I will go through it.\r\n\r\n"
] | 1,637,102,840,000 | 1,647,321,001,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** IEMOCAP
- **Description:** acted, multimodal and multispeaker database
- **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf
- **Data:** https://sail.usc.edu/iemocap/index.html
- **Motivation:** Useful multimodal dataset
cc @anton-l
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3285/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3284/comments | https://api.github.com/repos/huggingface/datasets/issues/3284/events | https://github.com/huggingface/datasets/issues/3284 | 1,055,502,909 | I_kwDODunzps4-6bI9 | 3,284 | Add VoxLingua107 dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | open | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign"
] | 1,637,102,648,000 | 1,638,784,185,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** Nice audio classification dataset
cc @anton-l
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3284/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3283/comments | https://api.github.com/repos/huggingface/datasets/issues/3283/events | https://github.com/huggingface/datasets/issues/3283 | 1,055,495,874 | I_kwDODunzps4-6ZbC | 3,283 | Add Speech Commands dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] | closed | false | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign"
] | 1,637,102,396,000 | 1,639,132,215,000 | 1,639,132,215,000 | MEMBER | null | ## Adding a Dataset
- **Name:** Speech commands
- **Description:** A Dataset for Limited-Vocabulary Speech Recognition
- **Paper:** https://arxiv.org/abs/1804.03209
- **Data:** https://www.tensorflow.org/datasets/catalog/speech_commands, available at:
http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz
- **Motivation:** Nice dataset for audio classification training
cc @anton-l
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3283/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3282/comments | https://api.github.com/repos/huggingface/datasets/issues/3282/events | https://github.com/huggingface/datasets/issues/3282 | 1,055,054,898 | I_kwDODunzps4-4twy | 3,282 | ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py | {
"login": "MinionAttack",
"id": 10078549,
"node_id": "MDQ6VXNlcjEwMDc4NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/10078549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MinionAttack",
"html_url": "https://github.com/MinionAttack",
"followers_url": "https://api.github.com/users/MinionAttack/followers",
"following_url": "https://api.github.com/users/MinionAttack/following{/other_user}",
"gists_url": "https://api.github.com/users/MinionAttack/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MinionAttack/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MinionAttack/subscriptions",
"organizations_url": "https://api.github.com/users/MinionAttack/orgs",
"repos_url": "https://api.github.com/users/MinionAttack/repos",
"events_url": "https://api.github.com/users/MinionAttack/events{/privacy}",
"received_events_url": "https://api.github.com/users/MinionAttack/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting :)\r\nI think this is because the dataset is behind an access page. We can fix the dataset viewer\r\n\r\nIf you also have this error when you use the `datasets` library in python, you should probably pass `use_auth_token=True` to the `load_dataset()` function to use your account to access the dataset.",
"Ah ok, I didn't realise about the login page. I'll try `use_auth_token=True` and see if that solves it.\r\n\r\nRegards!",
"Hi, \r\n\r\nUsing `use_auth_token=True` and downloading the credentials with `huggingface-cli login` (stored in .huggingface/token) solved the issue.\r\n\r\nShould I leave the issue open until you fix the Dataset viewer issue?",
"Cool ! Yes let's keep this issue open until the viewer is fixed - I'll close it when this is fixed. Thanks",
"The error I get when trying to load OSCAR 21.09 is this\r\n```\r\nConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n```\r\n\r\nThe URL I get in the browser is this\r\n```\r\nhttps://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n```\r\n\r\nMaybe URL is the issue? (resolve vs blob)",
"> The error I get when trying to load OSCAR 21.09 is this\r\n> \r\n> ```\r\n> ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> The URL I get in the browser is this\r\n> \r\n> ```\r\n> https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/blob/main/OSCAR-2109.py\r\n> ```\r\n> \r\n> Maybe URL is the issue? (resolve vs blob)\r\n\r\nYou need to download your login credentials. See `huggingface-cli login` documentation and when loading the dataset use `use_auth_token=True`:\r\n`\r\nload_dataset(corpus, language, split=None, use_auth_token=True, cache_dir=cache_folder)`",
"Fixed.\r\n\r\n<img width=\"1542\" alt=\"Capture d’écran 2022-04-12 à 13 57 24\" src=\"https://user-images.githubusercontent.com/1676121/162957585-af96d19c-f86c-47fe-80c4-2b071083cee4.png\">\r\n"
] | 1,637,078,719,000 | 1,649,764,663,000 | 1,649,764,663,000 | NONE | null | ## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)*
*The dataset library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in a browser, I can access the file.*
```
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
```
Am I the one who added this dataset? No
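For reference, a minimal sketch of loading this gated dataset with an authenticated account (assuming `huggingface-cli login` has been run and the dataset's access terms accepted; the language config name below is only illustrative):
```python
from datasets import load_dataset

# hypothetical language config, shown only for illustration
ds = load_dataset("oscar-corpus/OSCAR-2109", "deduplicated_fi", use_auth_token=True)
```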
Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the dataset library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3282/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3281/comments | https://api.github.com/repos/huggingface/datasets/issues/3281/events | https://github.com/huggingface/datasets/pull/3281 | 1,055,018,876 | PR_kwDODunzps4umWZE | 3,281 | [Datasets] Improve Covost 2 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I am trying to use `load_dataset` with the French dataset(common voice corpus 1) which is downloaded from a common voice site and the target language is English (using colab)\r\n\r\nSteps I have followed:\r\n\r\n**1. untar:**\r\n`!tar xvzf fr.tar -C data_dir`\r\n\r\n**2. load data:**\r\n`load_dataset('covost2', 'fr_en', data_dir=\"/content/data_dir\")`\r\n\r\n0 rows are loading as shown below:\r\n```\r\nUsing custom data configuration fr_en-data_dir=%2Fcontent%2Fdata_dir\r\nReusing dataset covost2 (/root/.cache/huggingface/datasets/covost2/fr_en-data_dir=%2Fcontent%2Fdata_dir/1.0.0/bba950aae1ffa5a14b876b7e09c17b44de2c3cf60e7bd5d459640beffc78e35b)\r\n100%\r\n3/3 [00:00<00:00, 54.98it/s]\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['client_id', 'file', 'audio', 'sentence', 'translation', 'id'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nCan you please provide a sample working example code to load the dataset?",
"Hi ! I think it only works with the subsets of Common Voice Corpus 4, not Common Voice Corpus 1"
] | 1,637,076,739,000 | 1,643,213,826,000 | 1,637,232,244,000 | MEMBER | null | It's currently quite confusing to understand the manual data download instruction of Covost and not very user-friendly.
Currently the user has to:
1. Go to the Common Voice website
2. Find the correct dataset which is **not** mentioned in the error message
3. Download it
4. Untar it
5. Create a language id folder (why? this folder does not exist in the downloaded `.tar` file)
6. pass the folder containing the created language id folder
This PR improves this to:
1. Go to the Common Voice website
2. Find the correct dataset which **is** mentioned in the error message
3. Download it
4. Untar it
5. pass the untarred folder (see the sketch below)
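A minimal sketch of the resulting usage (the `fr_en` config and the local path are illustrative):
```python
from datasets import load_dataset

# pass the untarred Common Voice folder via data_dir (path is hypothetical)
ds = load_dataset("covost2", "fr_en", data_dir="/path/to/untarred/fr")
```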
**Note**: This PR is not at all time-critical | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3281/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3281",
"html_url": "https://github.com/huggingface/datasets/pull/3281",
"diff_url": "https://github.com/huggingface/datasets/pull/3281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3281.patch",
"merged_at": 1637232244000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3280/comments | https://api.github.com/repos/huggingface/datasets/issues/3280/events | https://github.com/huggingface/datasets/pull/3280 | 1,054,766,828 | PR_kwDODunzps4ulgye | 3,280 | Fix bookcorpusopen RAM usage | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,062,072,000 | 1,637,164,408,000 | 1,637,069,670,000 | MEMBER | null | Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch on disk. I changed its batch size to 256 to use maximum 100MB of memory
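Concretely, the change amounts to something like this in the dataset script (a sketch, not the exact diff):
```python
import datasets

class BookCorpusOpen(datasets.GeneratorBasedBuilder):
    # each example is an entire book, so flush small batches to disk
    # to keep the arrow writer's RAM usage low (around 100MB)
    DEFAULT_WRITER_BATCH_SIZE = 256
```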
Fix #3167. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3280/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3280",
"html_url": "https://github.com/huggingface/datasets/pull/3280",
"diff_url": "https://github.com/huggingface/datasets/pull/3280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3280.patch",
"merged_at": 1637069670000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3279/comments | https://api.github.com/repos/huggingface/datasets/issues/3279/events | https://github.com/huggingface/datasets/pull/3279 | 1,054,711,852 | PR_kwDODunzps4ulVHe | 3,279 | Minor Typo Fix - Precision to Recall | {
"login": "SebastinSanty",
"id": 13795788,
"node_id": "MDQ6VXNlcjEzNzk1Nzg4",
"avatar_url": "https://avatars.githubusercontent.com/u/13795788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SebastinSanty",
"html_url": "https://github.com/SebastinSanty",
"followers_url": "https://api.github.com/users/SebastinSanty/followers",
"following_url": "https://api.github.com/users/SebastinSanty/following{/other_user}",
"gists_url": "https://api.github.com/users/SebastinSanty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SebastinSanty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SebastinSanty/subscriptions",
"organizations_url": "https://api.github.com/users/SebastinSanty/orgs",
"repos_url": "https://api.github.com/users/SebastinSanty/repos",
"events_url": "https://api.github.com/users/SebastinSanty/events{/privacy}",
"received_events_url": "https://api.github.com/users/SebastinSanty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,058,742,000 | 1,637,061,483,000 | 1,637,061,482,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3279/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3279",
"html_url": "https://github.com/huggingface/datasets/pull/3279",
"diff_url": "https://github.com/huggingface/datasets/pull/3279.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3279.patch",
"merged_at": 1637061482000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3278/comments | https://api.github.com/repos/huggingface/datasets/issues/3278/events | https://github.com/huggingface/datasets/pull/3278 | 1,054,249,463 | PR_kwDODunzps4uj2EQ | 3,278 | Proposed update to the documentation for WER | {
"login": "wooters",
"id": 2111202,
"node_id": "MDQ6VXNlcjIxMTEyMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2111202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wooters",
"html_url": "https://github.com/wooters",
"followers_url": "https://api.github.com/users/wooters/followers",
"following_url": "https://api.github.com/users/wooters/following{/other_user}",
"gists_url": "https://api.github.com/users/wooters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wooters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wooters/subscriptions",
"organizations_url": "https://api.github.com/users/wooters/orgs",
"repos_url": "https://api.github.com/users/wooters/repos",
"events_url": "https://api.github.com/users/wooters/events{/privacy}",
"received_events_url": "https://api.github.com/users/wooters/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,637,018,911,000 | 1,637,061,577,000 | 1,637,061,577,000 | CONTRIBUTOR | null | I wanted to submit a minor update to the description of WER for your consideration.
Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0:
```
>>> from datasets import load_metric
>>> metric = load_metric("wer")
>>> metric.compute(predictions=["hello how are you"], references=["hello"])
3.0
```
and similarly from the underlying jiwer module's `wer` function:
```
>>> from jiwer import wer
>>> wer("hello", "hello how are you")
3.0
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3278/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3278",
"html_url": "https://github.com/huggingface/datasets/pull/3278",
"diff_url": "https://github.com/huggingface/datasets/pull/3278.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3278.patch",
"merged_at": 1637061577000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3277/comments | https://api.github.com/repos/huggingface/datasets/issues/3277/events | https://github.com/huggingface/datasets/pull/3277 | 1,054,122,656 | PR_kwDODunzps4ujk11 | 3,277 | f-string formatting | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello @lhoestq, ```make style``` is applied as asked. :)"
] | 1,637,012,225,000 | 1,637,354,408,000 | 1,637,165,918,000 | CONTRIBUTOR | null | **Fix #3257**
Replaced _.format()_ and _%_ with f-strings in the following modules:
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
- [x] **src/Datasets/\*.py**
Modules in **_src/Datasets/_**:
- [x] **commands**
- [x] **features**
- [x] **formatting**
- [x] **io**
- [x] **tasks**
- [x] **utils**
Module **datasets** will not be edited as asked by @mariosasko
-A correction of the first PR (#3267)-
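For illustration, the kind of substitution applied throughout (a made-up example, not a line from the actual diff):
```python
num_examples, path = 3, "data/train.csv"

# before: %-formatting and str.format()
percent_style = "Loading %d examples from %s" % (num_examples, path)
format_style = "Loading {} examples from {}".format(num_examples, path)

# after: f-string
f_style = f"Loading {num_examples} examples from {path}"

assert percent_style == format_style == f_style
```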
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3277/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3277/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3277",
"html_url": "https://github.com/huggingface/datasets/pull/3277",
"diff_url": "https://github.com/huggingface/datasets/pull/3277.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3277.patch",
"merged_at": 1637165918000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3276/comments | https://api.github.com/repos/huggingface/datasets/issues/3276/events | https://github.com/huggingface/datasets/pull/3276 | 1,053,793,063 | PR_kwDODunzps4uihih | 3,276 | Update KILT metadata JSON | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,989,925,000 | 1,637,061,719,000 | 1,637,061,718,000 | MEMBER | null | Fix #3265. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3276/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3276",
"html_url": "https://github.com/huggingface/datasets/pull/3276",
"diff_url": "https://github.com/huggingface/datasets/pull/3276.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3276.patch",
"merged_at": 1637061718000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3275/comments | https://api.github.com/repos/huggingface/datasets/issues/3275/events | https://github.com/huggingface/datasets/pull/3275 | 1,053,698,898 | PR_kwDODunzps4uiN9t | 3,275 | Force data files extraction if download_mode='force_redownload' | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,984,824,000 | 1,636,987,523,000 | 1,636,987,523,000 | CONTRIBUTOR | null | Avoids weird issues when redownloading a dataset due to cached data not being fully updated.
With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 can be worked around (though not fully fixed) as follows:
```python
dset = load_dataset(..., download_mode="force_redownload")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3275/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3275/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3275",
"html_url": "https://github.com/huggingface/datasets/pull/3275",
"diff_url": "https://github.com/huggingface/datasets/pull/3275.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3275.patch",
"merged_at": 1636987523000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3274/comments | https://api.github.com/repos/huggingface/datasets/issues/3274/events | https://github.com/huggingface/datasets/pull/3274 | 1,053,689,140 | PR_kwDODunzps4uiL8- | 3,274 | Fix some contact information formats | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI fail are caused by some missing sections or tags, which is unrelated to this PR. Merging !"
] | 1,636,984,234,000 | 1,636,987,435,000 | 1,636,987,434,000 | MEMBER | null | As reported in https://github.com/huggingface/datasets/issues/3188 some contact information are not displayed correctly.
This PR fixes this for CoNLL-2002 and some other datasets with the same issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3274/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3274",
"html_url": "https://github.com/huggingface/datasets/pull/3274",
"diff_url": "https://github.com/huggingface/datasets/pull/3274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3274.patch",
"merged_at": 1636987434000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3273/comments | https://api.github.com/repos/huggingface/datasets/issues/3273/events | https://github.com/huggingface/datasets/issues/3273 | 1,053,554,038 | I_kwDODunzps4-y_V2 | 3,273 | Respect row ordering when concatenating datasets along axis=1 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,636,975,634,000 | 1,637,163,671,000 | 1,637,163,671,000 | CONTRIBUTOR | null | Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored.
A minimal reproducible example:
```python
>>> from datasets import Dataset, concatenate_datasets
>>> a = Dataset.from_dict({"a": [30, 20, 10]})
>>> b = Dataset.from_dict({"b": [2, 1, 3]})
>>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1)
>>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]}
{'a': [10, 20, 30], 'b': [3, 1, 2]}
```
I've noticed the bug while working on #3195. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3273/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3273/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3272/comments | https://api.github.com/repos/huggingface/datasets/issues/3272/events | https://github.com/huggingface/datasets/issues/3272 | 1,053,516,479 | I_kwDODunzps4-y2K_ | 3,272 | Make iter_archive work with ZIP files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hello, is this issue open for any contributor ? can I work on it ?\r\n\r\n",
"Hi ! Sure this is open for any contributor. If you're interested feel free to self-assign this issue to you by commenting `#self-assign`. Then if you have any question or if I can help, feel free to ping me.\r\n\r\nTo begin with, feel free to take a look at both implementations of `iter_archive` for local downloads and for data streaming:\r\n\r\nIn the `DownloadManager` for local dowloads:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/download_manager.py#L218-L242\r\n\r\nIn the `StreamingDownloadManager` to stream the content of the archive directly from the remote file:\r\nhttps://github.com/huggingface/datasets/blob/dfa334bd8dc6cbc854b170379c7d2cb7e3d3fe4f/src/datasets/utils/streaming_download_manager.py#L502-L526\r\n\r\nNotice the call to `xopen` that opens and streams a file given either an URL or a local path :)",
"Okay thank you for the information. I will work on this :) ",
"#self-assign"
] | 1,636,973,442,000 | 1,637,798,927,000 | null | MEMBER | null | Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive.
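For context, a typical loading-script pattern looks roughly like this (a sketch with made-up names and URL, not an actual dataset script):
```python
import datasets

class MyTarDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder illustrating the dl_manager.iter_archive pattern."""

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/data.tar.gz")  # made-up URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs
        for key, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield key, {"text": f.read().decode("utf-8")}
```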
It would be nice if it could work with ZIP files too! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3272/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3272/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3271/comments | https://api.github.com/repos/huggingface/datasets/issues/3271/events | https://github.com/huggingface/datasets/pull/3271 | 1,053,482,919 | PR_kwDODunzps4uhgi1 | 3,271 | Decode audio from remote | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,971,956,000 | 1,637,062,558,000 | 1,637,062,558,000 | MEMBER | null | Currently the Audio feature type can only decode local audio files, not remote files.
To fix this, I replaced `open` in audio.py with our `xopen` function, which is compatible with remote files.
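A quick sketch of what this enables (the URL is a placeholder, not an actual test from the PR):
```python
from datasets import Dataset, Audio

# hypothetical remote audio file
ds = Dataset.from_dict({"audio": ["https://example.com/sample.wav"]})
ds = ds.cast_column("audio", Audio())
# accessing the example now streams and decodes the remote file
sample = ds[0]["audio"]
```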
cc @albertvillanova @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3271/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3271/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3271",
"html_url": "https://github.com/huggingface/datasets/pull/3271",
"diff_url": "https://github.com/huggingface/datasets/pull/3271.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3271.patch",
"merged_at": 1637062558000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3270/comments | https://api.github.com/repos/huggingface/datasets/issues/3270/events | https://github.com/huggingface/datasets/pull/3270 | 1,053,465,662 | PR_kwDODunzps4uhcxm | 3,270 | Add os.listdir for streaming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,971,244,000 | 1,636,972,023,000 | 1,636,972,023,000 | MEMBER | null | Extend `os.listdir` to support streaming data from remote files. This is often used to navigate in remote ZIP files for example | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3270/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3270",
"html_url": "https://github.com/huggingface/datasets/pull/3270",
"diff_url": "https://github.com/huggingface/datasets/pull/3270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3270.patch",
"merged_at": 1636972022000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3269/comments | https://api.github.com/repos/huggingface/datasets/issues/3269/events | https://github.com/huggingface/datasets/issues/3269 | 1,053,218,769 | I_kwDODunzps4-xtfR | 3,269 | coqa NonMatchingChecksumError | {
"login": "ZhaofengWu",
"id": 11954789,
"node_id": "MDQ6VXNlcjExOTU0Nzg5",
"avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZhaofengWu",
"html_url": "https://github.com/ZhaofengWu",
"followers_url": "https://api.github.com/users/ZhaofengWu/followers",
"following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}",
"gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions",
"organizations_url": "https://api.github.com/users/ZhaofengWu/orgs",
"repos_url": "https://api.github.com/users/ZhaofengWu/repos",
"events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZhaofengWu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @ZhaofengWu, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.91MB/s]\r\nDownloading: 1.79kB [00:00, 1.79MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 49.0M/49.0M [00:06<00:00, 7.17MB/s]\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 9.09M/9.09M [00:01<00:00, 6.08MB/s]\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:12<00:00, 6.48s/it]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 333.26it/s]\r\nDataset coqa downloaded and prepared to .cache\\coqa\\default\\1.0.0\\553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 285.49it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 7199\r\n })\r\n validation: Dataset({\r\n features: ['source', 'story', 'questions', 'answers'],\r\n num_rows: 500\r\n })\r\n})\r\n```\r\n\r\nCould you please give more details about your development environment? You can run the command `datasets-cli env` and copy-and-paste its output:\r\n```\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nIt might be because you are using an old version of `datasets`. Could you please update it (`pip install -U datasets`) and confirm if the problem parsists? ",
"I'm getting the same error in two separate environments:\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.0-84-generic-x86_64-with-debian-bullseye-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.0\r\n```\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-10.16-x86_64-i386-64bit\r\n- Python version: 3.9.5\r\n- PyArrow version: 6.0.0\r\n```",
"I'm sorry, but don't get to reproduce the error in the Linux environment.\r\n\r\n@mariosasko @lhoestq can you reproduce it?",
"I also can't reproduce the error on Windows/Linux (tested both the master and the `1.15.1` version). ",
"Maybe the file had issues during the download ? Could you try to delete your cache and try again ?\r\nBy default the downloads cache is at `~/.cache/huggingface/datasets/downloads`\r\n\r\nAlso can you check if you have a proxy that could prevent the download to succeed ? Are you able to download those files via your browser ?",
"I got the same error in a third environment (google cloud) as well. The internet for these three environments are all different so I don't think that's the reason.\r\n```\r\n- `datasets` version: 1.12.1\r\n- Platform: Linux-5.11.0-1022-gcp-x86_64-with-glibc2.31\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.0\r\n```\r\nI deleted the entire `~/.cache/huggingface/datasets` on my local mac, and got a different first time error.\r\n```\r\nPython 3.9.5 (default, May 18 2021, 12:31:01) \r\n[Clang 10.0.0 ] :: Anaconda, Inc. on darwin\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"coqa\")\r\nDownloading: 3.82kB [00:00, 1.19MB/s] \r\nDownloading: 1.79kB [00:00, 712kB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.36MB/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:00<00:00, 2.47it/s]Traceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 675, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/Users/zhaofengw/.cache/huggingface/modules/datasets_modules/datasets/coqa/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0/coqa.py\", line 70, in _split_generators\r\n downloaded_files = dl_manager.download_and_extract(urls_to_download)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 284, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 216, in map_nested\r\n mapped = [\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 217, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 152, in _single_map_nested\r\n return function(data_struct)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 295, in cached_path\r\n output_path 
= get_from_cache(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/file_utils.py\", line 594, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\r\n>>> dataset = load_dataset(\"coqa\")\r\nUsing custom data configuration default\r\nDownloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...\r\nDownloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 6.26it/s]\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1087.45it/s]\r\n 50%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 1/2 [00:45<00:45, 45.60s/it]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py\", line 1632, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 607, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py\", line 679, in _download_and_prepare\r\n verify_checksums(\r\n File \"/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']\r\n```\r\nI can access the URL using my browser, though I did notice a redirection -- could that have something to do with it?",
"Hi @ZhaofengWu, \r\n\r\nWhat about in Google Colab? Can you run this notebook without errors? \r\nhttps://colab.research.google.com/drive/1CCpiiHmtNlfO_4CZ3-fW-TSShr1M0rL4?usp=sharing",
"I can run your notebook fine, but if I create one myself, it has that error: https://colab.research.google.com/drive/107GIdhrauPO6ZiFDY7G9S74in4qqI2Kx?usp=sharing.\r\n\r\nIt's so funny -- it's like whenever you guys run it it's fine but whenever I run it it fails, whatever the environment is.",
"I guess it must be some connection issue: the data owner may be blocking requests coming from your country or IP range...",
"I mean, I don't think google colab sends the connection from my IP. Same applies to google cloud.",
"Hello, I am having the same error with @ZhaofengWu first with \"social bias frames\" dataset. As I found this report, I tried also \"coqa\" and it fails as well. \r\n\r\nI test this on Google Colab. \r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic\r\n- Python version: 3.7.12\r\n- PyArrow version: 3.0.0\r\n```\r\n\r\nThen another environment\r\n\r\n```\r\n- `datasets` version: 1.15.1\r\n- Platform: macOS-12.0.1-arm64-arm-64bit\r\n- Python version: 3.9.7\r\n- PyArrow version: 6.0.1\r\n```\r\n\r\nI tried the notebook @albertvillanova provided earlier, and it fails...\r\n",
"Hi, still not able to reproduce the issue with `coqa`. If you still have this issue, could you please run these additional commands ?\r\n```python\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n9090845\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n`95d427588e3733e4ebec55f6938dbba6`\r\n>>> open(path).read(500)\r\n'{\\n \"version\": \"1.0\",\\n \"data\": [\\n {\\n \"source\": \"mctest\",\\n \"id\": \"3dr23u6we5exclen4th8uq9rb42tel\",\\n \"filename\": \"mc160.test.41\",\\n \"story\": \"Once upon a time, in a barn near a farm house, there lived a little white kitten named Cotton. Cotton lived high up in a nice warm place above the barn where all of the farmer\\'s horses slept. But Cotton wasn\\'t alone in her little home above the barn, oh no. She shared her hay bed with her mommy and 5 other sisters. All of her sisters w'\r\n```\r\n\r\nThis way we can know whether you downloaded a corrupted file or an error file that could cause the `NonMatchingChecksumError` error to happen",
"```\r\n>>> import os\r\n>>> from hashlib import md5\r\n>>> from datasets.utils import DownloadManager, DownloadConfig\r\n>>> path = DownloadManager(download_config=DownloadConfig(use_etag=False)).download(\"https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json\") # it returns the cached file\r\n>>> os.path.getsize(path)\r\n222\r\n>>> m = md5()\r\n>>> m.update(open(path, \"rb\").read())\r\n>>> m.hexdigest()\r\n'1195812a37c01a4481a4748c85d0c6a9'\r\n>>> open(path).read(500)\r\n'<html>\\n<head><title>503 Service Temporarily Unavailable</title></head>\\n<body bgcolor=\"white\">\\n<center><h1>503 Service Temporarily Unavailable</h1></center>\\n<hr><center>nginx/1.10.3 (Ubuntu)</center>\\n</body>\\n</html>\\n'\r\n```\r\nLooks like there was a server-side error when downloading the dataset? But I don't believe this is a transient error given (a) deleting the cache and re-downloading gives the same error; (b) it happens on multiple platforms with different network configurations; (c) other people are getting this error too, see above. So I'm not sure why it works for some people but not others.",
"`wget https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json` does work. So I suspect there might be some problem in `datasets`' networking code? Can you give me some snippet that simulates how `datasets` requests the resource which I can run on my end?",
"There is a redirection -- I don't know if that's the cause.",
"Ok This is an issue with the server that hosts the data at `https://nlp.stanford.edu/nlp/data` that randomly returns 503 (by trying several times it also happens on my side), hopefully it can be fixed soon. I'll try to reach the people in charge of hosting the data",
"Thanks. Also it might help to display a more informative error message?",
"You're right. I just opened a PR that would show this error if it happens again:\r\n```python\r\nConnectionError: Couldn't reach https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json (error 503)\r\n```"
] | 1,636,952,647,000 | 1,642,600,699,000 | 1,642,600,699,000 | NONE | null | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("coqa")
Downloading: 3.82kB [00:00, 1.26MB/s]
Downloading: 1.79kB [00:00, 733kB/s]
Using custom data configuration default
Downloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0...
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.32MB/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.91it/s]
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1117.44it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 679, in _download_and_prepare
verify_checksums(
File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json']
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3269/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3268/comments | https://api.github.com/repos/huggingface/datasets/issues/3268/events | https://github.com/huggingface/datasets/issues/3268 | 1,052,992,681 | I_kwDODunzps4-w2Sp | 3,268 | Dataset viewer issue for 'liweili/c4_200m' | {
"login": "liliwei25",
"id": 22389228,
"node_id": "MDQ6VXNlcjIyMzg5MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/22389228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liliwei25",
"html_url": "https://github.com/liliwei25",
"followers_url": "https://api.github.com/users/liliwei25/followers",
"following_url": "https://api.github.com/users/liliwei25/following{/other_user}",
"gists_url": "https://api.github.com/users/liliwei25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liliwei25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liliwei25/subscriptions",
"organizations_url": "https://api.github.com/users/liliwei25/orgs",
"repos_url": "https://api.github.com/users/liliwei25/repos",
"events_url": "https://api.github.com/users/liliwei25/events{/privacy}",
"received_events_url": "https://api.github.com/users/liliwei25/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! I think the issue comes from this [line](https://huggingface.co/datasets/liweili/c4_200m/blob/main/c4_200m.py#L87):\r\n```python\r\npath = filepath + \"/*.tsv*\"\r\n```\r\n\r\nYou can fix this by doing this instead:\r\n```python\r\npath = os.path.join(filepath, \"/*.tsv*\")\r\n```\r\n\r\nHere is why:\r\n\r\nLocally you can append `\"/*.tsv*\"` to your local path, however it doesn't work in streaming mode, and the dataset viewer does use the streaming mode.\r\nIn streaming mode, the download and extract part is done lazily. It means that instead of using local paths, it's still passing around URLs and [chained URLs](https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining)\r\n\r\nTherefore in streaming mode, `filepath` is not a local path, but instead is equal to\r\n```python\r\nzip://::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\nThe `zip://` part means that we navigate inside the remote ZIP file.\r\n\r\nYou must use `os.path.join` to navigate inside it and get your TSV files:\r\n```python\r\n>>> os.path.join(filepath, \"/*.tsv*\")\r\nzip://*.tsv*::https://huggingface.co/datasets/liweili/c4_200m/resolve/main/data.zip\r\n```\r\n\r\n`datasets` extends `os.path.join`, `glob.glob`, etc. in your dataset scripts to work with remote files.",
"hi @lhoestq ! thanks for the tip! i've updated the line of code but it's still not working. am i doing something else wrong? thank you!",
"Hi ! Your dataset code is all good now :)\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: d = load_dataset(\"liweili/c4_200m\", streaming=True)\r\nDownloading: 100%|█████████████████████████████████████████████| 2.79k/2.79k [00:00<00:00, 4.83MB/s]\r\nUsing custom data configuration default\r\n\r\nIn [3]: next(iter(d[\"train\"]))\r\nOut[3]: \r\n{'input': 'Bitcoin is for $7,094 this morning, which CoinDesk says.',\r\n 'output': 'Bitcoin goes for $7,094 this morning, according to CoinDesk.'}\r\n```\r\nThough the viewer doesn't seem to be updated, I'll take a look at what's wrong",
"thank you @lhoestq! 😄 ",
"It's working\r\n\r\n<img width=\"1424\" alt=\"Capture d’écran 2021-12-21 à 11 24 29\" src=\"https://user-images.githubusercontent.com/1676121/146914238-24bf87c0-c68d-4699-8d6c-fa3065656d1d.png\">\r\n\r\n"
] | 1,636,910,326,000 | 1,640,082,320,000 | 1,640,082,291,000 | NONE | null | ## Dataset viewer issue for '*liweili/c4_200m*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)*
*Server Error*
```
Status code: 404
Exception: Status404Error
Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist.
```
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3268/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3268/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3267/comments | https://api.github.com/repos/huggingface/datasets/issues/3267/events | https://github.com/huggingface/datasets/pull/3267 | 1,052,750,084 | PR_kwDODunzps4ufQzB | 3,267 | Replacing .format() and % by f-strings | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n\r\nFeel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)",
"> Hi ! It looks like most of your changes are just `black` changes. All those changes are not necessary. In particular if you want to use `black`, please use the `make style` command instead. It runs `black` with additional parameters and you shouldn't end up with that many changes\r\n> \r\n> Feel free to open a new PR that doesn't include all the unnecessary `black` changes that you have on your branch :)\r\n\r\nThank you for your answer :) , I will open a new PR with the correct changes.",
"Hi @lhoestq, I submitted 3 commits in a new PR (#3277) where I did not apply black.\r\n\r\nI can apply the ```make style``` command if asked.",
"Cool thanks ! Yes feel free to make sure you have `black==21.4b0` and run `make style`"
] | 1,636,830,722,000 | 1,637,096,426,000 | 1,637,074,543,000 | CONTRIBUTOR | null | **Fix #3257**
Replaced _.format()_ and _%_ by f-strings in the following modules :
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
The modules left will follow in the next PR:
- [ ] **src**
Module **datasets** will not be edited as asked by @mariosasko
PS : black and isort applied to files
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3267/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3267/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3267",
"html_url": "https://github.com/huggingface/datasets/pull/3267",
"diff_url": "https://github.com/huggingface/datasets/pull/3267.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3267.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3266/comments | https://api.github.com/repos/huggingface/datasets/issues/3266/events | https://github.com/huggingface/datasets/pull/3266 | 1,052,700,155 | PR_kwDODunzps4ufH94 | 3,266 | Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution | {
"login": "LashaO",
"id": 28014149,
"node_id": "MDQ6VXNlcjI4MDE0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/28014149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LashaO",
"html_url": "https://github.com/LashaO",
"followers_url": "https://api.github.com/users/LashaO/followers",
"following_url": "https://api.github.com/users/LashaO/following{/other_user}",
"gists_url": "https://api.github.com/users/LashaO/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LashaO/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LashaO/subscriptions",
"organizations_url": "https://api.github.com/users/LashaO/orgs",
"repos_url": "https://api.github.com/users/LashaO/repos",
"events_url": "https://api.github.com/users/LashaO/events{/privacy}",
"received_events_url": "https://api.github.com/users/LashaO/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There seems to be problems with datasets metadata, of which I dont have access to. I think one of the datasets is from reddit. Can anyone help?",
"Hello @LashaO , I think the errors were caused by `_DATA_FILES` in `definite_pronoun_resolution.py`. Here are details of the test error.\r\n```\r\nself = BuilderConfig(name='plain_text', version=1.0.0, data_dir=None, data_files={'train': 'train.c.txt', 'test': 'test.c.txt'}, description='Plain text import of the Definite Pronoun Resolution Dataset.')\r\n\r\n def __post_init__(self):\r\n # The config name is used to name the cache directory.\r\n invalid_windows_characters = r\"<>:/\\|?*\"\r\n for invalid_char in invalid_windows_characters:\r\n if invalid_char in self.name:\r\n raise InvalidConfigName(\r\n f\"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. \"\r\n f\"They could create issues when creating a directory for this config on Windows filesystem.\"\r\n )\r\n if self.data_files is not None and not isinstance(self.data_files, DataFilesDict):\r\n> raise ValueError(f\"Expected a DataFilesDict in data_files but got {self.data_files}\")\r\nE ValueError: Expected a DataFilesDict in data_files but got {'train': 'train.c.txt', 'test': 'test.c.txt'}\r\n```",
"Hi ! Thanks for the fixes :)\r\n\r\nInstead of uploading the `definite_pronoun_resolution` data files in this PR, maybe we can just update the URL ?\r\nThe old url was http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt, but now it's https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt (https instead of http)",
"Actually the bad certificate creates an issue with the download\r\n```python\r\nimport datasets \r\ndatasets.DownloadManager().download(\"https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\")\r\n# raises: ConnectionError: Couldn't reach https://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt\r\n```\r\n\r\nLet me see if I can fix that",
"I uploaded them to these URLs, feel free to use them instead of having the text files here in the PR :)\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/train.c.txt\r\nhttps://s3.amazonaws.com/datasets.huggingface.co/definite_pronoun_resolution/test.c.txt",
"Thank you for the tips! Having a busy week so anyone willing to commit the suggestions is welcome. Else, I will try to get back to this in a while.",
"@LashaO Thanks for working on this. Yes, I'll take over as we already have a request to fix the URL of the Jeopardy! dataset in a separate issue.",
"~~Still have to fix the error in the dummy data test of the WikiAuto dataset (so please don't merge).~~ Done! Ready for merging.",
"Thank you, Mario!",
"The CI failure is only related to missing tags in the dataset cards, merging :)"
] | 1,636,815,694,000 | 1,638,789,391,000 | 1,638,789,391,000 | CONTRIBUTOR | null | [#3264](https://github.com/huggingface/datasets/issues/3264) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3266/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3266/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3266",
"html_url": "https://github.com/huggingface/datasets/pull/3266",
"diff_url": "https://github.com/huggingface/datasets/pull/3266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3266.patch",
"merged_at": 1638789391000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3265/comments | https://api.github.com/repos/huggingface/datasets/issues/3265/events | https://github.com/huggingface/datasets/issues/3265 | 1,052,666,558 | I_kwDODunzps4-vmq- | 3,265 | Checksum error for kilt_task_wow | {
"login": "slyviacassell",
"id": 22296717,
"node_id": "MDQ6VXNlcjIyMjk2NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyviacassell",
"html_url": "https://github.com/slyviacassell",
"followers_url": "https://api.github.com/users/slyviacassell/followers",
"following_url": "https://api.github.com/users/slyviacassell/following{/other_user}",
"gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions",
"organizations_url": "https://api.github.com/users/slyviacassell/orgs",
"repos_url": "https://api.github.com/users/slyviacassell/repos",
"events_url": "https://api.github.com/users/slyviacassell/events{/privacy}",
"received_events_url": "https://api.github.com/users/slyviacassell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.",
"Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. "
] | 1,636,805,057,000 | 1,637,061,833,000 | 1,637,061,718,000 | NONE | null | ## Describe the bug
Checksum failed when downloads kilt_tasks_wow. See error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s]
Traceback (most recent call last):
File "kilt_wow.py", line 30, in <module>
main()
File "kilt_wow.py", line 27, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "kilt_wow.py", line 21, in load_dataset
return datasets.load_dataset('kilt_tasks','wow')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare
verify_checksums(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3265/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3264/comments | https://api.github.com/repos/huggingface/datasets/issues/3264/events | https://github.com/huggingface/datasets/issues/3264 | 1,052,663,513 | I_kwDODunzps4-vl7Z | 3,264 | Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution | {
"login": "slyviacassell",
"id": 22296717,
"node_id": "MDQ6VXNlcjIyMjk2NzE3",
"avatar_url": "https://avatars.githubusercontent.com/u/22296717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slyviacassell",
"html_url": "https://github.com/slyviacassell",
"followers_url": "https://api.github.com/users/slyviacassell/followers",
"following_url": "https://api.github.com/users/slyviacassell/following{/other_user}",
"gists_url": "https://api.github.com/users/slyviacassell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slyviacassell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slyviacassell/subscriptions",
"organizations_url": "https://api.github.com/users/slyviacassell/orgs",
"repos_url": "https://api.github.com/users/slyviacassell/repos",
"events_url": "https://api.github.com/users/slyviacassell/events{/privacy}",
"received_events_url": "https://api.github.com/users/slyviacassell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"#take\r\nI am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy with new ones provided by authors.\r\n\r\nAs for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. I can include them in the dataset folder as the files are <1MB in size total.",
"> #take I am willing to fix this. Links can be replaced for WikiAuto Manual and jeopardy.\r\n> \r\n> As for the definite_pronoun_resolution URL, a certificate error seems to be preventing a download. I have the files on my local machine. Anyone has opinions on whether it is preferable for me to host them somewhere (e.g. personal GDrive account) or upload them to the dataset folder directly and use github raw URLs? The files are <1MB in size.\r\n\r\nI am planning to fix it next few days. But my to-do list is full and I do not have the cache of definite_pronoun_resolution. I am glad that you can take this. Thanks a lot!",
"No problem, buddy! Will submit a PR over this weekend."
] | 1,636,804,032,000 | 1,654,105,096,000 | 1,654,105,096,000 | NONE | null | ## Describe the bug
- WikiAuto Manual
The original manual dataset files at the following downloading URL in this [repository](https://github.com/chaojiang06/wiki-auto) were [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author.
```
https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv
```
- jeopardy
The downloading URL for jeopardy may have moved from
```
http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
to
```
https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg
```
- definite_pronoun_resolution
The following downloading URL for definite_pronoun_resolution cannot be reached for some reason.
```
http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt
```
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('wiki_auto', 'manual')
datasets.load_dataset('jeopardy')
datasets.load_dataset('definite_pronoun_resolution')
```
## Expected results
Download successful
## Actual results
- WikiAuto Manual
```
Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8...
0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last):
File "wiki_auto.py", line 43, in <module>
main()
File "wiki_auto.py", line 40, in main
train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data
dataset = self.load_dataset()
File "wiki_auto.py", line 34, in load_dataset
return datasets.load_dataset('wiki_auto', 'manual')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators
data_dir = dl_manager.download_and_extract(my_urls)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
mapped = [
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv
```
- jeopardy
```
Using custom data configuration default
Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810...
Traceback (most recent call last):
File "jeopardy.py", line 45, in <module>
main()
File "jeopardy.py", line 42, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "jeopardy.py", line 36, in load_dataset
return datasets.load_dataset("jeopardy")
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators
filepath = dl_manager.download_and_extract(_DATA_URL)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz
```
- definite_pronoun_resolution
```
Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff...
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):
File "definite_pronoun_resolution.py", line 37, in <module>
main()
File "definite_pronoun_resolution.py", line 34, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "definite_pronoun_resolution.py", line 28, in load_dataset
return datasets.load_dataset('definite_pronoun_resolution')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators
files = dl_manager.download_and_extract(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
mapped = [
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested
return function(data_struct)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 4.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3264/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3264/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3263/comments | https://api.github.com/repos/huggingface/datasets/issues/3263/events | https://github.com/huggingface/datasets/issues/3263 | 1,052,552,516 | I_kwDODunzps4-vK1E | 3,263 | FET DATA | {
"login": "FStell01",
"id": 90987031,
"node_id": "MDQ6VXNlcjkwOTg3MDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/90987031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FStell01",
"html_url": "https://github.com/FStell01",
"followers_url": "https://api.github.com/users/FStell01/followers",
"following_url": "https://api.github.com/users/FStell01/following{/other_user}",
"gists_url": "https://api.github.com/users/FStell01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FStell01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FStell01/subscriptions",
"organizations_url": "https://api.github.com/users/FStell01/orgs",
"repos_url": "https://api.github.com/users/FStell01/repos",
"events_url": "https://api.github.com/users/FStell01/events{/privacy}",
"received_events_url": "https://api.github.com/users/FStell01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,636,782,366,000 | 1,636,810,307,000 | 1,636,810,307,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3263/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3262/comments | https://api.github.com/repos/huggingface/datasets/issues/3262/events | https://github.com/huggingface/datasets/pull/3262 | 1,052,455,082 | PR_kwDODunzps4uej4t | 3,262 | asserts replaced with exception for image classification task, csv, json | {
"login": "manisnesan",
"id": 153142,
"node_id": "MDQ6VXNlcjE1MzE0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manisnesan",
"html_url": "https://github.com/manisnesan",
"followers_url": "https://api.github.com/users/manisnesan/followers",
"following_url": "https://api.github.com/users/manisnesan/following{/other_user}",
"gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions",
"organizations_url": "https://api.github.com/users/manisnesan/orgs",
"repos_url": "https://api.github.com/users/manisnesan/repos",
"events_url": "https://api.github.com/users/manisnesan/events{/privacy}",
"received_events_url": "https://api.github.com/users/manisnesan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,756,499,000 | 1,636,974,517,000 | 1,636,974,517,000 | CONTRIBUTOR | null | Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3262/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3262",
"html_url": "https://github.com/huggingface/datasets/pull/3262",
"diff_url": "https://github.com/huggingface/datasets/pull/3262.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3262.patch",
"merged_at": 1636974517000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3261/comments | https://api.github.com/repos/huggingface/datasets/issues/3261/events | https://github.com/huggingface/datasets/issues/3261 | 1,052,346,381 | I_kwDODunzps4-uYgN | 3,261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | {
"login": "lara-martin",
"id": 37913218,
"node_id": "MDQ6VXNlcjM3OTEzMjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/37913218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lara-martin",
"html_url": "https://github.com/lara-martin",
"followers_url": "https://api.github.com/users/lara-martin/followers",
"following_url": "https://api.github.com/users/lara-martin/following{/other_user}",
"gists_url": "https://api.github.com/users/lara-martin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lara-martin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lara-martin/subscriptions",
"organizations_url": "https://api.github.com/users/lara-martin/orgs",
"repos_url": "https://api.github.com/users/lara-martin/repos",
"events_url": "https://api.github.com/users/lara-martin/events{/privacy}",
"received_events_url": "https://api.github.com/users/lara-martin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(data_dir, \"all-sci-fi-data-train.txt\")\r\nreturn [\r\n datasets.SplitGenerator(\r\n name=datasets.Split.TRAIN,\r\n gen_kwargs={\r\n \"filepath\": train_filepath,\r\n },\r\n ),\r\n...\r\n])\r\n\r\n# in generate_examples\r\nwith open(filepath, encoding=\"utf-8\") as f:\r\n ...\r\n```",
"It's working: https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/viewer/Scifi_TV_Shows/test\r\n\r\n<img width=\"1494\" alt=\"Capture d’écran 2021-12-21 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/146914068-f4b7225f-42c5-471d-9c73-2adac722162f.png\">\r\n"
] | 1,636,745,119,000 | 1,640,082,250,000 | 1,640,082,250,000 | NONE | null | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance!
Am I the one who added this dataset? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3261/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3260/comments | https://api.github.com/repos/huggingface/datasets/issues/3260/events | https://github.com/huggingface/datasets/pull/3260 | 1,052,247,373 | PR_kwDODunzps4ueCIU | 3,260 | Fix ConnectionError in Scielo dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI error is unrelated to the change."
] | 1,636,740,157,000 | 1,637,086,697,000 | 1,637,085,322,000 | CONTRIBUTOR | null | This PR:
* allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint)
* makes the Scielo dataset streamable
Fixes #3255. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3260/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3260",
"html_url": "https://github.com/huggingface/datasets/pull/3260",
"diff_url": "https://github.com/huggingface/datasets/pull/3260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3260.patch",
"merged_at": 1637085322000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3259/comments | https://api.github.com/repos/huggingface/datasets/issues/3259/events | https://github.com/huggingface/datasets/pull/3259 | 1,052,189,775 | PR_kwDODunzps4ud5W3 | 3,259 | Updating details of IRC disentanglement data | {
"login": "jkkummerfeld",
"id": 1298052,
"node_id": "MDQ6VXNlcjEyOTgwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1298052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jkkummerfeld",
"html_url": "https://github.com/jkkummerfeld",
"followers_url": "https://api.github.com/users/jkkummerfeld/followers",
"following_url": "https://api.github.com/users/jkkummerfeld/following{/other_user}",
"gists_url": "https://api.github.com/users/jkkummerfeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jkkummerfeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jkkummerfeld/subscriptions",
"organizations_url": "https://api.github.com/users/jkkummerfeld/orgs",
"repos_url": "https://api.github.com/users/jkkummerfeld/repos",
"events_url": "https://api.github.com/users/jkkummerfeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/jkkummerfeld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for the cleanup!"
] | 1,636,737,418,000 | 1,637,255,973,000 | 1,637,255,973,000 | CONTRIBUTOR | null | I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3259/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3259",
"html_url": "https://github.com/huggingface/datasets/pull/3259",
"diff_url": "https://github.com/huggingface/datasets/pull/3259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3259.patch",
"merged_at": 1637255973000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3258/comments | https://api.github.com/repos/huggingface/datasets/issues/3258/events | https://github.com/huggingface/datasets/issues/3258 | 1,052,188,195 | I_kwDODunzps4-tx4j | 3,258 | Reload dataset that was already downloaded with `load_from_disk` from cloud storage | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,636,737,299,000 | 1,636,737,299,000 | null | MEMBER | null | `load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once.
It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3258/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3257/comments | https://api.github.com/repos/huggingface/datasets/issues/3257/events | https://github.com/huggingface/datasets/issues/3257 | 1,052,118,365 | I_kwDODunzps4-tg1d | 3,257 | Use f-strings for string formatting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | {
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Mehdi2402",
"id": 56029953,
"node_id": "MDQ6VXNlcjU2MDI5OTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mehdi2402",
"html_url": "https://github.com/Mehdi2402",
"followers_url": "https://api.github.com/users/Mehdi2402/followers",
"following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}",
"gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions",
"organizations_url": "https://api.github.com/users/Mehdi2402/orgs",
"repos_url": "https://api.github.com/users/Mehdi2402/repos",
"events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mehdi2402/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, I would be glad to help with this. Is there anyone else working on it?",
"Hi, I would be glad to work on this too.",
"#self-assign",
"Hi @Carlosbogo,\r\n\r\nwould you be interested in replacing the `.format` and `%` syntax with f-strings in the modules in the `datasets` directory since @Mehdi2402 has opened a PR that does that for all the other directories?",
"Oh I see. I will be glad to help with the `datasets` directory then."
] | 1,636,732,935,000 | 1,637,165,918,000 | 1,637,165,918,000 | CONTRIBUTOR | null | f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax.
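For anyone picking this up, a quick before/after illustration of the three styles — a generic example, not taken from the codebase:
```python
name, n_configs = "squad", 3

msg_percent = "Loaded %s with %d configs" % (name, n_configs)     # old `%` style
msg_format = "Loaded {} with {} configs".format(name, n_configs)  # old `str.format` style
msg_fstring = f"Loaded {name} with {n_configs} configs"           # preferred f-string style

assert msg_percent == msg_format == msg_fstring
```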
> **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3257/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3256/comments | https://api.github.com/repos/huggingface/datasets/issues/3256/events | https://github.com/huggingface/datasets/pull/3256 | 1,052,000,613 | PR_kwDODunzps4udTqg | 3,256 | asserts replaced by exception for text classification task with test. | {
"login": "manisnesan",
"id": 153142,
"node_id": "MDQ6VXNlcjE1MzE0Mg==",
"avatar_url": "https://avatars.githubusercontent.com/u/153142?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/manisnesan",
"html_url": "https://github.com/manisnesan",
"followers_url": "https://api.github.com/users/manisnesan/followers",
"following_url": "https://api.github.com/users/manisnesan/following{/other_user}",
"gists_url": "https://api.github.com/users/manisnesan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/manisnesan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manisnesan/subscriptions",
"organizations_url": "https://api.github.com/users/manisnesan/orgs",
"repos_url": "https://api.github.com/users/manisnesan/repos",
"events_url": "https://api.github.com/users/manisnesan/events{/privacy}",
"received_events_url": "https://api.github.com/users/manisnesan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !",
"Thanks for the feedback. "
] | 1,636,725,936,000 | 1,636,729,773,000 | 1,636,729,172,000 | CONTRIBUTOR | null | I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 .
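For context, the kind of conversion being applied is roughly the following — a standalone, made-up example of the pattern, not the actual code in `text_classification.py`:
```python
def check_label_column(features: dict, label_column: str) -> None:
    # Old style: assert label_column in features, f"Column {label_column} is not present in features."
    # New style: raise an explicit exception, which also survives `python -O` and is easier to test for
    if label_column not in features:
        raise ValueError(f"Column {label_column} is not present in features: {sorted(features)}.")
```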
I would like to first understand the code contribution workflow, so I am keeping the change to a single file rather than making too many changes at once. Once this gets approved, I will look into the rest.
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3256/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3256",
"html_url": "https://github.com/huggingface/datasets/pull/3256",
"diff_url": "https://github.com/huggingface/datasets/pull/3256.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3256.patch",
"merged_at": 1636729172000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3255/comments | https://api.github.com/repos/huggingface/datasets/issues/3255/events | https://github.com/huggingface/datasets/issues/3255 | 1,051,783,129 | I_kwDODunzps4-sO_Z | 3,255 | SciELO dataset ConnectionError | {
"login": "WojciechKusa",
"id": 2575047,
"node_id": "MDQ6VXNlcjI1NzUwNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2575047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WojciechKusa",
"html_url": "https://github.com/WojciechKusa",
"followers_url": "https://api.github.com/users/WojciechKusa/followers",
"following_url": "https://api.github.com/users/WojciechKusa/following{/other_user}",
"gists_url": "https://api.github.com/users/WojciechKusa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WojciechKusa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WojciechKusa/subscriptions",
"organizations_url": "https://api.github.com/users/WojciechKusa/orgs",
"repos_url": "https://api.github.com/users/WojciechKusa/repos",
"events_url": "https://api.github.com/users/WojciechKusa/events{/privacy}",
"received_events_url": "https://api.github.com/users/WojciechKusa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,636,711,034,000 | 1,637,085,322,000 | 1,637,085,322,000 | NONE | null | ## Describe the bug
I get `ConnectionError` when I am trying to load the SciELO dataset.
When I try the URL with `requests` I get:
```
>>> requests.head("https://ndownloader.figstatic.com/files/14019287")
<Response [302]>
```
As far as I understand, redirections are not supported by `datasets` for downloads:
https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45
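For reference, letting `requests` follow the redirect shows where the file actually lives (illustrative snippet, not part of the dataset script):
```python
import requests

url = "https://ndownloader.figstatic.com/files/14019287"
response = requests.head(url, allow_redirects=True)

print([r.status_code for r in response.history])  # the intermediate redirects, e.g. [302]
print(response.url)                                # the final, resolved download URL
```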
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("scielo", "en-es")
```
## Expected results
Download SciELO dataset and load Dataset object
## Actual results
```
Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e...
Traceback (most recent call last):
File "scielo.py", line 3, in <module>
dataset = load_dataset("scielo", "en-es")
File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators
data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested
return function(data_struct)
File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path
output_path = get_from_cache(
File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3255/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3254/comments | https://api.github.com/repos/huggingface/datasets/issues/3254/events | https://github.com/huggingface/datasets/pull/3254 | 1,051,351,172 | PR_kwDODunzps4ubPwR | 3,254 | Update xcopa dataset (fix checksum issues + add translated data) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)."
] | 1,636,663,893,000 | 1,636,713,058,000 | 1,636,713,057,000 | CONTRIBUTOR | null | This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3254/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3254/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3254",
"html_url": "https://github.com/huggingface/datasets/pull/3254",
"diff_url": "https://github.com/huggingface/datasets/pull/3254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3254.patch",
"merged_at": 1636713057000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3253/comments | https://api.github.com/repos/huggingface/datasets/issues/3253/events | https://github.com/huggingface/datasets/issues/3253 | 1,051,308,972 | I_kwDODunzps4-qbOs | 3,253 | `GeneratorBasedBuilder` does not support `None` values | {
"login": "pavel-lexyr",
"id": 69010336,
"node_id": "MDQ6VXNlcjY5MDEwMzM2",
"avatar_url": "https://avatars.githubusercontent.com/u/69010336?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavel-lexyr",
"html_url": "https://github.com/pavel-lexyr",
"followers_url": "https://api.github.com/users/pavel-lexyr/followers",
"following_url": "https://api.github.com/users/pavel-lexyr/following{/other_user}",
"gists_url": "https://api.github.com/users/pavel-lexyr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavel-lexyr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavel-lexyr/subscriptions",
"organizations_url": "https://api.github.com/users/pavel-lexyr/orgs",
"repos_url": "https://api.github.com/users/pavel-lexyr/repos",
"events_url": "https://api.github.com/users/pavel-lexyr/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavel-lexyr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\n\r\nI expect that PR to be merged soon."
] | 1,636,660,281,000 | 1,639,060,018,000 | 1,639,060,018,000 | NONE | null | ## Describe the bug
`GeneratorBasedBuilder` does not support `None` values.
## Steps to reproduce the bug
See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.
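The failure can also be condensed into a few lines — a sketch of what the builder ends up calling, based on the traceback below (using `datasets` 1.15.1):
```python
from datasets import Features, Value

features = Features({"value": Value("float32")})
features.encode_example({"value": None})
# TypeError: float() argument must be a string or a number, not 'NoneType'
```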
## Expected results
Dataset is initialized with a `None` value in the `value` column.
## Actual results
```
Traceback (most recent call last):
File "main.py", line 3, in <module>
datasets.load_dataset("./bad-data")
File ".../datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File ".../datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File ".../datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File ".../datasets/builder.py", line 1103, in _prepare_split
example = self.info.features.encode_example(record)
File ".../datasets/features/features.py", line 1033, in encode_example
return encode_nested_example(self, example)
File ".../datasets/features/features.py", line 808, in encode_nested_example
return {
File ".../datasets/features/features.py", line 809, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File ".../datasets/features/features.py", line 855, in encode_nested_example
return schema.encode_example(obj)
File ".../datasets/features/features.py", line 299, in encode_example
return float(value)
TypeError: float() argument must be a string or a number, not 'NoneType'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3253/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3252/comments | https://api.github.com/repos/huggingface/datasets/issues/3252/events | https://github.com/huggingface/datasets/pull/3252 | 1,051,124,749 | PR_kwDODunzps4uagoy | 3,252 | Fix failing CER metric test in CI after update | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,646,236,000 | 1,636,726,004,000 | 1,636,726,003,000 | CONTRIBUTOR | null | Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3252/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3252/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3252",
"html_url": "https://github.com/huggingface/datasets/pull/3252",
"diff_url": "https://github.com/huggingface/datasets/pull/3252.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3252.patch",
"merged_at": 1636726003000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3250/comments | https://api.github.com/repos/huggingface/datasets/issues/3250/events | https://github.com/huggingface/datasets/pull/3250 | 1,050,541,348 | PR_kwDODunzps4uYmkr | 3,250 | Add ETHICS dataset | {
"login": "ssss1029",
"id": 7088559,
"node_id": "MDQ6VXNlcjcwODg1NTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7088559?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ssss1029",
"html_url": "https://github.com/ssss1029",
"followers_url": "https://api.github.com/users/ssss1029/followers",
"following_url": "https://api.github.com/users/ssss1029/following{/other_user}",
"gists_url": "https://api.github.com/users/ssss1029/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ssss1029/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssss1029/subscriptions",
"organizations_url": "https://api.github.com/users/ssss1029/orgs",
"repos_url": "https://api.github.com/users/ssss1029/repos",
"events_url": "https://api.github.com/users/ssss1029/events{/privacy}",
"received_events_url": "https://api.github.com/users/ssss1029/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,636,602,334,000 | 1,637,087,545,000 | null | NONE | null | This PR adds the ETHICS dataset, including all 5 sub-datasets.
From https://arxiv.org/abs/2008.02275 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3250/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/3250/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3250",
"html_url": "https://github.com/huggingface/datasets/pull/3250",
"diff_url": "https://github.com/huggingface/datasets/pull/3250.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3250.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3249/comments | https://api.github.com/repos/huggingface/datasets/issues/3249/events | https://github.com/huggingface/datasets/pull/3249 | 1,050,193,138 | PR_kwDODunzps4uXeea | 3,249 | Fix streaming for id_newspapers_2018 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,570,530,000 | 1,636,725,692,000 | 1,636,725,691,000 | MEMBER | null | To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3249/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3249/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3249",
"html_url": "https://github.com/huggingface/datasets/pull/3249",
"diff_url": "https://github.com/huggingface/datasets/pull/3249.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3249.patch",
"merged_at": 1636725691000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3248/comments | https://api.github.com/repos/huggingface/datasets/issues/3248/events | https://github.com/huggingface/datasets/pull/3248 | 1,050,171,082 | PR_kwDODunzps4uXZzU | 3,248 | Stream from Google Drive and other hosts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just tried some datasets and noticed that `spider` is not working for some reason (the compression type is not recognized), resulting in FileNotFoundError. I can take a look tomorrow",
"I'm fixing the remaining files based on TAR archives",
"THANKS A LOT"
] | 1,636,569,152,000 | 1,638,288,223,000 | 1,636,737,491,000 | MEMBER | null | Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting:
- the download URL must be updated to add the confirm token obtained by HEAD request
- it requires using cookies to keep the connection alive
- the URL doesn't tell any information about whether the file is compressed or not
Therefore I did two things:
- I added a step for URL and headers/cookies preparation in the StreamingDownloadManager
- I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures)
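As an aside, a minimal sketch of what magic-number-based inference can look like — the signatures are the standard ones, but the helper below is made up for illustration and is not the actual implementation:
```python
from typing import Optional

# The first bytes of a file identify common compression formats even when the URL gives no hint.
MAGIC_NUMBERS = {
    b"\x1f\x8b": "gzip",
    b"PK\x03\x04": "zip",
    b"BZh": "bz2",
    b"\xfd7zXZ\x00": "xz",
}

def guess_compression(first_bytes: bytes) -> Optional[str]:
    for magic, compression in MAGIC_NUMBERS.items():
        if first_bytes.startswith(magic):
            return compression
    return None
```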
Together, these two changes make it possible to do fancy things like:
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob
# zip file containing a train.tsv file
url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh"
extracted = StreamingDownloadManager().download_and_extract(url)
for inner_file in xglob(xjoin(extracted, "*.tsv")):
    with xopen(inner_file) as f:
        # streaming starts here
        for line in f:
            print(line)
```
This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list:
```
amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail,
code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans,
code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14,
gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018,
igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa,
mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary,
poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo,
search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner,
twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018,
wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3
```
Some of them may not work if the host doesn't support HTTP range requests for example
Fix https://github.com/huggingface/datasets/issues/2742
Fix https://github.com/huggingface/datasets/issues/3188 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3248/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3248/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3248",
"html_url": "https://github.com/huggingface/datasets/pull/3248",
"diff_url": "https://github.com/huggingface/datasets/pull/3248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3248.patch",
"merged_at": 1636737490000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3247/comments | https://api.github.com/repos/huggingface/datasets/issues/3247/events | https://github.com/huggingface/datasets/issues/3247 | 1,049,699,088 | I_kwDODunzps4-kSMQ | 3,247 | Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError | {
"login": "maxzirps",
"id": 29249513,
"node_id": "MDQ6VXNlcjI5MjQ5NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/29249513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxzirps",
"html_url": "https://github.com/maxzirps",
"followers_url": "https://api.github.com/users/maxzirps/followers",
"following_url": "https://api.github.com/users/maxzirps/following{/other_user}",
"gists_url": "https://api.github.com/users/maxzirps/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxzirps/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxzirps/subscriptions",
"organizations_url": "https://api.github.com/users/maxzirps/orgs",
"repos_url": "https://api.github.com/users/maxzirps/repos",
"events_url": "https://api.github.com/users/maxzirps/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxzirps/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis issue is similar to https://github.com/huggingface/datasets/issues/3093, so you can either use the solution provided there or try to load the data in one chunk (you can control the chunk size by specifying the `chunksize` parameter (`int`) in `load_dataset`).\r\n\r\n@lhoestq Is this worth opening an issue on Jira? Basically, PyArrow doesn't allow casts that change the order of the struct fields because they treat `pa.struct` as an ordered sequence. Reordering fields manually in Python is probably too slow, so I think this needs to be fixed by them to be usable on our side.",
"I agree I would expect PyArrow to be able to handle this, do you want to open the issue @mariosasko ?\r\nAlthough maybe it's possible to fix struct casting on our side without hurting performance too much, if it's simply a matter of reordering the arrays in the StructArray",
"Fixed in #3575, so I'm closing this issue."
] | 1,636,543,079,000 | 1,649,599,557,000 | 1,649,599,557,000 | NONE | null | ## Describe the bug
When trying to create a dataset from a JSON file of around 25 MB, the following error is raised: `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`.
Splitting the big file into smaller ones and then loading them with the `load_dataset` method did not work either.
Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works.
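For reference, the pandas route mentioned above looks roughly like this — a sketch assuming the file is newline-delimited JSON like the samples below:
```python
import pandas as pd
from datasets import Dataset

df = pd.read_json("test.json", lines=True)  # one JSON object per line
dataset = Dataset.from_pandas(df)
```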
## Steps to reproduce the bug
```python
load_dataset("json", data_files="test.json")
```
test.json ~25MB
```json
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
...
```
working.json ~160bytes
```json
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
{"a": {"c": 8, "b": 5}}
```
## Expected results
It should load the dataset from the json file without error.
## Actual results
It raises Exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct`
```
Traceback (most recent call last):
File "/Users/m/workspace/xxx/project/main.py", line 60, in <module>
dataset = load_dataset("json", data_files="result.json")
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset
builder_instance.download_and_prepare(
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast
File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.14.0
- Platform: macOS-12.0.1-arm64-arm-64bit
- Python version: 3.9.7
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3247/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3246/comments | https://api.github.com/repos/huggingface/datasets/issues/3246/events | https://github.com/huggingface/datasets/pull/3246 | 1,049,662,746 | PR_kwDODunzps4uVvaW | 3,246 | [tiny] fix typo in stream docs | {
"login": "nollied",
"id": 26421036,
"node_id": "MDQ6VXNlcjI2NDIxMDM2",
"avatar_url": "https://avatars.githubusercontent.com/u/26421036?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nollied",
"html_url": "https://github.com/nollied",
"followers_url": "https://api.github.com/users/nollied/followers",
"following_url": "https://api.github.com/users/nollied/following{/other_user}",
"gists_url": "https://api.github.com/users/nollied/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nollied/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nollied/subscriptions",
"organizations_url": "https://api.github.com/users/nollied/orgs",
"repos_url": "https://api.github.com/users/nollied/repos",
"events_url": "https://api.github.com/users/nollied/events{/privacy}",
"received_events_url": "https://api.github.com/users/nollied/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,540,802,000 | 1,636,542,639,000 | 1,636,542,639,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3246/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3246",
"html_url": "https://github.com/huggingface/datasets/pull/3246",
"diff_url": "https://github.com/huggingface/datasets/pull/3246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3246.patch",
"merged_at": 1636542639000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3245/comments | https://api.github.com/repos/huggingface/datasets/issues/3245/events | https://github.com/huggingface/datasets/pull/3245 | 1,048,726,062 | PR_kwDODunzps4uSqqq | 3,245 | Fix load_from_disk temporary directory | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,470,915,000 | 1,636,471,852,000 | 1,636,471,851,000 | MEMBER | null | `load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected.
In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, because it can't write the shuffled indices in a directory that doesn't exist anymore.
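To illustrate the failure mode, a simplified sketch of the old behavior — not the library code, and the paths are made up:
```python
import tempfile

def download_to_temp_dir(remote_path: str) -> str:
    tmp_dir = tempfile.TemporaryDirectory()
    # ...download the dataset files from `remote_path` into tmp_dir.name...
    return tmp_dir.name  # `tmp_dir` is garbage-collected after returning, which deletes the directory

local_path = download_to_temp_dir("s3://bucket/my_dataset")
# Anything that later reads from or writes next to `local_path` (e.g. shuffle indices)
# can find that the directory no longer exists.
```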
In this PR I switch to using `get_temporary_cache_files_directory()` and I update the tests.
cc @mariosasko since you worked on `get_temporary_cache_files_directory()` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3245/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3245/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3245",
"html_url": "https://github.com/huggingface/datasets/pull/3245",
"diff_url": "https://github.com/huggingface/datasets/pull/3245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3245.patch",
"merged_at": 1636471851000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3244/comments | https://api.github.com/repos/huggingface/datasets/issues/3244/events | https://github.com/huggingface/datasets/pull/3244 | 1,048,675,741 | PR_kwDODunzps4uSgG5 | 3,244 | Fix filter method for batched=True | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,468,259,000 | 1,636,473,178,000 | 1,636,473,177,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3244/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3244",
"html_url": "https://github.com/huggingface/datasets/pull/3244",
"diff_url": "https://github.com/huggingface/datasets/pull/3244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3244.patch",
"merged_at": 1636473177000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3243/comments | https://api.github.com/repos/huggingface/datasets/issues/3243/events | https://github.com/huggingface/datasets/pull/3243 | 1,048,630,754 | PR_kwDODunzps4uSWtB | 3,243 | Remove redundant isort module placement | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,465,830,000 | 1,636,725,765,000 | 1,636,725,765,000 | CONTRIBUTOR | null | `isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3243/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3243",
"html_url": "https://github.com/huggingface/datasets/pull/3243",
"diff_url": "https://github.com/huggingface/datasets/pull/3243.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3243.patch",
"merged_at": 1636725765000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3242/comments | https://api.github.com/repos/huggingface/datasets/issues/3242/events | https://github.com/huggingface/datasets/issues/3242 | 1,048,527,232 | I_kwDODunzps4-f0GA | 3,242 | Adding ANERcorp-CAMeLLab dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Adding ANERcorp dataset\r\n\r\n## Adding a Dataset\r\n- **Name:** *ANERcorp-CAMeLLab*\r\n- **Description:** *Since its creation in 2008, the ANERcorp dataset (Benajiba & Rosso, 2008) has been a standard reference used by Arabic named entity recognition researchers around the world. However, over time, this dataset was copied over from user to user, modified slightly here and there, and split in many different configurations that made it hard to compare fairly across papers and systems.\r\n\r\nIn 2020, a group of researchers from CAMeL Lab (Habash, Alhafni and Oudah), and Mind Lab (Antoun and Baly) met with the creator of the corpus, Yassine Benajiba, to consult with him and collectively agree on an exact split, and accepted minor corrections from the original dataset. Bashar Alhafni from CAMeL Lab working with Nizar Habash implemented the decisions provided in this release.*\r\n\r\n- **Paper:** *(a) Benajiba, Yassine, Paolo Rosso, and José Miguel Benedí Ruiz. \"Anersys: An Arabic named entity recognition system based on maximum entropy.\" In International Conference on Intelligent Text Processing and Computational Linguistics, pp. 143-153. Springer, Berlin, Heidelberg, 2007.\r\n\r\n(b)Ossama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, and Nizar Habash. \"CAMeL Tools: An Open Source Python Toolkit, for Arabic Natural Language Processing.\" In Proceedings of the Conference on Language Resources and Evaluation (LREC 2020), Marseille, 2020.*\r\n- **Data:** *https://camel.abudhabi.nyu.edu/anercorp/*\r\n- **Motivation:** This is the standard dataset for evaluating NER performance in Arabic*\r\n\r\nInstructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)."
] | 1,636,459,444,000 | 1,636,461,675,000 | null | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3242/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3241/comments | https://api.github.com/repos/huggingface/datasets/issues/3241/events | https://github.com/huggingface/datasets/pull/3241 | 1,048,461,852 | PR_kwDODunzps4uRzHa | 3,241 | Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,455,255,000 | 1,644,853,560,000 | 1,636,465,768,000 | MEMBER | null | Fix #3237, fix #795. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3241/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3241",
"html_url": "https://github.com/huggingface/datasets/pull/3241",
"diff_url": "https://github.com/huggingface/datasets/pull/3241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3241.patch",
"merged_at": 1636465768000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3240/comments | https://api.github.com/repos/huggingface/datasets/issues/3240/events | https://github.com/huggingface/datasets/issues/3240 | 1,048,376,021 | I_kwDODunzps4-fPLV | 3,240 | Couldn't reach data file for disaster_response_messages | {
"login": "pandya6988",
"id": 81331791,
"node_id": "MDQ6VXNlcjgxMzMxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/81331791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pandya6988",
"html_url": "https://github.com/pandya6988",
"followers_url": "https://api.github.com/users/pandya6988/followers",
"following_url": "https://api.github.com/users/pandya6988/following{/other_user}",
"gists_url": "https://api.github.com/users/pandya6988/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pandya6988/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pandya6988/subscriptions",
"organizations_url": "https://api.github.com/users/pandya6988/orgs",
"repos_url": "https://api.github.com/users/pandya6988/repos",
"events_url": "https://api.github.com/users/pandya6988/events{/privacy}",
"received_events_url": "https://api.github.com/users/pandya6988/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"It looks like the dataset isn't available anymore on appen.com\r\n\r\nThe CSV files appear to still be available at https://www.kaggle.com/landlord/multilingual-disaster-response-messages though. It says that the data are under the CC0 license so I guess we can host the dataset elsewhere instead ?"
] | 1,636,450,002,000 | 1,639,492,709,000 | 1,639,492,709,000 | NONE | null | ## Describe the bug
The following command gives a ConnectionError.
## Steps to reproduce the bug
```python
from datasets import load_dataset

disaster = load_dataset('disaster_response_messages')
```
## Error
```
ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv
```
## Expected results
It should load the dataset without an error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Google Colab
- Python version: 3.7
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3240/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3239/comments | https://api.github.com/repos/huggingface/datasets/issues/3239/events | https://github.com/huggingface/datasets/issues/3239 | 1,048,360,232 | I_kwDODunzps4-fLUo | 3,239 | Inconsistent performance of the "arabic_billion_words" dataset | {
"login": "vitalyshalumov",
"id": 33824221,
"node_id": "MDQ6VXNlcjMzODI0MjIx",
"avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vitalyshalumov",
"html_url": "https://github.com/vitalyshalumov",
"followers_url": "https://api.github.com/users/vitalyshalumov/followers",
"following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}",
"gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions",
"organizations_url": "https://api.github.com/users/vitalyshalumov/orgs",
"repos_url": "https://api.github.com/users/vitalyshalumov/repos",
"events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}",
"received_events_url": "https://api.github.com/users/vitalyshalumov/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,636,449,060,000 | 1,636,449,060,000 | null | NONE | null | ## Describe the bug
When downloaded from machine 1, the dataset is downloaded and parsed correctly.
When downloaded from machine 2 (which has a different cache directory),
the following script:

```python
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```

gives the following error:
**Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348M/348M [00:24<00:00, 14.0MB/s]
Traceback (most recent call last):
File ".../why_mismatch.py", line 3, in <module>
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]**
Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical on both machines.
## Steps to reproduce the bug

```python
# Sample code to reproduce the bug
import datasets
from datasets import load_dataset

raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train", download_mode='force_redownload')
```
## Expected results
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17...
Downloading: 100%|███████████████████████████| 348M/348M [00:22<00:00, 15.8MB/s]
Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
Machine 1:
- `datasets` version: 1.15.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
Machine 2 (the bugged one)
- `datasets` version: 1.15.1
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 6.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3239/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3238/comments | https://api.github.com/repos/huggingface/datasets/issues/3238/events | https://github.com/huggingface/datasets/issues/3238 | 1,048,226,086 | I_kwDODunzps4-eqkm | 3,238 | Reuters21578 Couldn't reach | {
"login": "TingNLP",
"id": 54096137,
"node_id": "MDQ6VXNlcjU0MDk2MTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TingNLP",
"html_url": "https://github.com/TingNLP",
"followers_url": "https://api.github.com/users/TingNLP/followers",
"following_url": "https://api.github.com/users/TingNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions",
"organizations_url": "https://api.github.com/users/TingNLP/orgs",
"repos_url": "https://api.github.com/users/TingNLP/repos",
"events_url": "https://api.github.com/users/TingNLP/events{/privacy}",
"received_events_url": "https://api.github.com/users/TingNLP/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi ! The URL works fine on my side today, could you try again ?",
"thank you @lhoestq \r\nit works"
] | 1,636,438,136,000 | 1,636,588,977,000 | 1,636,588,977,000 | NONE | null | ## Adding a Dataset
- **Name:** *Reuters21578*
- **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz*
- **Data:** *https://huggingface.co/datasets/reuters21578*
`from datasets import load_dataset`
`dataset = load_dataset("reuters21578", 'ModLewis')`
ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz
And I tried to request the link as follows:
`import requests`
`requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')`
SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
This problem is similar to #575.
What should I do?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3238/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3237/comments | https://api.github.com/repos/huggingface/datasets/issues/3237/events | https://github.com/huggingface/datasets/issues/3237 | 1,048,165,525 | I_kwDODunzps4-ebyV | 3,237 | wikitext description wrong | {
"login": "hongyuanmei",
"id": 19693633,
"node_id": "MDQ6VXNlcjE5NjkzNjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/19693633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hongyuanmei",
"html_url": "https://github.com/hongyuanmei",
"followers_url": "https://api.github.com/users/hongyuanmei/followers",
"following_url": "https://api.github.com/users/hongyuanmei/following{/other_user}",
"gists_url": "https://api.github.com/users/hongyuanmei/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hongyuanmei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hongyuanmei/subscriptions",
"organizations_url": "https://api.github.com/users/hongyuanmei/orgs",
"repos_url": "https://api.github.com/users/hongyuanmei/repos",
"events_url": "https://api.github.com/users/hongyuanmei/events{/privacy}",
"received_events_url": "https://api.github.com/users/hongyuanmei/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it.",
"Duplicate of:\r\n- #795"
] | 1,636,430,812,000 | 1,644,853,511,000 | 1,636,465,768,000 | NONE | null | ## Describe the bug
The descriptions of the wikitext datasets are wrong.
## Steps to reproduce the bug
Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50
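For illustration, a quick sketch to surface both descriptions without downloading the data (assuming the standard `load_dataset_builder` API; config names as listed on the Hub):
```python
from datasets import load_dataset_builder

# Per this report, the description attached to each config actually describes the other one
raw = load_dataset_builder("wikitext", "wikitext-2-raw-v1")
non_raw = load_dataset_builder("wikitext", "wikitext-2-v1")
print(raw.info.description)
print(non_raw.info.description)
```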
## Expected results
The descriptions for raw-v1 and v1 should be switched. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3237/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3236/comments | https://api.github.com/repos/huggingface/datasets/issues/3236/events | https://github.com/huggingface/datasets/issues/3236 | 1,048,026,358 | I_kwDODunzps4-d5z2 | 3,236 | Loading of datasets changed in #3110 returns no examples | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @eladsegal, thanks for reporting.\r\n\r\nI am sorry, but I can't reproduce the bug:\r\n```\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"qasper\")\r\nDownloading: 5.11kB [00:00, ?B/s]\r\nDownloading and preparing dataset qasper/qasper (download: 9.88 MiB, generated: 35.11 MiB, post-processed: Unknown size, total: 44.99 MiB) to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8...\r\nDataset qasper downloaded and prepared to .cache\\qasper\\qasper\\0.1.0\\b99154d2a15aa54bfc669f82b2eda715a2e342e81023d39613b0e2920fdb3ad8. Subsequent calls will reuse this data.\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<?, ?it/s]\r\n\r\nIn [3]: ds\r\nOut[3]:\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 888\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'abstract', 'full_text', 'qas'],\r\n num_rows: 281\r\n })\r\n})\r\n``` \r\n\r\nThis makes me suspect that the origin of the problem might be the cache: I didn't have this dataset in my cache, although I guess you already had it, before the code change introduced by #3110.\r\n\r\n@lhoestq might it be possible that the code change introduced by #3110 makes \"inaccessible\" all previously cached TAR-based datasets?\r\n- Before the caching system downloaded and extracted the tar dataset\r\n- Now it only downloads the tar dataset (no extraction is done)",
"I can't reproduce either in my environment (macos, python 3.7).\r\n\r\nIn your case it generates zero examples. This can only happen if the extraction of the TAR archive doesn't output the right filenames. Indeed if the `qasper` script can't find the right file to load, it's currently ignored and it returns zero examples. This case was not even considered when #3110 was developed since we considered the file names to be deterministic - and not depend on your environment.\r\n\r\nTherefore here is my hypothesis:\r\n- either the cache is corrupted somehow with an empty TAR archive\r\n- OR I suspect that the issue comes from python 3.8\r\n",
"I just tried again on python 3.8 and I was able to reproduce the issue. Let me work on a fix",
"Ok I found the issue. It's not related to python 3.8 in itself though. This issue happens because your local installation of `datasets` is outdated compared to the changes to datasets in #3110\r\n\r\nTo fix this you just have to pull the latest changes from `master` :)\r\n\r\nLet me know if that helps !\r\n\r\n--------------\r\n\r\nHere are more details about my investigation:\r\n\r\nIt's possible to reproduce this issue if you use `datasets<=1.15.1` or before b6469baa22c174b3906c631802a7016fedea6780 and if you load the dataset after revision b6469baa22c174b3906c631802a7016fedea6780. This is because `dl_manager.iter_archive` had issues at that time (and it was not used anywhere anyway).\r\n\r\nIn particular it was returning the absolute path to extracted files instead of the relative path of the file inside the archive. This was an issue because `dl_manager.iter_archive` isn't supposed to extract the TAR archive. Instead, it iterates over all the files inside the archive, without creating a directory with the extracted content.\r\n\r\nTherefore if you want to use the datasets on `master`, make sure that you have an up-to-date local installation of `datasets` as well, or you may face incompatibilities like this.",
"Thanks!\r\nBut what about code that is already using older version of datasets? \r\nThe reason I encountered this issue was that suddenly one of my repos with version 1.12.1 started getting 0 examples.\r\nI handled it by adding `revision` to `load_dataset`, but I guess it would still be an issue for other users who doesn't know this.",
"Hi, in 1.12.1 it uses the dataset scripts from that time, not the one on master.\r\n\r\nIt only uses the datasets from master if you installed `datasets` from source, or if the dataset isn't available in your local version (in this case it shows a warning and it loads from master).\r\n",
"OK, I understand the issue a bit better now.\r\nI see I wasn't on 1.12.1, but on 1.12.1.dev0 and since it is a dev version it uses master.\r\nSo users that use an old dev version must specify revision or else they'll encounter this problem.\r\n\r\nBTW, when I opened the issue I installed the latest master version with\r\n```\r\npip install git+git://github.com/huggingface/datasets@master#egg=datasets\r\n```\r\nand also used `download_mode=\"force_redownload\"`, and it still returned 0 examples.\r\nNow I deleted all of the cache and ran the code again, and it worked.\r\nI'm not sure what exactly happened here, but looks like it was due to a mix of an unofficial version and its cache.\r\n\r\nThanks again!"
] | 1,636,414,186,000 | 1,636,476,365,000 | 1,636,476,347,000 | CONTRIBUTOR | null | ## Describe the bug
Loading of datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples:
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 0
})
})
```
## Steps to reproduce the bug
Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper")
# The problem only started with the commit of #3110
load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780")
```
## Expected results
```python
DatasetDict({
train: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 888
})
validation: Dataset({
features: ['id', 'title', 'abstract', 'full_text', 'qas'],
num_rows: 281
})
})
```
Which can be received when specifying revision of the commit before https://github.com/huggingface/datasets/pull/3110:
```python
from datasets import load_dataset
load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d")
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.2.dev0 (master)
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3236/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3235/comments | https://api.github.com/repos/huggingface/datasets/issues/3235/events | https://github.com/huggingface/datasets/pull/3235 | 1,047,808,263 | PR_kwDODunzps4uPr9Z | 3,235 | Add options to use updated bleurt checkpoints | {
"login": "jaehlee",
"id": 11873078,
"node_id": "MDQ6VXNlcjExODczMDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/11873078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaehlee",
"html_url": "https://github.com/jaehlee",
"followers_url": "https://api.github.com/users/jaehlee/followers",
"following_url": "https://api.github.com/users/jaehlee/following{/other_user}",
"gists_url": "https://api.github.com/users/jaehlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaehlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaehlee/subscriptions",
"organizations_url": "https://api.github.com/users/jaehlee/orgs",
"repos_url": "https://api.github.com/users/jaehlee/repos",
"events_url": "https://api.github.com/users/jaehlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaehlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,397,634,000 | 1,636,725,928,000 | 1,636,725,928,000 | CONTRIBUTOR | null | Adds options to use newer recommended checkpoint (as of 2021/10/8) bleurt-20 and its distilled versions.
Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20
This change won't affect the default behavior of metrics/bleurt. It only adds the option to load newer checkpoints via
`datasets.load_metric('bleurt', 'bleurt-20')`
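For illustration, a minimal usage sketch with the new checkpoint (the inputs and printed values are only examples; loading any BLEURT checkpoint still requires the `bleurt` package to be installed):
```python
from datasets import load_metric

# Load the updated BLEURT-20 checkpoint instead of the default one
bleurt = load_metric('bleurt', 'bleurt-20')

scores = bleurt.compute(
    predictions=["the cat sat on the mat"],
    references=["a cat was sitting on the mat"],
)
print(scores["scores"])  # one score per prediction
```
The distilled variants described on the checkpoints page should load the same way, by passing their name as the config.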
`bleurt-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3235/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3235",
"html_url": "https://github.com/huggingface/datasets/pull/3235",
"diff_url": "https://github.com/huggingface/datasets/pull/3235.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3235.patch",
"merged_at": 1636725928000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3234/comments | https://api.github.com/repos/huggingface/datasets/issues/3234/events | https://github.com/huggingface/datasets/pull/3234 | 1,047,634,236 | PR_kwDODunzps4uPHRk | 3,234 | Avoid PyArrow type optimization if it fails | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That's good to have a way to disable this easily :)\r\nI just find it a bit unfortunate that users would have to experience the error once and then do `DISABLE_PYARROW_TYPES_OPTIMIZATION=1`. Do you know if there's a way to simply fallback on disabling it automatically when it fails ?",
"@lhoestq Actually, I agree a fallback makes more sense. The current approach is not very practical indeed and would require a mention in the docs.\r\n",
"Replaced the env variable with a fallback!",
"Hmm if the fallback automatically happens without the user knowing it, then I don't think we really need to mention it. But if you really wanted to, I think the [Improve performance](https://huggingface.co/docs/datasets/cache.html#improve-performance) section would be a great place for it! ",
"Yea I think this could just end up in a note that says that `datasets` automatically picks the most optimized integer precision for your tokenized text data to save you disk space. Maybe later if we have a page on text processing we can add this note, but for now I agree it doesn't fit well into the doc.\r\n\r\nIn particular in the \"Improve performance\" section we mention what users can do to speed up their computations, while this behavior is just some internal feature that users don't have control over anyway."
] | 1,636,387,827,000 | 1,636,545,869,000 | 1,636,545,868,000 | CONTRIBUTOR | null | Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization.
Fix #2206 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3234/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3234",
"html_url": "https://github.com/huggingface/datasets/pull/3234",
"diff_url": "https://github.com/huggingface/datasets/pull/3234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3234.patch",
"merged_at": 1636545868000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3233/comments | https://api.github.com/repos/huggingface/datasets/issues/3233/events | https://github.com/huggingface/datasets/pull/3233 | 1,047,474,931 | PR_kwDODunzps4uOl9- | 3,233 | Improve repository structure docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,379,495,000 | 1,636,452,138,000 | 1,636,452,137,000 | MEMBER | null | Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3233/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3233",
"html_url": "https://github.com/huggingface/datasets/pull/3233",
"diff_url": "https://github.com/huggingface/datasets/pull/3233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3233.patch",
"merged_at": 1636452137000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3232/comments | https://api.github.com/repos/huggingface/datasets/issues/3232/events | https://github.com/huggingface/datasets/issues/3232 | 1,047,361,573 | I_kwDODunzps4-bXgl | 3,232 | The Xsum datasets seems not able to download. | {
"login": "FYYFU",
"id": 37999885,
"node_id": "MDQ6VXNlcjM3OTk5ODg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37999885?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/FYYFU",
"html_url": "https://github.com/FYYFU",
"followers_url": "https://api.github.com/users/FYYFU/followers",
"following_url": "https://api.github.com/users/FYYFU/following{/other_user}",
"gists_url": "https://api.github.com/users/FYYFU/gists{/gist_id}",
"starred_url": "https://api.github.com/users/FYYFU/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/FYYFU/subscriptions",
"organizations_url": "https://api.github.com/users/FYYFU/orgs",
"repos_url": "https://api.github.com/users/FYYFU/repos",
"events_url": "https://api.github.com/users/FYYFU/events{/privacy}",
"received_events_url": "https://api.github.com/users/FYYFU/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! On my side the URL is working fine, could you try again ?",
"> Hi ! On my side the URL is working fine, could you try again ?\r\n\r\nI try it again and cannot download the file (might because of my location). Could you please provide another download link(such as google drive)? :>",
"I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.",
"> I don't know other download links - this is the one provided by the authors of the dataset. Maybe you can try downloading from another location ? There are several solutions: a VPN, a remote VM or Google Colab for example.\r\n\r\n:> ok. Thanks for your reply."
] | 1,636,372,734,000 | 1,636,470,436,000 | 1,636,470,436,000 | NONE | null | ## Describe the bug
The download link of the XSum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It seems the file cannot be downloaded.
## Steps to reproduce the bug
```python
from datasets import load_dataset

load_dataset('xsum')
```
## Actual results
``` python
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3232/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3231/comments | https://api.github.com/repos/huggingface/datasets/issues/3231/events | https://github.com/huggingface/datasets/pull/3231 | 1,047,170,906 | PR_kwDODunzps4uNmWT | 3,231 | Group tests in multiprocessing workers by test file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,361,163,000 | 1,636,377,558,000 | 1,636,361,984,000 | MEMBER | null | By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker.
Therefore, the fixture `hf_token` will be called only once (and from the same worker).
Related to: #3200.
Fix #3219. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3231/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3231",
"html_url": "https://github.com/huggingface/datasets/pull/3231",
"diff_url": "https://github.com/huggingface/datasets/pull/3231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3231.patch",
"merged_at": 1636361983000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3230/comments | https://api.github.com/repos/huggingface/datasets/issues/3230/events | https://github.com/huggingface/datasets/pull/3230 | 1,047,135,583 | PR_kwDODunzps4uNfEd | 3,230 | Add full tagset to conll2003 README | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also added the missing `pretty_name` tag in the dataset card to fix the CI"
] | 1,636,358,764,000 | 1,636,454,918,000 | 1,636,454,458,000 | CONTRIBUTOR | null | Even though it is possible to manually get the tagset list with
```python
dset.features[field_name].feature.names
```
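As a concrete illustration for conll2003 (a quick sketch; the tag names shown are the NER labels as encoded in the current dataset script):
```python
from datasets import load_dataset

# Inspect the encoded NER tagset of conll2003
conll = load_dataset("conll2003", split="train")
print(conll.features["ner_tags"].feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
```
The same lookup works for the `pos_tags` and `chunk_tags` fields.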
I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean.
From a user-experience perspective, I would urge that the full tagsets always be available in the READMEs, but I understand that would probably take a lot of work. Perhaps it can be automated?
closes #3189 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3230/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3230/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3230",
"html_url": "https://github.com/huggingface/datasets/pull/3230",
"diff_url": "https://github.com/huggingface/datasets/pull/3230.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3230.patch",
"merged_at": 1636454458000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3229/comments | https://api.github.com/repos/huggingface/datasets/issues/3229/events | https://github.com/huggingface/datasets/pull/3229 | 1,046,706,425 | PR_kwDODunzps4uMKsx | 3,229 | Fix URL in CITATION file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,279,475,000 | 1,636,279,486,000 | 1,636,279,485,000 | MEMBER | null | Currently the BibTeX citation parsed from the CITATION file has the wrong URL (it shows the repo URL instead of the proceedings paper URL):
```
@inproceedings{Lhoest_Datasets_A_Community_2021,
author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
month = {11},
pages = {175--184},
publisher = {Association for Computational Linguistics},
title = {{Datasets: A Community Library for Natural Language Processing}},
url = {https://github.com/huggingface/datasets},
year = {2021}
}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3229/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3229",
"html_url": "https://github.com/huggingface/datasets/pull/3229",
"diff_url": "https://github.com/huggingface/datasets/pull/3229.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3229.patch",
"merged_at": 1636279485000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3228/comments | https://api.github.com/repos/huggingface/datasets/issues/3228/events | https://github.com/huggingface/datasets/pull/3228 | 1,046,702,143 | PR_kwDODunzps4uMJ58 | 3,228 | Add CITATION file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,636,278,019,000 | 1,636,278,707,000 | 1,636,278,706,000 | MEMBER | null | Add CITATION file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3228/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3228/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3228",
"html_url": "https://github.com/huggingface/datasets/pull/3228",
"diff_url": "https://github.com/huggingface/datasets/pull/3228.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3228.patch",
"merged_at": 1636278706000
} | true |