url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3932/comments | https://api.github.com/repos/huggingface/datasets/issues/3932/events | https://github.com/huggingface/datasets/pull/3932 | 1,170,221,773 | PR_kwDODunzps40fd0T | 3,932 | Create SARI metric card | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,376,643,000 | 1,647,625,021,000 | 1,647,624,775,000 | CONTRIBUTOR | null | SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3932/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3932",
"html_url": "https://github.com/huggingface/datasets/pull/3932",
"diff_url": "https://github.com/huggingface/datasets/pull/3932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3932.patch",
"merged_at": 1647624775000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3931/comments | https://api.github.com/repos/huggingface/datasets/issues/3931/events | https://github.com/huggingface/datasets/pull/3931 | 1,170,097,208 | PR_kwDODunzps40fBjx | 3,931 | Add align_labels_with_mapping docs | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,372,297,000 | 1,647,620,911,000 | 1,647,620,673,000 | MEMBER | null | This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko).
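For reference, a minimal sketch of how the documented function is called (the mapping below is illustrative, built from the `poem_sentiment` label names, and is my assumption rather than the exact code sample from this PR):

```python
from datasets import load_dataset

ds = load_dataset("poem_sentiment", split="train")
# Hypothetical target mapping: align the dataset's label ids with a model's.
label2id = {"negative": 0, "positive": 1, "no_impact": 2, "mixed": 3}
ds = ds.align_labels_with_mapping(label2id, "label")
```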
For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3931/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3931",
"html_url": "https://github.com/huggingface/datasets/pull/3931",
"diff_url": "https://github.com/huggingface/datasets/pull/3931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3931.patch",
"merged_at": 1647620673000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3930/comments | https://api.github.com/repos/huggingface/datasets/issues/3930/events | https://github.com/huggingface/datasets/pull/3930 | 1,170,087,793 | PR_kwDODunzps40e_fb | 3,930 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,371,819,000 | 1,649,085,795,000 | 1,649,085,448,000 | CONTRIBUTOR | null | Creating a README for IndicGLUE
cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3930/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3930",
"html_url": "https://github.com/huggingface/datasets/pull/3930",
"diff_url": "https://github.com/huggingface/datasets/pull/3930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3930.patch",
"merged_at": 1649085448000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3929/comments | https://api.github.com/repos/huggingface/datasets/issues/3929/events | https://github.com/huggingface/datasets/issues/3929 | 1,170,066,235 | I_kwDODunzps5Fvcs7 | 3,929 | Load a local dataset twice | {
"login": "caush",
"id": 28349961,
"node_id": "MDQ6VXNlcjI4MzQ5OTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/28349961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caush",
"html_url": "https://github.com/caush",
"followers_url": "https://api.github.com/users/caush/followers",
"following_url": "https://api.github.com/users/caush/following{/other_user}",
"gists_url": "https://api.github.com/users/caush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caush/subscriptions",
"organizations_url": "https://api.github.com/users/caush/orgs",
"repos_url": "https://api.github.com/users/caush/repos",
"events_url": "https://api.github.com/users/caush/events{/privacy}",
"received_events_url": "https://api.github.com/users/caush/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @caush, thanks for reporting:\r\n\r\nIn order to load local CSV files, you can use our \"csv\" loading script: https://huggingface.co/docs/datasets/loading#csv\r\n```python\r\ndataset = load_dataset(\"csv\", data_files=[\"data/file1.csv\", \"data/file2.csv\"])\r\n```\r\nOR:\r\n```python\r\ndataset = load_dataset(\"csv\", data_dir=\"data\")\r\n```\r\n\r\nAlternatively, you may also use:\r\n```python\r\ndataset = load_dataset(\"data\")"
] | 1,647,370,766,000 | 1,647,424,509,000 | 1,647,424,446,000 | NONE | null | ## Describe the bug
Loading a local "dataset" composed of two CSV files returns each row twice.
## Steps to reproduce the bug
Put the two attached files (linked below) in a directory named "Data", then in Python:
```python
import datasets as ds

ds.load_dataset('Data', data_files={'file1.csv', 'file2.csv'})
```
## Expected results
Should give something like this (each file has only one data row):
```
Title, clicks
Truc et astuce, 123
Machin, 12
```
## Actual results
Instead, it gives:
```
Title, clicks
Truc et astuce, 123
Machin, 12
Truc et astuce, 123
Machin, 12
```
## Environment info
[file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv)
[file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv)
- `datasets` version: 2.0.0
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3929/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3928/comments | https://api.github.com/repos/huggingface/datasets/issues/3928/events | https://github.com/huggingface/datasets/issues/3928 | 1,170,017,132 | I_kwDODunzps5FvQts | 3,928 | Frugal score deprecations | {
"login": "Ierezell",
"id": 30974685,
"node_id": "MDQ6VXNlcjMwOTc0Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/30974685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ierezell",
"html_url": "https://github.com/Ierezell",
"followers_url": "https://api.github.com/users/Ierezell/followers",
"following_url": "https://api.github.com/users/Ierezell/following{/other_user}",
"gists_url": "https://api.github.com/users/Ierezell/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ierezell/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ierezell/subscriptions",
"organizations_url": "https://api.github.com/users/Ierezell/orgs",
"repos_url": "https://api.github.com/users/Ierezell/repos",
"events_url": "https://api.github.com/users/Ierezell/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ierezell/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Ierezell, thanks for reporting.\r\n\r\nI'm making a PR to suppress those logs from the terminal. "
] | 1,647,367,842,000 | 1,647,506,244,000 | 1,647,506,244,000 | NONE | null | ## Describe the bug
The FrugalScore metric produces very verbose output, with warnings and progress logs that could easily be suppressed.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets.load import load_metric
frugal = load_metric("frugalscore")
frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"])
```
## Expected results
```
{'scores': [0.9946]}
```
## Actual results
```
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).
100%|██████████| 1/1 [00:00<00:00, 864.09ba/s]
Using amp half precision backend
The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message.
***** Running Prediction *****
Num examples = 1
Batch size = 64
100%|██████████| 1/1 [00:00<00:00, 4644.85it/s]
{'scores': [0.9946]}
```
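As a possible interim workaround (my suggestion, not necessarily what the linked PR does): most of these messages come from `transformers`, so lowering its logging verbosity before computing the metric should silence them, though the tqdm progress bars may still show:

```python
from transformers.utils import logging as hf_logging
from datasets.load import load_metric

hf_logging.set_verbosity_error()  # only surface errors from transformers

frugal = load_metric("frugalscore")
print(frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]))
```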
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3928/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3927/comments | https://api.github.com/repos/huggingface/datasets/issues/3927/events | https://github.com/huggingface/datasets/pull/3927 | 1,170,016,465 | PR_kwDODunzps40ewN2 | 3,927 | Update main readme | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"What do you think @albertvillanova ?"
] | 1,647,367,799,000 | 1,648,548,827,000 | 1,648,548,500,000 | MEMBER | null | The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3927/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3927",
"html_url": "https://github.com/huggingface/datasets/pull/3927",
"diff_url": "https://github.com/huggingface/datasets/pull/3927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3927.patch",
"merged_at": 1648548500000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3926/comments | https://api.github.com/repos/huggingface/datasets/issues/3926/events | https://github.com/huggingface/datasets/pull/3926 | 1,169,945,052 | PR_kwDODunzps40ehVP | 3,926 | Doc maintenance | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3926). All of your documentation changes will be reflected on that endpoint."
] | 1,647,363,646,000 | 1,647,372,435,000 | 1,647,372,432,000 | MEMBER | null | This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3926/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3926",
"html_url": "https://github.com/huggingface/datasets/pull/3926",
"diff_url": "https://github.com/huggingface/datasets/pull/3926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3926.patch",
"merged_at": 1647372432000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3925/comments | https://api.github.com/repos/huggingface/datasets/issues/3925/events | https://github.com/huggingface/datasets/pull/3925 | 1,169,913,769 | PR_kwDODunzps40eaq8 | 3,925 | Fix main_classes docs index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm it's still not good \r\n![image](https://user-images.githubusercontent.com/42851186/158429361-e19ce25b-c259-4ded-8473-075deafdbb96.png)\r\n\r\nany idea what could cause this ?",
"Ok fixed :)"
] | 1,647,362,026,000 | 1,647,956,951,000 | 1,647,956,644,000 | MEMBER | null | Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types
![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
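For reference, a well-formed example of the Translation feature types of the kind those docstrings carry (a minimal sketch, not the exact docstring example that broke the index):

```python
from datasets import Features, Translation, TranslationVariableLanguages

# Fixed-language and variable-language translation feature definitions.
features = Features({
    "translation": Translation(languages=["en", "fr"]),
    "translation_variable": TranslationVariableLanguages(languages=["en", "fr", "de"]),
})
```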
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3925/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3925",
"html_url": "https://github.com/huggingface/datasets/pull/3925",
"diff_url": "https://github.com/huggingface/datasets/pull/3925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3925.patch",
"merged_at": 1647956644000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3924/comments | https://api.github.com/repos/huggingface/datasets/issues/3924/events | https://github.com/huggingface/datasets/pull/3924 | 1,169,805,813 | PR_kwDODunzps40eED5 | 3,924 | Document cases for github datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3924). All of your documentation changes will be reflected on that endpoint.",
"Yay!"
] | 1,647,357,010,000 | 1,649,183,595,000 | 1,647,358,883,000 | MEMBER | null | In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](hf.co/datasets), but users can still add a dataset on github in some cases.
I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on github:
- when you need the dataset to be reviewed
- when you need long-term maintenance from the HF team
- when there's no clear org name / namespace that you can put the dataset under | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3924/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3924/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3924",
"html_url": "https://github.com/huggingface/datasets/pull/3924",
"diff_url": "https://github.com/huggingface/datasets/pull/3924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3924.patch",
"merged_at": 1647358883000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3923/comments | https://api.github.com/repos/huggingface/datasets/issues/3923/events | https://github.com/huggingface/datasets/pull/3923 | 1,169,773,869 | PR_kwDODunzps40d9YU | 3,923 | Add methods to IterableDatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3923). All of your documentation changes will be reflected on that endpoint."
] | 1,647,355,563,000 | 1,647,362,708,000 | 1,647,362,706,000 | MEMBER | null | Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862, I added several methods to `IterableDatasetDict` (a brief usage sketch follows the list):
- map
- filter
- shuffle
- with_format
- cast
- cast_column
- remove_columns
- rename_column
- rename_columns
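A minimal usage sketch (the dataset name and column names below are only an example, not taken from this PR):

```python
from datasets import load_dataset

# Streaming returns an IterableDatasetDict, so these methods now apply to
# every split at once.
ds = load_dataset("ag_news", streaming=True)
ds = ds.rename_column("text", "content")
ds = ds.filter(lambda example: example["label"] == 0)
print(next(iter(ds["train"])))
```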
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3923/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3923",
"html_url": "https://github.com/huggingface/datasets/pull/3923",
"diff_url": "https://github.com/huggingface/datasets/pull/3923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3923.patch",
"merged_at": 1647362706000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3922/comments | https://api.github.com/repos/huggingface/datasets/issues/3922/events | https://github.com/huggingface/datasets/pull/3922 | 1,169,761,293 | PR_kwDODunzps40d6vm | 3,922 | Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3922). All of your documentation changes will be reflected on that endpoint.",
"Unrelated CI test failure. This PR can be merged."
] | 1,647,354,988,000 | 1,647,360,424,000 | 1,647,360,423,000 | MEMBER | null | Fix #2957 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3922/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3922",
"html_url": "https://github.com/huggingface/datasets/pull/3922",
"diff_url": "https://github.com/huggingface/datasets/pull/3922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3922.patch",
"merged_at": 1647360422000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3921/comments | https://api.github.com/repos/huggingface/datasets/issues/3921/events | https://github.com/huggingface/datasets/pull/3921 | 1,169,749,338 | PR_kwDODunzps40d4Mk | 3,921 | Fix NonMatchingChecksumError in CRD3 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.",
"Unrelated test failure. This PR can be merged."
] | 1,647,354,434,000 | 1,647,359,667,000 | 1,647,359,666,000 | MEMBER | null | Fix #3051 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3921/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3921",
"html_url": "https://github.com/huggingface/datasets/pull/3921",
"diff_url": "https://github.com/huggingface/datasets/pull/3921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3921.patch",
"merged_at": 1647359666000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3920/comments | https://api.github.com/repos/huggingface/datasets/issues/3920/events | https://github.com/huggingface/datasets/issues/3920 | 1,169,532,807 | I_kwDODunzps5FtaeH | 3,920 | 'datasets.features' is not a package | {
"login": "Arij-Aladel",
"id": 68355048,
"node_id": "MDQ6VXNlcjY4MzU1MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/68355048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Arij-Aladel",
"html_url": "https://github.com/Arij-Aladel",
"followers_url": "https://api.github.com/users/Arij-Aladel/followers",
"following_url": "https://api.github.com/users/Arij-Aladel/following{/other_user}",
"gists_url": "https://api.github.com/users/Arij-Aladel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Arij-Aladel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Arij-Aladel/subscriptions",
"organizations_url": "https://api.github.com/users/Arij-Aladel/orgs",
"repos_url": "https://api.github.com/users/Arij-Aladel/repos",
"events_url": "https://api.github.com/users/Arij-Aladel/events{/privacy}",
"received_events_url": "https://api.github.com/users/Arij-Aladel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @Arij-Aladel,\r\n\r\nYou are using a very old version of our library `datasets`: 1.8.0\r\nCurrent version is 2.0.0 (and the previous one was 1.18.4)\r\n\r\nPlease, try to update `datasets` library and check if the problem persists:\r\n```shell\r\n/env/bin/pip install -U datasets",
"The problem I can no I have build my project on this version and old version on transformers. I have preprocessed the data again to use it. Thank for your reply"
] | 1,647,342,863,000 | 1,647,422,232,000 | 1,647,422,232,000 | NONE | null | @albertvillanova
Python 3.9
OS: Ubuntu 20.04
In a conda environment.
torch was installed with:
```
/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html
```
The datasets package was installed with:
```
/env/bin/pip install datasets==1.8.0
```
While running the code, I get this error:
```
[6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
[6]<stderr>: return super().find_class(mod_name, name)
[6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
More precisely, this error appears when calling `torch.load('data_file.pt')`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
result = unpickler.load()
File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
return super().find_class(mod_name, name)
ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package
```
Why am I getting this error?
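For context, my reading of the failure (consistent with the maintainer's reply above, but an inference on my part): `torch.load` unpickles objects by module path, and the saved file references `datasets.features.features`, a path that only exists in newer `datasets` releases where `features` became a package; in `datasets` 1.8.0 it is still a single module. A quick check along these lines:

```python
import importlib

# If this import fails, the installed `datasets` release predates the
# package layout that the pickled file references.
try:
    importlib.import_module("datasets.features.features")
    print("datasets layout matches the pickled reference")
except ModuleNotFoundError:
    print("older layout: upgrade (`pip install -U datasets`) before calling torch.load")
```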
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3920/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3919/comments | https://api.github.com/repos/huggingface/datasets/issues/3919/events | https://github.com/huggingface/datasets/issues/3919 | 1,169,497,210 | I_kwDODunzps5FtRx6 | 3,919 | AttributeError: 'DatasetDict' object has no attribute 'features' | {
"login": "jswapnil10",
"id": 48145785,
"node_id": "MDQ6VXNlcjQ4MTQ1Nzg1",
"avatar_url": "https://avatars.githubusercontent.com/u/48145785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jswapnil10",
"html_url": "https://github.com/jswapnil10",
"followers_url": "https://api.github.com/users/jswapnil10/followers",
"following_url": "https://api.github.com/users/jswapnil10/following{/other_user}",
"gists_url": "https://api.github.com/users/jswapnil10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jswapnil10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jswapnil10/subscriptions",
"organizations_url": "https://api.github.com/users/jswapnil10/orgs",
"repos_url": "https://api.github.com/users/jswapnil10/repos",
"events_url": "https://api.github.com/users/jswapnil10/events{/privacy}",
"received_events_url": "https://api.github.com/users/jswapnil10/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"You are likely trying to get the `features` from a `DatasetDict`, a dictionary containing `Datasets`. You probably first want to index into a particular split from your `DatasetDict` i.e. `dataset['train'].features`. \r\n\r\nFor example \r\n\r\n```python \r\nds = load_dataset('mnist')\r\nds.features\r\n```\r\nReturns \r\n```python\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n[<ipython-input-39-791c1f9df6c2>](https://localhost:8080/#) in <module>()\r\n----> 1 ds.features\r\n\r\nAttributeError: 'DatasetDict' object has no attribute 'features'\r\n```\r\n\r\nIf we look at the dataset variable, we see it is a `DatasetDict`:\r\n\r\n```python \r\nprint(ds)\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 60000\r\n })\r\n test: Dataset({\r\n features: ['image', 'label'],\r\n num_rows: 10000\r\n })\r\n})\r\n```\r\n\r\nWe can grab the features from a split by indexing into `train`:\r\n```python\r\nds['train'].features\r\n{'image': Image(decode=True, id=None),\r\n 'label': ClassLabel(num_classes=10, names=['0', '1', '2', '3', '4', '5', '6', '7', '8', '9'], id=None)}\r\n```\r\n\r\nHope that helps ",
"Yes, Thanks for that clarification,"
] | 1,647,341,219,000 | 1,647,490,574,000 | 1,647,490,574,000 | NONE | null | ## Describe the bug
I receive this error when trying to check the Dataset features.
## Steps to reproduce the bug
```python
from datasets import Dataset

dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']])
dataset.features
```
## Expected results
## Actual results
I get the following error:
AttributeError: 'DatasetDict' object has no attribute 'features'
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3919/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3918/comments | https://api.github.com/repos/huggingface/datasets/issues/3918/events | https://github.com/huggingface/datasets/issues/3918 | 1,169,366,117 | I_kwDODunzps5Fsxxl | 3,918 | datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files | {
"login": "willowdong",
"id": 51409295,
"node_id": "MDQ6VXNlcjUxNDA5Mjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/51409295?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willowdong",
"html_url": "https://github.com/willowdong",
"followers_url": "https://api.github.com/users/willowdong/followers",
"following_url": "https://api.github.com/users/willowdong/following{/other_user}",
"gists_url": "https://api.github.com/users/willowdong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willowdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willowdong/subscriptions",
"organizations_url": "https://api.github.com/users/willowdong/orgs",
"repos_url": "https://api.github.com/users/willowdong/repos",
"events_url": "https://api.github.com/users/willowdong/events{/privacy}",
"received_events_url": "https://api.github.com/users/willowdong/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi @willowdong! These issues were fixed on master. We will have a new release of `datasets` later today. In the meantime, you can avoid these issues by installing `datasets` from master as follows:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"You should force redownload:\r\n```python\r\ndataset = load_dataset(\"multi_news\", download_mode=\"force_redownload\")\r\ndataset_2 = load_dataset(\"reddit_tifu\", \"long\", download_mode=\"force_redownload\")",
"Fixed by:\r\n- #3787 \r\n- #3843"
] | 1,647,334,425,000 | 1,647,445,018,000 | 1,647,352,885,000 | NONE | null | ## Describe the bug
Can't load the `multi_news` or `reddit_tifu` datasets.
## Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = load_dataset('multi_news')
dataset_2 = load_dataset("reddit_tifu", "long")
```
## Actual results
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
```
## Environment info
- `datasets` version: 1.18.4
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.0
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3918/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3918/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3917/comments | https://api.github.com/repos/huggingface/datasets/issues/3917/events | https://github.com/huggingface/datasets/pull/3917 | 1,168,906,154 | PR_kwDODunzps40bGZA | 3,917 | Create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3917). All of your documentation changes will be reflected on that endpoint."
] | 1,647,292,090,000 | 1,647,539,139,000 | 1,647,539,139,000 | CONTRIBUTOR | null | This follows the same structure as the GLUE metric card, hope that works for everyone :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3917/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3917",
"html_url": "https://github.com/huggingface/datasets/pull/3917",
"diff_url": "https://github.com/huggingface/datasets/pull/3917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3917.patch",
"merged_at": 1647539139000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3916/comments | https://api.github.com/repos/huggingface/datasets/issues/3916/events | https://github.com/huggingface/datasets/pull/3916 | 1,168,869,191 | PR_kwDODunzps40a-cR | 3,916 | Create README.md for GLUE | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3916). All of your documentation changes will be reflected on that endpoint."
] | 1,647,289,642,000 | 1,647,364,017,000 | 1,647,364,016,000 | CONTRIBUTOR | null | I still have a hesitation regarding the format of inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify.
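For what it's worth, a minimal sketch of how I'd expect the inputs to look (flat lists of integer label ids; this is my assumption, to be confirmed):

```python
from datasets import load_metric

glue_metric = load_metric("glue", "mrpc")
# predictions and references as flat lists of label ids
results = glue_metric.compute(predictions=[0, 1], references=[0, 1])
print(results)  # e.g. {'accuracy': 1.0, 'f1': 1.0}
```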
Also tagging @yjernite for the Limitations section. Happy to hear your thoughts! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3916/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3916",
"html_url": "https://github.com/huggingface/datasets/pull/3916",
"diff_url": "https://github.com/huggingface/datasets/pull/3916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3916.patch",
"merged_at": 1647364016000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3915/comments | https://api.github.com/repos/huggingface/datasets/issues/3915/events | https://github.com/huggingface/datasets/pull/3915 | 1,168,848,101 | PR_kwDODunzps40a54e | 3,915 | Metric card template | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances inputs `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference in the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Looks like a great start! I have a general comment and a few specific comments.\r\n\r\nMy general comment is I wonder if we need a post for this template and the data and model card templates (or a combined one?) explaining why this documentation is needed and how it serves both the writer and the audience.\r\n\r\nSpecific comments:\r\n- Maybe we can add some more desiderata to the overview instructions like: what task was the metric originally developed for, what tasks is it used for now, what is the range of possible outputs?\r\n- In the data card, we call the data instances `fields`. It might be good to synchronize on that across the templates and change `input_name` to `input_field`? Also are the instructions for the `input_name` complete? It ends with 'In the *' and I'm not sure what that refers to.\r\n- 'Values' seems ambiguous to me, maybe 'scores' would be more explicit? Also could add a request for the range of possible outputs.\r\n- We could add a reference to the examples section to the overview section if that's where further explanation should go. Suggestion to add: 'Provide a range of examples that show both typical and atypical results' or something similar.\r\n- I'm not sure if we'd want to add this to the example section or make a new section, but it would be good to prompt somewhere for links to specific use cases in HF\r\n- In the limitations and bias section, add 'with links'\r\n",
"Thanks for your feedback, @mcmillanmajora ! I totally agree that we should write a post -- we were going to write one up when we are done with a good chunk of the metric cards, but we can also do that earlier :smile: \r\n\r\nWith regards to your more specific comments:\r\n\r\n- It is our intention to put what the metric was developed for (whether it is a specific task or dataset, for example). You can see the [WER](https://github.com/huggingface/datasets/tree/master/metrics/wer) metric card for that.\r\n- `input_field` works for me!\r\n- the values aren't always scores, it's more like the values the metric can take. And it does include the range of possible values, including the max and min, that are outputted.\r\n- I like the suggestion to add: 'Provide a range of examples that show both typical and atypical results' :hugs: \r\n- I have been putting specific use cases in 'Further references', just because there isn't always something to put there, especially for less popular metrics",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! ",
"Oh cool! I was just looking at the template, it definitely helps seeing an example metric card. Based on just the instructions, I had assumed that examples meant research papers where the metric was used to evaluate a model, but I like the explicit coding examples! "
] | 1,647,288,428,000 | 1,651,661,049,000 | 1,651,660,626,000 | CONTRIBUTOR | null | Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments by @lhoestq and others (thank you!).
All feedback is welcome, but I am especially curious about feedback in terms of:
- things that should be included but aren't
- things that are included but should be changed or removed
- the instructions I included, and whether they should be added to, clarified, or deleted altogether | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3915/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3915",
"html_url": "https://github.com/huggingface/datasets/pull/3915",
"diff_url": "https://github.com/huggingface/datasets/pull/3915.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3915.patch",
"merged_at": 1651660626000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3914/comments | https://api.github.com/repos/huggingface/datasets/issues/3914/events | https://github.com/huggingface/datasets/pull/3914 | 1,168,777,880 | PR_kwDODunzps40aq2r | 3,914 | Use templates for doc-builidng jobs | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3914). All of your documentation changes will be reflected on that endpoint.",
"You can ignore the CI failures btw, they're unrelated to this PR"
] | 1,647,283,986,000 | 1,647,529,379,000 | 1,647,529,378,000 | MEMBER | null | This PR updates the jobs for all doc-building related things by using the templates introduced on `doc-builder`. By putting those once there, we make sure every repo gets the latest fixes on the doc-building github actions :-)
Note: all libraries must share the same docker image for those doc-building jobs. For now, the one used (`huggingface/transformers-doc-builder`) contains all the extra steps of the datasets install needed for doc-building (mainly libsndfile), but if in the future some additional steps are necessary on top of `pip install -e .[dev]`, this docker image will need to be updated with the extra deps. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3914/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3914",
"html_url": "https://github.com/huggingface/datasets/pull/3914",
"diff_url": "https://github.com/huggingface/datasets/pull/3914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3914.patch",
"merged_at": 1647529378000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3913/comments | https://api.github.com/repos/huggingface/datasets/issues/3913/events | https://github.com/huggingface/datasets/pull/3913 | 1,168,723,950 | PR_kwDODunzps40afYJ | 3,913 | Deterministic split order in DatasetDict.map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3913). All of your documentation changes will be reflected on that endpoint.",
"I'm surprised this is needed because the order of the `dict` keys is deterministic as of Python 3.6 (documented in 3.7). Is there a reproducer for this behavior? I wouldn't make this change unless it's absolutely needed because `sorted` modifies the initial order of the keys.",
"Indeed this doesn't fix the issue apparently. Actually this is probably because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer)."
] | 1,647,280,717,000 | 1,647,341,115,000 | 1,647,341,115,000 | MEMBER | null | The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed
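For illustration, here is a minimal sketch of the idea (a sketch under my own assumptions, not the actual diff of this PR): iterate over the splits in sorted key order so that a stateful processing function always sees them in the same order.
```python
from datasets import DatasetDict

def map_in_deterministic_order(dataset_dict: DatasetDict, function, **kwargs) -> DatasetDict:
    # Iterating over sorted split names (e.g. "test", "train", "validation")
    # makes the processing order reproducible across runs and across
    # differently-ordered DatasetDict objects.
    return DatasetDict({split: dataset_dict[split].map(function, **kwargs) for split in sorted(dataset_dict)})
```
Note that, as raised in the review comments above, `sorted` changes the initial order of the keys, which is why this exact approach was questioned.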
Close https://github.com/huggingface/datasets/issues/3847 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3913/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3913",
"html_url": "https://github.com/huggingface/datasets/pull/3913",
"diff_url": "https://github.com/huggingface/datasets/pull/3913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3913.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3912/comments | https://api.github.com/repos/huggingface/datasets/issues/3912/events | https://github.com/huggingface/datasets/pull/3912 | 1,168,720,098 | PR_kwDODunzps40aekr | 3,912 | add draft of registering function for pandas | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3912). All of your documentation changes will be reflected on that endpoint.",
"That's cool ! Though I would expect such an integration to only require `huggingface_hub`, not the full `datasets` library. \r\n Indeed if users want to use the `datasets` lib they could just to `Dataset.from_pandas(df).push_to_hub()` already. Therefore I would explore something that doesn't not necessarily requires `datasets`.\r\n\r\nFor other could storage solutions (S3, GCS, etc.), pandas allows users to pass URIs like `s3://bucket-name/path/data.csv` to the `read_xxx` and `to_xxx` (for csv, parquet, json, etc). It also support passing the **root directory** like `s3://bucket-name/dataset-dir` instead of a single file name.\r\n\r\nIn the Hugging Face Hub case, we have one dataset = one repository. We can enter pandas' paradigm by saying one dataset = one repository = one root directory. Here is what we could have:\r\n\r\n### push to Hub:\r\n```python\r\n\"\"\"\r\nDemo script for writing a pandas data frame to a CSV file on HF using fsspec-supported pandas APIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\nbooks_df = pd.DataFrame(\r\n data={\"Title\": [\"Book I\", \"Book II\", \"Book III\"], \"Price\": [56.6, 59.87, 74.54]},\r\n columns=[\"Title\", \"Price\"],\r\n)\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df.to_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n index=False,\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\n```\r\n\r\n### load from Hub:\r\n```python\r\n\"\"\"\r\nDemo script for reading a CSV file from HF into a pandas data frame using fsspec-supported pandas\r\nAPIs\r\n\"\"\"\r\nimport pandas as pd\r\n\r\nHF_USER = os.getenv(\"HF_USER\")\r\nHF_TOKEN = os.getenv(\"HF_TOKEN\")\r\n\r\ndataset_name = \"books1\"\r\n\r\nbooks_df = pd.read_csv(\r\n f\"hf://{HF_USER}/{dataset_name}\",\r\n storage_options={\r\n \"repo_type\": \"dataset\",\r\n \"token\": HF_TOKEN,\r\n },\r\n)\r\n\r\nprint(books_df)\r\n```\r\n\r\nAnd you could do the same with Parquet data using `read/to_parquet` or other formats. Formats like CSV, Parquet or JSON Lines would work out of the box with `datasets`. This API would also allow anyone to use Dask with the Hugging Face Hub for example.\r\n\r\nWhat do you think ?"
] | 1,647,280,469,000 | 1,647,877,299,000 | null | MEMBER | null | This PR adds a register function for `pandas`. It allows users to directly push `DataFrame` objects to the hub and, in turn, to load datasets from the hub into a `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to easily push `DataFrame`s to the hub.
Here is an example:
```python
import pandas as pd
from datasets import register_pandas
register_pandas()
# push to hub
df = pd.DataFrame.from_dict({"test": [1,2,3]})
df.push_to_hub("my_test")
# load from hub
df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test")
```
It follows a similar philosophy to the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository.
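For context, here is a hedged sketch of what such a register function might do under the hood (my assumption; the method names come from the example above, but the exact signatures and implementation are not necessarily those of this PR). It attaches the two methods to `pandas.DataFrame`, similar in spirit to how `tqdm.pandas()` patches in `progress_apply`:
```python
import pandas as pd
from datasets import Dataset, load_dataset

def register_pandas():
    def push_to_hub(self, repo_id, **kwargs):
        # Convert the DataFrame to a datasets.Dataset and push it to the Hub
        Dataset.from_pandas(self).push_to_hub(repo_id, **kwargs)

    def load_from_hub(repo_id, split="train", **kwargs):
        # Load a dataset from the Hub and convert it back to a DataFrame
        return load_dataset(repo_id, split=split, **kwargs).to_pandas()

    pd.DataFrame.push_to_hub = push_to_hub
    pd.DataFrame.load_from_hub = staticmethod(load_from_hub)
```
With a definition along these lines, the `df.push_to_hub(...)` and `pd.DataFrame.load_from_hub(...)` calls in the example above would behave as shown.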
This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on this: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3912/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3912",
"html_url": "https://github.com/huggingface/datasets/pull/3912",
"diff_url": "https://github.com/huggingface/datasets/pull/3912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3912.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3911/comments | https://api.github.com/repos/huggingface/datasets/issues/3911/events | https://github.com/huggingface/datasets/pull/3911 | 1,168,652,374 | PR_kwDODunzps40aQHz | 3,911 | Create README.md for CER metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,276,891,000 | 1,647,539,380,000 | 1,647,539,154,000 | CONTRIBUTOR | null | Initial proposal for a CER metric card
cc @patrickvonplaten - wdyt this time around? :smile: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3911/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3911",
"html_url": "https://github.com/huggingface/datasets/pull/3911",
"diff_url": "https://github.com/huggingface/datasets/pull/3911.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3911.patch",
"merged_at": 1647539154000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3910/comments | https://api.github.com/repos/huggingface/datasets/issues/3910/events | https://github.com/huggingface/datasets/pull/3910 | 1,168,579,694 | PR_kwDODunzps40aAiX | 3,910 | Fix text loader to split only on universal newlines | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3910). All of your documentation changes will be reflected on that endpoint.",
"Looks like the test needs to be updated for windows ^^'",
"I don't think this is the same issue as in https://github.com/oscar-corpus/corpus/issues/18, where the OSCAR metadata has line offsets that use only `\\n` as the newline marker to count lines, not `\\r\\n` or `\\r`.\r\n\r\nIt looks like the OSCAR data loader is opening the data files with `gzip.open` directly and I don't think this text loader is used, but I'm not familiar with a lot of `datasets` internals so I could be mistaken?",
"You are right @adrianeboyd.\r\n\r\nThis PR fixes #3729.\r\n\r\nAdditionally, this PR is somehow related to the OSCAR issue. However, the OSCAR issue have multiple root causes: one is the offset initialization (as you pointed out); other is similar to this case: Unicode newlines are not properly handled.\r\n\r\nI will make a change proposal for OSCAR this afternoon.",
"@lhoestq I'm working on fixing the Windows tests on my Windows machine...",
"I finally changed the approach in order to avoid having \"\\r\\n\" and \"\\r\" line breaks in Python `str` read from files on Windows/old Macintosh machines."
] | 1,647,273,298,000 | 1,647,360,971,000 | 1,647,360,969,000 | MEMBER | null | Currently, the `text` loader breaks lines on a superset of universal newlines, which also contains Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines
However, the expected behavior is to get the lines split only on the universal newlines: "\n", "\r\n" and "\r".
See: oscar-corpus/corpus#18
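For illustration, here is a minimal sketch of the intended behavior (not the loader's actual implementation): split only on `\r\n`, `\r` and `\n`, leaving Unicode line boundaries such as `\u2028` intact.
```python
import re

def split_universal_newlines(text: str):
    # str.splitlines() would additionally break on "\u2028", "\u2029", "\x85", etc.;
    # this regex splits on the three universal newlines only.
    return re.split(r"\r\n|\r|\n", text)

assert split_universal_newlines("a\nb\r\nc\rd") == ["a", "b", "c", "d"]
assert split_universal_newlines("first\u2028second") == ["first\u2028second"]
```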
Fix #3729. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3910/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3910",
"html_url": "https://github.com/huggingface/datasets/pull/3910",
"diff_url": "https://github.com/huggingface/datasets/pull/3910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3910.patch",
"merged_at": 1647360969000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3909/comments | https://api.github.com/repos/huggingface/datasets/issues/3909/events | https://github.com/huggingface/datasets/issues/3909 | 1,168,578,058 | I_kwDODunzps5FpxYK | 3,909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | {
"login": "aliceinland",
"id": 30385910,
"node_id": "MDQ6VXNlcjMwMzg1OTEw",
"avatar_url": "https://avatars.githubusercontent.com/u/30385910?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aliceinland",
"html_url": "https://github.com/aliceinland",
"followers_url": "https://api.github.com/users/aliceinland/followers",
"following_url": "https://api.github.com/users/aliceinland/following{/other_user}",
"gists_url": "https://api.github.com/users/aliceinland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aliceinland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aliceinland/subscriptions",
"organizations_url": "https://api.github.com/users/aliceinland/orgs",
"repos_url": "https://api.github.com/users/aliceinland/repos",
"events_url": "https://api.github.com/users/aliceinland/events{/privacy}",
"received_events_url": "https://api.github.com/users/aliceinland/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?",
"I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor\r\nimport soundfile as sf\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_array(batch):\r\n speech, _ = sf.read(batch[\"file\"])\r\n batch[\"speech\"] = speech\r\n return batch\r\n\r\nlibrispeech_eval = librispeech_eval.map(map_to_array)\r\n\r\ndef map_to_pred(batch):\r\n features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_ids=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=[\"speech\"])\r\n\r\nprint(\"WER:\", wer(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```\r\n\r\nThe code is taken directly from \"https://huggingface.co/facebook/s2t-small-librispeech-asr\".\r\n\r\nThe short error code is \"RuntimeError: Error opening '6930-75918-0000.flac': System error.\" (it can't find the first file), and I agree, I can't find the file either. 
The dataset has downloaded correctly (it says), but on the location, there are only \".arrow\" files, no \".flac\" files.\r\n\r\n**Error message:**\r\n\r\n```python\r\nRuntimeError Traceback (most recent call last)\r\nInput In [15], in <cell line: 16>()\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n---> 16 librispeech_eval = librispeech_eval.map(map_to_array)\r\n 18 def map_to_pred(batch):\r\n 19 features = processor(batch[\"speech\"], sampling_rate=16000, padding=True, return_tensors=\"pt\")\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1953, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)\r\n 1950 disable_tqdm = not logging.is_progress_bar_enabled()\r\n 1952 if num_proc is None or num_proc == 1:\r\n-> 1953 return self._map_single(\r\n 1954 function=function,\r\n 1955 with_indices=with_indices,\r\n 1956 with_rank=with_rank,\r\n 1957 input_columns=input_columns,\r\n 1958 batched=batched,\r\n 1959 batch_size=batch_size,\r\n 1960 drop_last_batch=drop_last_batch,\r\n 1961 remove_columns=remove_columns,\r\n 1962 keep_in_memory=keep_in_memory,\r\n 1963 load_from_cache_file=load_from_cache_file,\r\n 1964 cache_file_name=cache_file_name,\r\n 1965 writer_batch_size=writer_batch_size,\r\n 1966 features=features,\r\n 1967 disable_nullable=disable_nullable,\r\n 1968 fn_kwargs=fn_kwargs,\r\n 1969 new_fingerprint=new_fingerprint,\r\n 1970 disable_tqdm=disable_tqdm,\r\n 1971 desc=desc,\r\n 1972 )\r\n 1973 else:\r\n 1975 def format_cache_file_name(cache_file_name, rank):\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:519, in transmit_tasks.<locals>.wrapper(*args, **kwargs)\r\n 517 self: \"Dataset\" = kwargs.pop(\"self\")\r\n 518 # apply actual function\r\n--> 519 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 520 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 521 for dataset in datasets:\r\n 522 # Remove task templates if a column mapping of the template is no longer valid\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:486, in transmit_format.<locals>.wrapper(*args, **kwargs)\r\n 479 self_format = {\r\n 480 \"type\": self._format_type,\r\n 481 \"format_kwargs\": self._format_kwargs,\r\n 482 \"columns\": self._format_columns,\r\n 483 \"output_all_columns\": self._output_all_columns,\r\n 484 }\r\n 485 # apply actual function\r\n--> 486 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 487 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 488 # re-apply format to the output\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\fingerprint.py:458, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)\r\n 452 kwargs[fingerprint_name] = update_fingerprint(\r\n 453 self._fingerprint, transform, kwargs_for_fingerprint\r\n 454 )\r\n 456 # Call actual function\r\n--> 458 out = func(self, *args, **kwargs)\r\n 460 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n 462 if inplace: # update after calling func so that the fingerprint doesn't change if the 
function fails\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2318, in Dataset._map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)\r\n 2316 if not batched:\r\n 2317 for i, example in enumerate(pbar):\r\n-> 2318 example = apply_function_on_filtered_inputs(example, i, offset=offset)\r\n 2319 if update_data:\r\n 2320 if i == 0:\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:2218, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)\r\n 2216 if with_rank:\r\n 2217 additional_args += (rank,)\r\n-> 2218 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)\r\n 2219 if update_data is None:\r\n 2220 # Check if the function returns updated examples\r\n 2221 update_data = isinstance(processed_inputs, (Mapping, pa.Table))\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\datasets\\arrow_dataset.py:1913, in Dataset.map.<locals>.decorate.<locals>.decorated(item, *args, **kwargs)\r\n 1909 decorated_item = (\r\n 1910 Example(item, features=self.features) if not batched else Batch(item, features=self.features)\r\n 1911 )\r\n 1912 # Use the LazyDict internally, while mapping the function\r\n-> 1913 result = f(decorated_item, *args, **kwargs)\r\n 1914 # Return a standard dict\r\n 1915 return result.data if isinstance(result, LazyDict) else result\r\n\r\nInput In [15], in map_to_array(batch)\r\n 11 def map_to_array(batch):\r\n---> 12 speech, _ = sf.read(batch[\"file\"])\r\n 13 batch[\"speech\"] = speech\r\n 14 return batch\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:256, in read(file, frames, start, stop, dtype, always_2d, fill_value, out, samplerate, channels, format, subtype, endian, closefd)\r\n 170 def read(file, frames=-1, start=0, stop=None, dtype='float64', always_2d=False,\r\n 171 fill_value=None, out=None, samplerate=None, channels=None,\r\n 172 format=None, subtype=None, endian=None, closefd=True):\r\n 173 \"\"\"Provide audio data from a sound file as NumPy array.\r\n 174 \r\n 175 By default, the whole file is read from the beginning, but the\r\n (...)\r\n 254 \r\n 255 \"\"\"\r\n--> 256 with SoundFile(file, 'r', samplerate, channels,\r\n 257 subtype, endian, format, closefd) as f:\r\n 258 frames = f._prepare_read(start, stop, frames)\r\n 259 data = f.read(frames, dtype, always_2d, fill_value, out)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)\r\n 626 self._mode = mode\r\n 627 self._info = _create_info_struct(file, mode, samplerate, channels,\r\n 628 format, subtype, endian)\r\n--> 629 self._file = self._open(file, mode_int, closefd)\r\n 630 if set(mode).issuperset('r+') and self.seekable():\r\n 631 # Move write position to 0 (like in Python file objects)\r\n 632 self.seek(0)\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1183, in SoundFile._open(self, file, mode_int, closefd)\r\n 1181 else:\r\n 1182 raise TypeError(\"Invalid file: {0!r}\".format(self.name))\r\n-> 1183 
_error_check(_snd.sf_error(file_ptr),\r\n 1184 \"Error opening {0!r}: \".format(self.name))\r\n 1185 if mode_int == _snd.SFM_WRITE:\r\n 1186 # Due to a bug in libsndfile version <= 1.0.25, frames != 0\r\n 1187 # when opening a named pipe in SFM_WRITE mode.\r\n 1188 # See http://github.com/erikd/libsndfile/issues/77.\r\n 1189 self._info.frames = 0\r\n\r\nFile C:\\ProgramData\\Miniconda3\\envs\\noise_cancel\\lib\\site-packages\\soundfile.py:1357, in _error_check(err, prefix)\r\n 1355 if err != 0:\r\n 1356 err_str = _snd.sf_error_number(err)\r\n-> 1357 raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))\r\n\r\nRuntimeError: Error opening '6930-75918-0000.flac': System error.\r\n```\r\n\r\n**Package versions:**\r\n```python\r\npython: 3.9\r\ntransformers: 4.17.0\r\ndatasets: 2.0.0\r\nSoundFile: 0.10.3.post1\r\n```\r\n",
"Hi ! In `datasets` 2.0 can access the audio array with `librispeech_eval[0][\"audio\"][\"array\"]` already, no need to use `map_to_array`. See our documentation on [how to process audio data](https://huggingface.co/docs/datasets/audio_process) :)\r\n\r\ncc @patrickvonplaten we will need to update the readme at [facebook/s2t-small-librispeech-asr](https://huggingface.co/facebook/s2t-small-librispeech-asr) as well as https://huggingface.co/docs/transformers/model_doc/speech_to_text",
"Thanks!\r\n\r\nAnd sorry for posting this problem in what turned on to be an unrelated thread.\r\n\r\nI rewrote the code, and the model works. The WER is 0.137 however, so I'm not sure if I have missed a step. I will look further into that at a later point. The transcriptions look good through manual inspection.\r\n\r\nThe rewritten code:\r\n```python\r\nfrom datasets import load_dataset, load_metric\r\nfrom transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor, Wav2Vec2Processor\r\n\r\nlibrispeech_eval = load_dataset(\"librispeech_asr\", \"clean\", split=\"test\") # change to \"other\" for other test dataset\r\nwer = load_metric(\"wer\")\r\n\r\nmodel = Speech2TextForConditionalGeneration.from_pretrained(\"facebook/s2t-small-librispeech-asr\").to(\"cuda\")\r\nprocessor = Speech2TextProcessor.from_pretrained(\"facebook/s2t-small-librispeech-asr\", do_upper_case=True)\r\n\r\ndef map_to_pred(batch):\r\n audio = batch[\"audio\"]\r\n features = processor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"], padding=True, return_tensors=\"pt\")\r\n input_features = features.input_features.to(\"cuda\")\r\n attention_mask = features.attention_mask.to(\"cuda\")\r\n\r\n gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask)\r\n batch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)\r\n return batch\r\n\r\nresult = librispeech_eval.map(map_to_pred)#, batched=True, batch_size=8)\r\n\r\nprint(\"WER:\", wer.compute(predictions=result[\"transcription\"], references=result[\"text\"]))\r\n```",
"I think the issue comes from the fact that you set `batched=False` while `map_to_pred` still returns a list of strings for \"transcription\". You can fix it by adding `[0]` at the end of this line to get the string:\r\n```python\r\nbatch[\"transcription\"] = processor.batch_decode(gen_tokens, skip_special_tokens=True)[0]\r\n```",
"Updating as many model cards now as I can find",
"https://github.com/huggingface/transformers/pull/16611"
] | 1,647,273,230,000 | 1,649,176,618,000 | null | NONE | null | ## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some audio files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\"\'\οΏ½]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)

# mapped function (reconstructed from the traceback below, so the snippet reproduces the error)
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face datasets library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3909/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3908/comments | https://api.github.com/repos/huggingface/datasets/issues/3908/events | https://github.com/huggingface/datasets/pull/3908 | 1,168,576,963 | PR_kwDODunzps40Z_9F | 3,908 | Update README.md for SQuAD v2 metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint."
] | 1,647,273,190,000 | 1,647,363,851,000 | 1,647,363,851,000 | CONTRIBUTOR | null | Putting "Values from popular papers" as a subsection of "Output values" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3908/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3908",
"html_url": "https://github.com/huggingface/datasets/pull/3908",
"diff_url": "https://github.com/huggingface/datasets/pull/3908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3908.patch",
"merged_at": 1647363850000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3907/comments | https://api.github.com/repos/huggingface/datasets/issues/3907/events | https://github.com/huggingface/datasets/pull/3907 | 1,168,575,998 | PR_kwDODunzps40Z_vd | 3,907 | Update README.md for SQuAD metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3907). All of your documentation changes will be reflected on that endpoint."
] | 1,647,273,151,000 | 1,647,363,860,000 | 1,647,363,859,000 | CONTRIBUTOR | null | Putting "Values from popular papers" as a subsection of "Output values" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3907/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3907",
"html_url": "https://github.com/huggingface/datasets/pull/3907",
"diff_url": "https://github.com/huggingface/datasets/pull/3907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3907.patch",
"merged_at": 1647363859000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3906/comments | https://api.github.com/repos/huggingface/datasets/issues/3906/events | https://github.com/huggingface/datasets/issues/3906 | 1,168,496,328 | I_kwDODunzps5FpdbI | 3,906 | NonMatchingChecksumError on Spider dataset | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @kolk, thanks for reporting.\r\n\r\nIndeed, Google Drive service recently changed their service and we had to add a fix to our library to cope with that change:\r\n- #3787 \r\n\r\nWe just made patch release last week: 1.18.4 https://github.com/huggingface/datasets/releases/tag/1.18.4\r\n\r\nPlease, feel free to update your local `datasets` version, so that you get the fix:\r\n```shell\r\npip install -U datasets\r\n```"
] | 1,647,269,693,000 | 1,647,328,191,000 | 1,647,328,191,000 | NONE | null | ## Describe the bug
Failure to generate the ```spider``` dataset because of a checksum error for the dataset source files.
## Steps to reproduce the bug
```
from datasets import load_dataset
spider = load_dataset("spider")
```
## Expected results
Checksums should match for files from the URL ['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Actual results
```
>>> load_dataset("spider")
load_dataset("spider")
Downloading and preparing dataset spider/spider (download: 95.12 MiB, generated: 5.17 MiB, post-processed: Unknown size, total: 100.29 MiB) to /home/user/.cache/huggingface/datasets/spider/spider/1.0.0/79778ebea87c59b19411f1eb3eda317e9dd5f7788a556d837ef25c3ae6e5e8b7...
Traceback (most recent call last):
File "/home/user/py3_env/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-d4cb54197348>", line 1, in <module>
load_dataset("spider")
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/user/py3_env/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
```
## Environment info
datasets version: 1.18.3
Platform: Ubuntu 20 LTS
Python version: 3.8.10
PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3906/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3905/comments | https://api.github.com/repos/huggingface/datasets/issues/3905/events | https://github.com/huggingface/datasets/pull/3905 | 1,168,320,568 | PR_kwDODunzps40ZJQJ | 3,905 | Perplexity Metric Card | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.",
"I'm wondering if we should add that perplexity can be used for analyzing datasets as well",
"Otherwise, looks good! Good job, @emibaylor !"
] | 1,647,261,580,000 | 1,647,459,536,000 | 1,647,459,536,000 | CONTRIBUTOR | null | Add Perplexity metric card
Note that it is currently still missing the citation, but I plan to add it later today. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3905/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3905",
"html_url": "https://github.com/huggingface/datasets/pull/3905",
"diff_url": "https://github.com/huggingface/datasets/pull/3905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3905.patch",
"merged_at": 1647459536000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3904/comments | https://api.github.com/repos/huggingface/datasets/issues/3904/events | https://github.com/huggingface/datasets/issues/3904 | 1,167,730,095 | I_kwDODunzps5FmiWv | 3,904 | CONLL2003 Dataset not available | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @omarespejel.\r\n\r\nI'm sorry but I can't reproduce the issue: the loading of the dataset works perfecto for me and I can reach the data URL: https://data.deepai.org/conll2003.zip\r\n\r\nMight it be due to a temporary problem in the data owner site (https://data.deepai.org/) that is fixed now?\r\nCould you please try loading the dataset again and tell if the problem persists?",
"@omarespejel I'm closing this issue. Feel free to reopen it if the problem persists."
] | 1,647,215,175,000 | 1,647,505,292,000 | 1,647,505,292,000 | NONE | null | ## Describe the bug
[CONLL2003](https://huggingface.co/datasets/conll2003) Dataset can no longer reach 'https://data.deepai.org/conll2003.zip'
![image](https://user-images.githubusercontent.com/4755430/158084483-ff83631c-5154-4823-892d-577bf1166db0.png)
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("conll2003")
```
## Expected results
Download the conll2003 dataset.
## Actual results
Error: `ConnectionError: Couldn't reach https://data.deepai.org/conll2003.zip (error 502)`
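For triage, a quick reachability check can distinguish a transient hosting outage from a permanently removed file — a sketch, not part of the original report (assumes `requests` is installed):
```python
import requests

# A 5xx status suggests a transient problem on the host; a 404 would mean
# the archive has actually been removed.
response = requests.head("https://data.deepai.org/conll2003.zip", allow_redirects=True, timeout=10)
print(response.status_code)
```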
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3904/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3903/comments | https://api.github.com/repos/huggingface/datasets/issues/3903/events | https://github.com/huggingface/datasets/pull/3903 | 1,167,521,627 | PR_kwDODunzps40WnkI | 3,903 | Add Biwi Kinect Head Pose dataset. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the detailed explanation of the structure!\r\n\r\n1. IMO it makes the most sense to yield one example for each person (so the total of 24 examples), so the features dict should be similar to this:\r\n \r\n ```python\r\n features = Features({\r\n \"rgb\": Sequence(Image()), # for the png frames\r\n \"rgb_cal\": {\"intrisic_mat\": Array2D(shape=(3, 3), dtype=\"float32\"), \"extrinsic_mat\": {\"rotation\": Array2D(shape=(3, 3), dtype=\"float32\"), \"translation\": Sequence(Value(\"float32\", length=3)}},\r\n \"depth\": Sequence(Value(\"string\")), # for the depth frames\r\n \"depth_cal\": the same as \"rgb_cal\",\r\n \"head_pose_gt\": Sequence({\"center\": Sequence(Value(\"float32\", length=3), \"rotation\": Array2D(shape=(3, 3), dtype=\"float32\")}),\r\n \"head_template\": Value(\"string\"), # for the person's obj file\r\n\r\n })\r\n ```\r\n We can add a \"Data Processing\" section to the card to explain how to parse the files.\r\n\r\n\r\n2. Yes, it's ok to parse the files as long as it doesn't take too much time/memory (e.g., it's ok to parse the `*_pose.txt` or `*.cal` files, but it's better to leave the `*_depth.bin` or `*.obj` files unprocessed and yield the paths to them)",
"Thanks for the suggestions @mariosasko, yielding one example for each person would make things much easier.\r\nOkay. I'll look at parsing the files and then displaying the information.",
"Added the following : \r\n- Features, I have included sequence_number and subject_id along with the features you had suggested.\r\n- Tested loading of the dataset along with dummy_data and full_data tests.\r\n- Created the dataset_infos.json file.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Cards with more details.\r\n- [x] \"Data Processing\" section\r\n\r\nAny inputs on what to include in the \"Data Processing\" section ?\r\n",
"@mariosasko Please could you review this when you get time. Thank you.",
"In the Data Processing section, I've added example code for a compressed binary depth image file. Updated the Readme as well. ",
"@mariosasko / @lhoestq , Please could you review this when you get time. Thank you.",
"Created an issue here: https://github.com/huggingface/datasets/issues/4152",
"Got it. Thanks for the comments. I've collapsed the C++ code in the readme and added the suggestions.",
"Hi ! The `AttributeError ` bug has been fixed, feel free to merge `master` into your branch ;)",
"I haven't been able to figure out why CI is failing, the error shown is : \r\n\r\n```\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README Parsing:\r\nE list index out of range\r\nE The following issues have been found in the dataset cards:\r\nE README Validation:\r\nE list index out of range\r\n```\r\n\r\nAny inputs would be helpful.",
"I think it's because there are tabulations in the c++ code, can you replace them with regular spaces please ?\r\n\r\n(then in another PR we can maybe fix the Readme parser to support text indented with tabulations)",
"@lhoestq , initially the idea was to have one example = one image with an additional field mentioning the frame_number. But each subject, we had a head template, calibration information for the depth and the color camera which was common to all the examples for that subject. Also, the images were continuous frames.\r\n@mariosasko suggested this structure and it made sense to group the images together for a particular subject.",
"> Don't you think it would be more practical to have one example = one image in this dataset ?\r\n\r\nHaving one example = one image would be good but since we have a head template, calibration information for the depth and the color camera which is common to all the images for that subject and the images being continuous frames, I think it makes sense to group the images together for each subject. This will make the feature representation easier.\r\n\r\n",
"Ok I see, sounds good then. Users can still separate the images if they want to",
"The CI fails are unrelated to this PR and fixed on master, merging !",
"Great. Thanks @lhoestq , I think we can close this issue now. ( #3822 )"
] | 1,647,161,961,000 | 1,654,016,539,000 | 1,653,999,358,000 | CONTRIBUTOR | null | This PR adds the Biwi Kinect Head Pose dataset.
Dataset Request : Add Biwi Kinect Head Pose Database [#3822](https://github.com/huggingface/datasets/issues/3822)
The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), where 4 people were recorded twice.
For each frame, there is:
- a depth image (.bin file),
- a corresponding rgb image (both 640x480 pixels),
- an annotation (present inside a .txt file).
The ground truth is the 3D location of the head and its rotation.
The dataset structure is as follows:
```
- 01.obj
- 01
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
- 02.obj
- 02
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
```
Preview of frame_00003_pose.txt:
```
0.988397 0.0731349 0.133128
-0.0441539 0.976945 -0.208876
-0.145334 0.200575 0.968838
126.665 40.4515 876.198
```
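For illustration, a pose file in this format could be parsed with a few lines of NumPy — a sketch, not taken from the loading script:
```python
import numpy as np

def parse_pose_file(path):
    values = np.loadtxt(path)    # shape (4, 3): three rotation rows + one location row
    rotation = values[:3, :]     # 3x3 head rotation matrix
    head_center = values[3, :]   # 3D head location
    return rotation, head_center
```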
I have used the following dataset features:
```
features=datasets.Features(
{
"person_id": datasets.Value("string"),
"frame_number": datasets.Value("string"),
"depth_image": datasets.Value("string"),
"rgb_image": datasets.Image(),
"3D_head_center": datasets.Array2D(shape=(3, 3), dtype="float"),
"3D_head_rotation": datasets.Value("float"),
    }
)
```
I am giving the path to the depth_image here.
I need some inputs for the following:
1. For each person, the dataset has the following additional information :
```
For each sequence, the corresponding .obj file represents a head template deformed to match the neutral face of that specific person. [*.obj file]
In each folder, two .cal files contain calibration information for the depth and the color camera, e.g., the intrinsic camera matrix of the depth camera and the global rotation and translation to the rgb camera.
```
How can we represent these features?
2. For `_generate_examples`, do I parse the directories and fetch the required information? This would mean reading the .txt file to obtain the "3D_head_center" and "3D_head_rotation" details. Alternatively, we could precompute the feature information into a metadata file and use that file to yield examples in `_generate_examples`. What would be the best approach here?
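For question 1, one possible feature layout for the calibration files and head template — a sketch with illustrative key names, anticipating the structure suggested later in this thread:
```python
import datasets

# Illustrative layout for the per-camera calibration (.cal) information.
calibration = {
    "intrinsic_mat": datasets.Array2D(shape=(3, 3), dtype="float32"),
    "extrinsic_mat": {
        "rotation": datasets.Array2D(shape=(3, 3), dtype="float32"),
        "translation": datasets.Sequence(datasets.Value("float32"), length=3),
    },
}
# The per-person head template (*.obj) could simply be exposed as a file path:
head_template = datasets.Value("string")
```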
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3903/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3903",
"html_url": "https://github.com/huggingface/datasets/pull/3903",
"diff_url": "https://github.com/huggingface/datasets/pull/3903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3903.patch",
"merged_at": 1653999358000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3902/comments | https://api.github.com/repos/huggingface/datasets/issues/3902/events | https://github.com/huggingface/datasets/issues/3902 | 1,167,403,377 | I_kwDODunzps5FlSlx | 3,902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | {
"login": "arunasank",
"id": 3166852,
"node_id": "MDQ6VXNlcjMxNjY4NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arunasank",
"html_url": "https://github.com/arunasank",
"followers_url": "https://api.github.com/users/arunasank/followers",
"following_url": "https://api.github.com/users/arunasank/following{/other_user}",
"gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunasank/subscriptions",
"organizations_url": "https://api.github.com/users/arunasank/orgs",
"repos_url": "https://api.github.com/users/arunasank/repos",
"events_url": "https://api.github.com/users/arunasank/events{/privacy}",
"received_events_url": "https://api.github.com/users/arunasank/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`",
"Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be that `python3` runs in a Python virtual env (with an updated version of `fsspec`); whereas the error arises when you run the import from other Python virtual env (with an old version of `fsspec`).\r\n\r\nIn order to fix this, you should update `fsspec` from within the \"problematic\" Python virtual env:\r\n```\r\npip install -U \"fsspec[http]>=2021.05.0\"",
"I'm closing this issue, @arunasank.\r\n\r\nFeel free to re-open it if the problem persists. "
] | 1,647,120,123,000 | 1,647,933,042,000 | 1,647,933,041,000 | NONE | null | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
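As a quick sanity check for the circular-import diagnosis discussed in the comments — a sketch; the version threshold comes from the suggested fix, not from this report:
```python
import fsspec

print(fsspec.__version__)  # versions older than 2021.05.0 can trigger this circular import

# Then, from the same (problematic) environment:
#   pip install -U "fsspec[http]>=2021.05.0"
```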
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3902/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3901/comments | https://api.github.com/repos/huggingface/datasets/issues/3901/events | https://github.com/huggingface/datasets/issues/3901 | 1,167,339,773 | I_kwDODunzps5FlDD9 | 3,901 | Dataset viewer issue for IndicParaphrase- the preview doesn't show | {
"login": "ratishsp",
"id": 3006607,
"node_id": "MDQ6VXNlcjMwMDY2MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3006607?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ratishsp",
"html_url": "https://github.com/ratishsp",
"followers_url": "https://api.github.com/users/ratishsp/followers",
"following_url": "https://api.github.com/users/ratishsp/following{/other_user}",
"gists_url": "https://api.github.com/users/ratishsp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ratishsp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ratishsp/subscriptions",
"organizations_url": "https://api.github.com/users/ratishsp/orgs",
"repos_url": "https://api.github.com/users/ratishsp/repos",
"events_url": "https://api.github.com/users/ratishsp/events{/privacy}",
"received_events_url": "https://api.github.com/users/ratishsp/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"It seems to have been fixed:\r\n\r\n<img width=\"1534\" alt=\"Capture dβeΜcran 2022-04-12 aΜ 14 10 07\" src=\"https://user-images.githubusercontent.com/1676121/162959599-6b7fef7c-8411-4e03-8f00-90040a658079.png\">\r\n"
] | 1,647,104,165,000 | 1,649,765,450,000 | 1,649,765,449,000 | NONE | null | ## Dataset viewer issue for '*IndicParaphrase*'
**Link:** *[IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase/viewer/hi/validation)*
The preview of the dataset doesn't come up. The error on the console is:
```
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/home/hf/datasets-preview-backend/hi_IndicParaphrase_v1.0.tar'
```
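As a sanity check, loading the dataset directly helps isolate the problem to the viewer backend — a sketch; the `hi` config name is inferred from the viewer URL above:
```python
from datasets import load_dataset

# If this succeeds, the dataset script is fine and the failure is viewer-side.
ds = load_dataset("ai4bharat/IndicParaphrase", "hi", split="validation")
print(ds[0])
```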
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3901/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3900/comments | https://api.github.com/repos/huggingface/datasets/issues/3900/events | https://github.com/huggingface/datasets/pull/3900 | 1,167,224,903 | PR_kwDODunzps40VxRh | 3,900 | Add MetaShift dataset | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq Please could you review this when you get time. Thank you.",
"Thanks a lot for your inputs @mariosasko .\r\n> Maybe we can add the generated meta-graphs to the card as images (with attributions)?\r\n\r\nYes. We can do this for the default set of classes. Will add this.\r\n\r\n> Would be cool if we could have them as additional configs. Also, maybe we could have configs that expose [image metadata](https://github.com/Weixin-Liang/MetaShift/tree/main/dataset/meta_data) from the https://nlp.stanford.edu/data/gqa/sceneGraphs.zip file (this file is downloaded in the script but not used).\r\n\r\nI'll try adding the bonus section as additional config. \r\nRegarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n",
"> Regarding exposing the image metadata with a config parameter, how will we showcase/display this information ?\r\n\r\nOh, I forgot to mention that. Let's add a `Dataset Usage` section to the card to document the params (similar to this: https://huggingface.co/datasets/electricity_load_diagrams#dataset-usage). Also, feel free to add the constants that can be tuned as config params (e.g. `IMAGE_SUBSET_SIZE_THRESHOLD` or the `5` in `len(subject_data) <= 5`).",
"Okay. Got it. Will add these and constants as config parameters.\r\n\r\nThe image metadata from scene graphs looks like this : \r\n```json\r\n{\r\n \"2407890\": {\r\n \"width\": 640,\r\n \"height\": 480,\r\n \"location\": \"living room\",\r\n \"weather\": none,\r\n \"objects\": {\r\n \"271881\": {\r\n \"name\": \"chair\",\r\n \"x\": 220,\r\n \"y\": 310,\r\n \"w\": 50,\r\n \"h\": 80,\r\n \"attributes\": [\"brown\", \"wooden\", \"small\"],\r\n \"relations\": {\r\n \"32452\": {\r\n \"name\": \"on\",\r\n \"object\": \"275312\"\r\n },\r\n \"32452\": {\r\n \"name\": \"near\",\r\n \"object\": \"279472\"\r\n } \r\n }\r\n }\r\n }\r\n }\r\n}\r\n```\r\n``load_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...], image_metadata=True)``\r\nHow do we showcase/display the image metadata(json) information ?\r\n",
"> How do we showcase/display the image metadata(json) information ?\r\n\r\nWe can add the JSON fields as keys to the features dict:\r\n```python\r\n if self.config.image_metadata:\r\n features.update({\"width\": Value(\"int\"), \"height\": Value(\"int\"), \"location\": Value(\"string\"), ...}) \r\n```\r\n\r\nP.S. Would rename `image_metadata` to `with_image_metadata` ",
"I have added the following : \r\n- Added the meta-graphs to the card as images under the Section \"Dataset Meta-Graphs\".\r\n- Generate the Attributes-Dataset using config parameter. [ [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]\r\n- Expose image metadata using config parameter.\r\nFormat of the image metadata is as follows : [Link](https://cs.stanford.edu/people/dorarad/gqa/download.html)\r\nI have modified the \"Objects\" which is dict to a list of dicts with an additional parameter named object_id. \r\nI have defined the structure as follows : \r\n```\r\n{\r\n \"width\": datasets.Value(\"int64\"),\r\n \"height\": datasets.Value(\"int64\"),\r\n \"location\": datasets.Value(\"string\"),\r\n \"weather\": datasets.Value(\"string\"),\r\n \"objects\": datasets.Sequence(\r\n {\r\n \"object_id\": datasets.Value(\"string\"),\r\n \"name\": datasets.Value(\"string\"),\r\n \"x\": datasets.Value(\"int64\"),\r\n \"y\": datasets.Value(\"int64\"),\r\n \"w\": datasets.Value(\"int64\"),\r\n \"h\": datasets.Value(\"int64\"),\r\n \"attributes\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"relations\": datasets.Sequence(\r\n {\r\n \"name\": datasets.Value(\"string\"),\r\n \"object\": datasets.Value(\"string\"),\r\n }\r\n ),\r\n }\r\n ),\r\n}\r\n```\r\nProblem is that objects is not being shown as list of dicts. The output looks as follows : \r\n\r\n> metashift_dataset['train'][0]\r\n\r\n```json \r\n{'image_id': '2338755', 'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=500x281 at 0x7F066C5A49D0>, 'label': 0, 'context': 'ground', 'width': 500, 'height': 281, 'location': None, 'weather': None, 'objects': {'object_id': ['3070704', '3070705', '3070706', '2416713', '3070702', '2790660', '3063157', '2354960', '2037127', '2392939', '2912743', '2125407', '2735257', '3260906', '2351018', '3288269', '3699852', '2734378', '3421201', '2863115'], 'name': ['bicycle', 'bicycle', 'bicycle', 'boot', 'bicycle', 'motorcycle', 'pepperoni', 'head', 'building', 'wall', 'shorts', 'people', 'wheel', 'bricks', 'man', 'cat', 'boot', 'door', 'ground', 'building'], 'x': [137, 371, 458, 215, 468, 399, 368, 245, 0, 140, 260, 284, 138, 451, 339, 187, 210, 26, 0, 313], 'y': [116, 86, 94, 150, 91, 80, 107, 22, 0, 44, 109, 69, 145, 226, 69, 22, 230, 0, 119, 0], 'w': [197, 27, 15, 73, 24, 53, 9, 37, 289, 46, 43, 30, 74, 28, 35, 116, 53, 107, 500, 55], 'h': [126, 25, 38, 128, 43, 50, 16, 44, 158, 73, 51, 52, 97, 15, 73, 252, 46, 147, 162, 77], 'attributes': [[], [], [], ['white'], [], [], [], [], [], [], [], [], [], [], [], ['white'], ['white'], ['large', 'black'], ['brick'], []], 'relations': [{'name': ['to the left of'], 'object': ['3260906']}, {'name': ['to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['3070706', '2351018', '2125407', '2790660', '2037127', '3070702', '3288269']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the right of'], 'object': ['2351018', '3070705', '3070702', '2790660', '3063157']}, {'name': ['to the right of'], 'object': ['2735257']}, {'name': ['to the right of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 'object': ['2351018', '2790660', '3070706', '3070705', '3063157']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the right of', 'to the right of'], 
'object': ['3070705', '2351018', '3070702', '3070706', '3063157', '2125407', '2037127', '3288269']}, {'name': ['to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the right of', 'to the left of', 'to the right of'], 'object': ['2037127', '3070706', '3070702', '2912743', '3288269', '2790660', '2125407']}, {'name': ['to the left of', 'to the right of'], 'object': ['2863115', '2734378']}, {'name': ['to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['3070705', '2351018', '3063157', '2125407', '2790660', '2863115']}, {'name': ['to the left of', 'to the right of', 'to the left of'], 'object': ['2125407', '2734378', '3288269']}, {'name': ['to the left of', 'on', 'to the left of'], 'object': ['2351018', '3288269', '3063157']}, {'name': ['to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'to the left of'], 'object': ['3063157', '2351018', '2037127', '3070705', '2392939', '2790660']}, {'name': ['to the left of', 'to the left of'], 'object': ['2416713', '3288269']}, {'name': ['to the right of'], 'object': ['3070704']}, {'name': ['to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the right of', 'to the left of', 'to the right of', 'walking down'], 'object': ['2037127', '2790660', '2125407', '3070705', '3070706', '2912743', '3070702', '3288269', '3421201']}, {'name': ['to the right of', 'to the right of', 'to the left of', 'to the right of', 'to the left of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2392939', '2734378', '2790660', '2735257', '3063157', '3070705', '2351018', '2863115']}, {'name': [], 'object': []}, {'name': ['of', 'to the left of', 'to the left of', 'to the left of'], 'object': ['2037127', '2354960', '3288269', '2392939']}, {'name': [], 'object': []}, {'name': ['to the right of', 'to the right of', 'to the right of'], 'object': ['2037127', '3288269', '2354960']}]}}\r\n```\r\nExpected output of image_metadata would be : \r\n```\r\n{'height': 281,\r\n 'location': None,\r\n 'objects': [{'attributes': [],\r\n 'h': 126,\r\n 'name': 'bicycle',\r\n 'object_id': '3070704',\r\n 'relations': [{'name': 'to the left of', 'object': '3260906'}],\r\n 'w': 197,\r\n 'x': 137,\r\n 'y': 116},\r\n {'attributes': [],\r\n 'h': 25,\r\n 'name': 'bicycle',\r\n 'object_id': '3070705',\r\n 'relations': [{'name': 'to the left of', 'object': '3070706'},\r\n {'name': 'to the right of', 'object': '2351018'},\r\n {'name': 'to the right of', 'object': '2125407'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '3070702'},\r\n {'name': 'to the right of', 'object': '3288269'}],\r\n 'w': 27,\r\n 'x': 371,\r\n 'y': 86},\r\n {'attributes': ['white'],\r\n 'h': 252,\r\n 'name': 'cat',\r\n 'object_id': '3288269',\r\n 'relations': [{'name': 'to the right of', 'object': '2392939'},\r\n {'name': 'to the right of', 'object': '2734378'},\r\n {'name': 'to the left of', 'object': '2790660'},\r\n {'name': 'to the right of', 'object': '2735257'},\r\n {'name': 'to the left of', 'object': '3063157'},\r\n {'name': 'to the left of', 'object': '3070705'},\r\n {'name': 'to the left of', 'object': '2351018'},\r\n {'name': 'to the left of', 'object': '2863115'}],\r\n 'w': 116,\r\n 'x': 187,\r\n 'y': 22},\r\n {'attributes': ['white'],\r\n 'h': 46,\r\n 'name': 'boot',\r\n 'object_id': '3699852',\r\n 'relations': [],\r\n 'w': 53,\r\n 'x': 
210,\r\n 'y': 230},\r\n .\r\n .\r\n .\r\n {'attributes': ['large', 'black'],\r\n 'h': 147,\r\n 'name': 'door',\r\n 'object_id': '2734378',\r\n 'relations': [{'name': 'of', 'object': '2037127'},\r\n {'name': 'to the left of', 'object': '2354960'},\r\n {'name': 'to the left of', 'object': '3288269'},\r\n {'name': 'to the left of', 'object': '2392939'}],\r\n 'w': 107,\r\n 'x': 26,\r\n 'y': 0},\r\n {'attributes': ['brick'],\r\n 'h': 162,\r\n 'name': 'ground',\r\n 'object_id': '3421201',\r\n 'relations': [],\r\n 'w': 500,\r\n 'x': 0,\r\n 'y': 119},\r\n {'attributes': [],\r\n 'h': 77,\r\n 'name': 'building',\r\n 'object_id': '2863115',\r\n 'relations': [{'name': 'to the right of', 'object': '2037127'},\r\n {'name': 'to the right of', 'object': '3288269'},\r\n {'name': 'to the right of', 'object': '2354960'}],\r\n 'w': 55,\r\n 'x': 313,\r\n 'y': 0}],\r\n 'weather': None,\r\n 'width': 500}\r\n\r\n```\r\n\r\nMay I know how to get the list of dicts representation correctly ?\r\n\r\n---\r\nTo-Do : \r\n\r\n- [x] Generate dataset_infos.json file.\r\n- [x] Add βDataset Usageβ section in the cards and write about the config parameters. \r\n- [x] Add the constants that can be tuned as config params.\r\n",
"> Problem is that objects is not being shown as list of dicts. The output looks as follows :\r\n\r\nThat's expected. We convert a sequence of dictionaries to a dictionary of sequences to keep the formatting aligned with Tensorflow Datasets. You could disable this behavior by replacing `\"objects\": datasets.Sequence(object_fields_dict)` with `\"objects\": [object_fields_dict]`, but that's not what we usually do, so let's keep it like that. \r\n\r\nAlso, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the `src` attribute (and specify `alt` in case the URLs go down).\r\n\r\nI'll do a proper review again after you are finished with the dummy data.",
"> That's expected.\r\n\r\nOkay. Got it. Thanks. I thought I was doing something wrong.\r\n\r\n> Also, to limit the size of the dataset repo, please remove the pushed images and pass URLs to the images instead under the src attribute (and specify alt in case the URLs go down).\r\n\r\nSure. Where do we host these images ? Can I upload them to any free image hosting platform or is there any particular website you use ?\r\n\r\n> I'll do a proper review again after you are finished with the dummy data.\r\n\r\nSure. Thanks. I'm working on this part. Will update you.\r\n",
"Update : \r\n- I have generated the dataset_infos.json file.\r\n\r\n> I suggest you try to generate the dataset_infos.json file first, and then I can help with the dummy data.\r\n\r\nI am having issues creating the dummy data. I get the following which I use the command : \r\n\r\n`datasets-cli dummy_data datasets/metashift`\r\n\r\n```\r\nDataset metashift with config MetashiftConfig(name='metashift', version=1.0.0, data_dir=None, data_files=None, description=None) seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/datasets/commands/dummy_data.py\", line 324, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/datasets/commands/dummy_data.py\", line 407, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```",
"> Feel free to host the images online (on imgur for example) :)\r\n\r\nSure. Will do that.\r\n\r\nThanks for the explanation regarding the dummy data zip files. I will try it out and let you know.",
"Instead of uploading the images to a hosting service, you can directly reference their GitHub URLs (open the image in the MetaShift repo -> click Download -> copy the image URL). For instance, this is the URL of one of the images:`https://raw.githubusercontent.com/Weixin-Liang/MetaShift/main/docs/figures/Cat-MetaGraph.jpg`. Also, feel free to replace `main` with the most recent commit hash in the copied URLs to make them more robust.",
"@mariosasko I've actually created metagraphs for all the default classes other than those present in the GitHub Repo and included all of them. :) The Repo has them only for two classes.\r\n\r\nIn case we want to limit the no.of meta graphs included, we can stick to the github URLs from the repo itself.\r\n",
"Update : \r\n- I could add the dummy data and get the dummy data test to work. Since we have a preprocessing step on the dataset, one of the .pkl file size is on the higher side. This was done for the tests to pass. I hope that is okay. The dummy.zip file size is about 273K.\r\n\r\nTo-Do :\r\n- [x] Update Dataset Structure in the data cards to include Data Instances when config parameters are used.\r\n\r\nPlease could you review when you get time. Thank you.",
"Thanks a lot for your suggestions, Mario. The thing I learnt from the review is that I need to make better sentence formations. I will keep this in mind. :) ",
"Thanks a lot for your support. @mariosasko and @lhoestq .\r\n\r\n> Super impressed by your work on this, congrats :)\r\n\r\nIts my first dataset contribution to the π€ Datasets library, I'm super excited. Thank you. :)\r\n\r\nAlso, I think we can close this request issue now, [#3813](https://github.com/huggingface/datasets/issues/3813)"
] | 1,647,074,658,000 | 1,648,832,388,000 | 1,648,826,190,000 | CONTRIBUTOR | null | This PR adds the MetaShift dataset.
Dataset Request : Add MetaShift dataset [#3813](https://github.com/huggingface/datasets/issues/3813)
@lhoestq As discussed,
- I have copied the preprocessing script and modified it so that it yields the images directly instead of creating new directories and folders.
- I do the preprocessing in `_split_generators` to get the required data, which is then passed to `_generate_examples`.
- Beyond the generated MetaShift dataset, the original preprocessing script also generates the meta-graphs for each class; I have currently not included this part. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#generate-full-metashift) ]
- The authors also share a Bonus section, which I have currently not included. [ Ref : [Link](https://github.com/Weixin-Liang/MetaShift#bonus-generate-the-metashift-attributes-dataset-subsets-defined-by-subject-attributes) ]
- I had a basic test script which downloaded the dataset and tested the basic functionality. Things seem fine.
For real data, I performed the following test:
```
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_metashift
============================================== test session starts ===============================================
platform linux -- Python 3.7.11, pytest-7.0.1, pluggy-1.0.0
rootdir: ./datasets
plugins: hydra-core-1.1.1, datadir-1.3.1, forked-1.4.0, xdist-2.5.0
collected 1 item
tests/test_dataset_common.py . [100%]
========================================= 1 passed in 4821.25s (1:20:21) =========================================
```
- I couldn't get the dummy data to work and need some inputs here.
The error is as follows:
```
Using custom data configuration default
Dataset metashift with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file dummy_data/full-candidate-subsets.pkl.
for split in generator_splits:
UnboundLocalError: local variable 'generator_splits' referenced before assignment
```
To-Do:
- [x] Currently I am using the default `_SELECTED_CLASSES`. I need to expose a config option here, as suggested.
- [x] Complete fields in the Dataset Card.
- [x] Tagging the dataset using the Datasets Tagging app.
Need your help and suggestions for improvement. Thank you
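For context, the intended entry point once the config options are in place — a sketch; the `selected_classes` parameter name follows the discussion in this thread and may change:
```python
from datasets import load_dataset

# Build MetaShift subsets for a custom set of classes (illustrative values).
metashift = load_dataset("metashift", selected_classes=["cat", "dog", "bus"])
print(metashift["train"][0]["context"])  # e.g. the context the image subset belongs to
```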
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3900/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3900",
"html_url": "https://github.com/huggingface/datasets/pull/3900",
"diff_url": "https://github.com/huggingface/datasets/pull/3900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3900.patch",
"merged_at": 1648826190000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3899/comments | https://api.github.com/repos/huggingface/datasets/issues/3899/events | https://github.com/huggingface/datasets/pull/3899 | 1,166,931,812 | PR_kwDODunzps40UzR3 | 3,899 | Add exact match metric | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,647,037,300,000 | 1,647,879,003,000 | 1,647,878,735,000 | CONTRIBUTOR | null | Adding the exact match metric and its metric card.
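For reference, a minimal usage sketch (the output key and score scale are assumptions, not confirmed by this PR):
```python
from datasets import load_metric

exact_match = load_metric("exact_match")
results = exact_match.compute(
    predictions=["the cat sat", "hello world"],
    references=["the cat sat", "hello there"],
)
print(results)  # a single aggregate score over all prediction/reference pairs
```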
Note: Some of the tests have failed, but I wanted to make a PR anyway so that the rest of the code can be reviewed if anyone has time. I'll look into and work on fixing the failed tests when I'm back online after the weekend. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3899/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3899",
"html_url": "https://github.com/huggingface/datasets/pull/3899",
"diff_url": "https://github.com/huggingface/datasets/pull/3899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3899.patch",
"merged_at": 1647878734000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3898/comments | https://api.github.com/repos/huggingface/datasets/issues/3898/events | https://github.com/huggingface/datasets/pull/3898 | 1,166,778,250 | PR_kwDODunzps40UWG4 | 3,898 | Create README.md for WER metric | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3898). All of your documentation changes will be reflected on that endpoint.",
"For ASR you can probably ping @patrickvonplaten ",
"Ah only noticed now that ` # Values from popular papers` is from a template. @lhoestq @sashavor - not really sure if this section is useful in general really. \r\n\r\nIMO, it's more confusing/misleading than it helps. E.g. a value of 0.03 WER on a fake read-out audio dataset is not better than a WER of 0.3 on a real-world noisy, conversational audio dataset. I think the same holds true for other metrics no? I can think of very little metrics where a metric value is not dataset dependent. E.g. perplexity is super dataset dependent, summarization metrics like ROUGE as well, ...\r\n\r\nAlso, I don't really see what this section tries to achieve - is the idea here to give the reader some papers that use this metric to better understand in which context it is used? Should we maybe rename the section to `Popular papers making use of this metric` or something? \r\n\r\n",
"I put \"Values from popular papers\" as a subsection of \"Output values\" -- I hope that's a compromise that works for everyone :hugs: "
] | 1,647,026,949,000 | 1,647,363,900,000 | 1,647,363,899,000 | CONTRIBUTOR | null | Proposing a draft WER metric card, @lhoestq I'm not very certain about "Values from popular papers" -- I don't know ASR very well, what do you think of the examples I found? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3898/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3898",
"html_url": "https://github.com/huggingface/datasets/pull/3898",
"diff_url": "https://github.com/huggingface/datasets/pull/3898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3898.patch",
"merged_at": 1647363899000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3897/comments | https://api.github.com/repos/huggingface/datasets/issues/3897/events | https://github.com/huggingface/datasets/pull/3897 | 1,166,715,104 | PR_kwDODunzps40UJH4 | 3,897 | Align tqdm control/cache control with Transformers | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3897). All of your documentation changes will be reflected on that endpoint."
] | 1,647,022,342,000 | 1,647,270,070,000 | 1,647,270,068,000 | CONTRIBUTOR | null | This PR:
* aligns the `tqdm` logic with Transformers (follows https://github.com/huggingface/transformers/pull/15167) by moving the code to `utils/logging.py`, adding `enable_progress_bar`/`disable_progress_bar` and removing `set_progress_bar_enabled` (a note for @lhoestq: I'm not adding `logging.tqdm` to the public namespace in this PR to avoid the situation where `from datasets import *; tqdm` would overshadow the standard `tqdm`)
* aligns the cache control with the new `tqdm` logic by adding `enable_caching`/`disable_caching` to the public namespace and deprecating `set_caching_enabled` (not fully removing it because it's used more often than `set_progress_bar_enabled` and has a dedicated example in the old docs); see the usage sketch below
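A short usage sketch of the new switches (the module path for the progress-bar helpers is assumed from the description above):
```python
from datasets import disable_caching, enable_caching
from datasets.utils.logging import disable_progress_bar, enable_progress_bar

disable_progress_bar()  # hide tqdm bars during map/filter/download
disable_caching()       # stop writing transformed datasets to the cache
# ... run processing steps ...
enable_caching()
enable_progress_bar()
```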
Fix #3586 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3897/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3897",
"html_url": "https://github.com/huggingface/datasets/pull/3897",
"diff_url": "https://github.com/huggingface/datasets/pull/3897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3897.patch",
"merged_at": 1647270068000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3896/comments | https://api.github.com/repos/huggingface/datasets/issues/3896/events | https://github.com/huggingface/datasets/issues/3896 | 1,166,628,270 | I_kwDODunzps5FiVWu | 3,896 | Missing google file for `multi_news` dataset | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"reported by @abidlabs ",
"related to https://github.com/huggingface/datasets/pull/3843?",
"`datasets` 1.18.4 fixes the issue when you load the dataset with `load_dataset`.\r\n\r\nWhen loading in streaming mode, the fix is indeed on https://github.com/huggingface/datasets/pull/3843 which will be merged soon :)",
"That is. The PR #3843 was just opened a bit later we had made our 1.18.4 patch release...\r\nOnce merged, that will fix this issue. ",
"OK. Should fix the viewer for 50 datasets\r\n\r\n<img width=\"148\" alt=\"Capture dβeΜcran 2022-03-14 aΜ 11 51 02\" src=\"https://user-images.githubusercontent.com/1676121/158157853-6c544a47-2d6d-4ac4-964a-6f10951ec36b.png\">\r\n"
] | 1,647,016,690,000 | 1,647,347,423,000 | 1,647,347,423,000 | CONTRIBUTOR | null | ## Dataset viewer issue for '*multi_news*'
**Link:** https://huggingface.co/datasets/multi_news
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: https://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C/multi-news-original/train.src
```
Am I the one who added this dataset ? No
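Once fixed, loading should work both normally and in streaming mode — a sketch assuming `datasets>=1.18.4` and the #3843 fix, with field names as listed on the dataset page:
```python
from datasets import load_dataset

ds = load_dataset("multi_news", split="train", streaming=True)  # streaming needs the #3843 fix
print(next(iter(ds))["summary"][:100])
```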
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3896/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3896/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3895/comments | https://api.github.com/repos/huggingface/datasets/issues/3895/events | https://github.com/huggingface/datasets/pull/3895 | 1,166,619,182 | PR_kwDODunzps40T1C8 | 3,895 | Fix code examples indentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.",
"Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping",
"My last commit should have fixed it, I don't know why the dev doc build is not showing my last changes",
"Let me merge this and we can see on `master` how it renders, until the dev doc build is fixed"
] | 1,647,016,144,000 | 1,647,020,070,000 | 1,647,020,069,000 | MEMBER | null | Some code examples are currently not rendered correctly. I think this is because they are over-indented
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3895/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3895",
"html_url": "https://github.com/huggingface/datasets/pull/3895",
"diff_url": "https://github.com/huggingface/datasets/pull/3895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3895.patch",
"merged_at": 1647020069000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3894/comments | https://api.github.com/repos/huggingface/datasets/issues/3894/events | https://github.com/huggingface/datasets/pull/3894 | 1,166,611,270 | PR_kwDODunzps40TzXW | 3,894 | [docs] make dummy data creation optional | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page with users π "
] | 1,647,015,694,000 | 1,647,019,676,000 | 1,647,019,675,000 | MEMBER | null | Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional.
We can discuss later whether to make them optional for datasets in this repository as well | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3894/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3894",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"merged_at": 1647019675000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3893/comments | https://api.github.com/repos/huggingface/datasets/issues/3893/events | https://github.com/huggingface/datasets/pull/3893 | 1,166,551,684 | PR_kwDODunzps40TmxB | 3,893 | Add default branch for doc building | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3893). All of your documentation changes will be reflected on that endpoint.",
"Yes! And when we discovered on the Transformers side that this check fails on the GitHub actions, we added a config attribute to have a default. Setting in Transformers fixed the issue of the doc being deployed to main, so porting the fix here too :-)"
] | 1,647,012,267,000 | 1,647,012,875,000 | 1,647,012,874,000 | MEMBER | null | Since other libraries use `main` as their default branch and it's now the standard default, you have to specify a different name in the doc config if you're using `master` like datasets (`doc-builder` tries to guess it, but in the job, we have weird checkout of merge commits so it doesn't always manage to get it right).
This PR makes sure it will always use master for the dev doc (until you decide to switch to main) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3893/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3893",
"html_url": "https://github.com/huggingface/datasets/pull/3893",
"diff_url": "https://github.com/huggingface/datasets/pull/3893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3893.patch",
"merged_at": 1647012874000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3892/comments | https://api.github.com/repos/huggingface/datasets/issues/3892/events | https://github.com/huggingface/datasets/pull/3892 | 1,166,227,003 | PR_kwDODunzps40ShYB | 3,892 | Fix CLI test checksums | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge if it's good for you :)",
"I've added a test @lhoestq. Once all green, I'll merge. ",
"Last failing tests do not have nothing to do with this PR."
] | 1,646,993,044,000 | 1,647,347,304,000 | 1,647,347,303,000 | MEMBER | null | Previous PR:
- #3796
introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values.
See:
- #3805
This PR introduces a way for `datasets-cli test` to force recording infos, even if `verify_infos=False` (see the example below).
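For reference, the invocation that should now record complete checksums looks something like this (the dataset path is illustrative):
```
datasets-cli test ./datasets/my_dataset --save_infos --all_configs
```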
Close #3848.
CC: @craffel | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3892/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3892/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3892",
"html_url": "https://github.com/huggingface/datasets/pull/3892",
"diff_url": "https://github.com/huggingface/datasets/pull/3892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3892.patch",
"merged_at": 1647347303000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3891/comments | https://api.github.com/repos/huggingface/datasets/issues/3891/events | https://github.com/huggingface/datasets/pull/3891 | 1,165,503,732 | PR_kwDODunzps40QKIG | 3,891 | Fix race condition in doc build | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3891). All of your documentation changes will be reflected on that endpoint."
] | 1,646,932,630,000 | 1,646,932,980,000 | 1,646,932,650,000 | MEMBER | null | Following https://github.com/huggingface/datasets/runs/5499386744 it seems that there are race conditions that create issues when updating the doc. I took the same approach as in `transformers` to fix them | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3891/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3891",
"html_url": "https://github.com/huggingface/datasets/pull/3891",
"diff_url": "https://github.com/huggingface/datasets/pull/3891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3891.patch",
"merged_at": 1646932650000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3890/comments | https://api.github.com/repos/huggingface/datasets/issues/3890/events | https://github.com/huggingface/datasets/pull/3890 | 1,165,502,838 | PR_kwDODunzps40QJ8V | 3,890 | Update beans download urls | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3890). All of your documentation changes will be reflected on that endpoint.",
"@albertvillanova Thanks for investigating and fixing that issue. I regenerated the `dataset_infos.json` file."
] | 1,646,932,576,000 | 1,647,362,850,000 | 1,647,358,008,000 | CONTRIBUTOR | null | Replace the old URLs with the Hub [URLs](https://huggingface.co/datasets/beans/tree/main/data).
Also reported by @stevhliu.
Fix #3889 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3890/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3890/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3890",
"html_url": "https://github.com/huggingface/datasets/pull/3890",
"diff_url": "https://github.com/huggingface/datasets/pull/3890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3890.patch",
"merged_at": 1647358007000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3889/comments | https://api.github.com/repos/huggingface/datasets/issues/3889/events | https://github.com/huggingface/datasets/issues/3889 | 1,165,456,083 | I_kwDODunzps5Fd3LT | 3,889 | Cannot load beans dataset (Couldn't reach the dataset) | {
"login": "ivsanro1",
"id": 30293331,
"node_id": "MDQ6VXNlcjMwMjkzMzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/30293331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivsanro1",
"html_url": "https://github.com/ivsanro1",
"followers_url": "https://api.github.com/users/ivsanro1/followers",
"following_url": "https://api.github.com/users/ivsanro1/following{/other_user}",
"gists_url": "https://api.github.com/users/ivsanro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivsanro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivsanro1/subscriptions",
"organizations_url": "https://api.github.com/users/ivsanro1/orgs",
"repos_url": "https://api.github.com/users/ivsanro1/repos",
"events_url": "https://api.github.com/users/ivsanro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivsanro1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Hi ! A pull request is open to fix the dataset, we'll release a patch soon with a new release of `datasets` :)"
] | 1,646,930,048,000 | 1,647,358,007,000 | 1,647,358,007,000 | NONE | null | ## Describe the bug
The beans dataset is unavailable for download.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('beans')
```
## Expected results
The dataset would be downloaded with no issue.
## Actual results
```
ConnectionError: Couldn't reach https://storage.googleapis.com/ibeans/train.zip (error 403)
```
[It looks like the billing of this project has been disabled because it is associated with a delinquent account.](https://storage.googleapis.com/ibeans/train.zip)
## Environment info
Google Colab
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3889/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3888/comments | https://api.github.com/repos/huggingface/datasets/issues/3888/events | https://github.com/huggingface/datasets/issues/3888 | 1,165,435,529 | I_kwDODunzps5FdyKJ | 3,888 | IterableDataset columns and feature types | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [] | 1,646,929,152,000 | 1,646,929,172,000 | null | MEMBER | null | Right now, an IterableDataset (e.g. when streaming a dataset) isn't required to know the list of columns it contains, nor their types: `my_iterable_dataset.features` may be `None`
However it's often useful to know the column names and types. This helps you know what's inside your dataset without having to manually check a few examples, and it's useful for preparing a processing pipeline or training models.
Here are a few cases that lead to `features` being `None`:
1. when loading a dataset with `load_dataset` on CSV, JSON Lines, etc. files: type inference is only done when iterating over the dataset
2. when calling `map`, because we don't know in advance what the output of the user's function passed to `map` will be
3. when calling `rename_columns`, `remove_columns`, etc. because they rely on `map`
Things we can consider, for each point above:
1.a infer the type automatically from the first samples on the dataset using prefetching, when the dataset builder doesn't provide the `features`
2.a allow the user to specify the `features` as an argument to `map` (this would be consistent with the non-streaming API)
2.b prefetch the first output value to infer the type
3.a don't rely on `map` directly and reuse the previous `features` and rename/remove the corresponding ones
The thing is that prefetching can take a few seconds, while the operations above are instantaneous since no data is downloaded. Therefore I'm not sure whether this solution is worth it. Maybe prefetching could also be done only when explicitly requested by the user. A small illustration of cases 1 and 2 is below.
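As a sketch (assuming a local `data.csv` with a `text` column):
```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train", streaming=True)
print(ds.features)  # None: type inference only happens while iterating (case 1)

ds = ds.map(lambda example: {"text_length": len(example["text"])})
print(ds.features)  # still None: the output type of the mapped function is unknown (case 2)
```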
cc @mariosasko @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3888/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3887/comments | https://api.github.com/repos/huggingface/datasets/issues/3887/events | https://github.com/huggingface/datasets/pull/3887 | 1,165,380,852 | PR_kwDODunzps40PwqT | 3,887 | ImageFolder improvements | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3887). All of your documentation changes will be reflected on that endpoint."
] | 1,646,926,486,000 | 1,647,011,171,000 | 1,647,011,171,000 | CONTRIBUTOR | null | This PR adds the following improvements to the `imagefolder` dataset:
* skip the extract step for image files (as discussed in https://github.com/huggingface/datasets/pull/2830#discussion_r816817919)
* option to drop labels by setting `drop_labels=True` (useful for image pretraining, cc @NielsRogge). This is faster than loading a dataset and removing the `label` column because we don't need to iterate over the files to infer class labels. A usage sketch follows below.
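A minimal sketch of the new option (the directory path is illustrative):
```python
from datasets import load_dataset

# skip class label inference entirely, e.g. for self-supervised pretraining
ds = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=True)
``` | {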
"url": "https://api.github.com/repos/huggingface/datasets/issues/3887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3887/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3887",
"html_url": "https://github.com/huggingface/datasets/pull/3887",
"diff_url": "https://github.com/huggingface/datasets/pull/3887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3887.patch",
"merged_at": 1647011171000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3886/comments | https://api.github.com/repos/huggingface/datasets/issues/3886/events | https://github.com/huggingface/datasets/pull/3886 | 1,165,223,319 | PR_kwDODunzps40PO6W | 3,886 | Retry HfApi call inside push_to_hub when 504 error | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3886). All of your documentation changes will be reflected on that endpoint.",
"I made it more robust by increasing the wait time, and I also added some logs when a request is retried. Let me know if it's ok for you",
"At the end you did not set the agreed max value of 60s. \r\n\r\nMoreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of `base_wait_time` and `max_wait_time`.",
"Yea I thought that in total we could wait 1min, but if we have a max_wait_time of 20sec between each request it's fine IMO\r\n\r\n> Moreover, with the new numbers, there is a slight contradiction: although you set max_retries=5, we will only make 4 retries at most because of the combined values of base_wait_time and max_wait_time.\r\n\r\nWhat makes you think this ? If the exponential wait time becomes bigger than `max_wait_time` then it still does the retry, but after a wait time of `max_wait_time`",
"Sorry, I meant 4 retries **with exponential backoff**; the fifth one is with constant backoff.",
"OK, and one question: do you think that the retries do not affect the time the server needs to be operational again and able to process the request? I guess that if does not affect, then the cause are other users' requests, or others; not our specific request.\r\n\r\nJust to be sure: \r\n- Then 20s at most between consecutive requests do not impact the server.\r\n- And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.",
"> do you think that the retries do not affect the time the server needs for being able to process the request (I guess in this case the cause are other users' requests, or other causes; not our specific request).\r\n\r\nYes I don't think the retries would affect the server, I think the cause of the 504 errors is elsewhere\r\n\r\n> Just to be sure:\r\n>\r\n> Then 20s at most between consecutive requests do not impact the server.\r\n> And we expect after a total of 5 retries (within a total 50s of wait time + request processing/uploading time), the server should be able to come back to normality.\r\n\r\nYes I think it's fine for now, we can still adapt this later if needed",
"Will be curious to see the impact of this in terms of upload reliability! Don't forget to let us know when you have more data. cc @huggingface/moon-landing-back "
] | 1,646,918,680,000 | 1,647,421,256,000 | 1,647,361,190,000 | MEMBER | null | As suggested by @lhoestq in #3872, this PR:
- Implements a retry function
- Retries the HfApi call inside `push_to_hub` when a 504 error occurs (a sketch of the backoff logic is below). To be agreed:
- max_retries = 2 (at 0.5 and 1 seconds)
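A rough sketch of what the retry helper could look like (the name, defaults and logging are assumptions, not the final implementation):
```python
import logging
import time

import requests

logger = logging.getLogger(__name__)


def _retry(func, *args, max_retries=2, base_wait_time=0.5, max_wait_time=2.0, **kwargs):
    """Call `func`, retrying on HTTP 504 with capped exponential backoff."""
    retry = 0
    while True:
        try:
            return func(*args, **kwargs)
        except requests.exceptions.HTTPError as err:
            if err.response.status_code != 504 or retry >= max_retries:
                raise
            retry += 1
            sleep_time = min(max_wait_time, base_wait_time * 2 ** (retry - 1))
            logger.info(f"HTTP 504 from the Hub, retrying in {sleep_time}s [{retry}/{max_retries}]")
            time.sleep(sleep_time)
```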
Fix #3872. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3886/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3886",
"html_url": "https://github.com/huggingface/datasets/pull/3886",
"diff_url": "https://github.com/huggingface/datasets/pull/3886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3886.patch",
"merged_at": 1647361190000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3885/comments | https://api.github.com/repos/huggingface/datasets/issues/3885/events | https://github.com/huggingface/datasets/pull/3885 | 1,165,102,209 | PR_kwDODunzps40O00Z | 3,885 | Fix some shuffle docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3885). All of your documentation changes will be reflected on that endpoint."
] | 1,646,911,755,000 | 1,646,921,789,000 | 1,646,921,788,000 | MEMBER | null | Following #3842, some docs were still outdated (with `buffer_size` as the first argument); a keyword-based call is sketched below.
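Assuming the post-#3842 signature matches `Dataset.shuffle` (seed first), passing the arguments by keyword sidesteps the ordering question entirely (the dataset name is illustrative):
```python
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = ds.shuffle(seed=42, buffer_size=10_000)
``` | {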
"url": "https://api.github.com/repos/huggingface/datasets/issues/3885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3885/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3885",
"html_url": "https://github.com/huggingface/datasets/pull/3885",
"diff_url": "https://github.com/huggingface/datasets/pull/3885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3885.patch",
"merged_at": 1646921788000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3884/comments | https://api.github.com/repos/huggingface/datasets/issues/3884/events | https://github.com/huggingface/datasets/pull/3884 | 1,164,924,314 | PR_kwDODunzps40OPM9 | 3,884 | Fix bug in METEOR metric due to nltk version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3884). All of your documentation changes will be reflected on that endpoint."
] | 1,646,901,860,000 | 1,646,903,020,000 | 1,646,903,019,000 | MEMBER | null | Fix #3883. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3884/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3884",
"html_url": "https://github.com/huggingface/datasets/pull/3884",
"diff_url": "https://github.com/huggingface/datasets/pull/3884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3884.patch",
"merged_at": 1646903019000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3883/comments | https://api.github.com/repos/huggingface/datasets/issues/3883/events | https://github.com/huggingface/datasets/issues/3883 | 1,164,663,229 | I_kwDODunzps5Fa1m9 | 3,883 | The metric Meteor doesn't work for nltk ==3.6.4 | {
"login": "zhaowei-wang98",
"id": 22047467,
"node_id": "MDQ6VXNlcjIyMDQ3NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/22047467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhaowei-wang98",
"html_url": "https://github.com/zhaowei-wang98",
"followers_url": "https://api.github.com/users/zhaowei-wang98/followers",
"following_url": "https://api.github.com/users/zhaowei-wang98/following{/other_user}",
"gists_url": "https://api.github.com/users/zhaowei-wang98/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhaowei-wang98/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhaowei-wang98/subscriptions",
"organizations_url": "https://api.github.com/users/zhaowei-wang98/orgs",
"repos_url": "https://api.github.com/users/zhaowei-wang98/repos",
"events_url": "https://api.github.com/users/zhaowei-wang98/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhaowei-wang98/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zhaowei-wang98, thanks for reporting.\r\n\r\nWe are fixing it... "
] | 1,646,879,307,000 | 1,646,903,019,000 | 1,646,903,019,000 | NONE | null | ## Describe the bug
Using the metric Meteor with nltk == 3.6.4 gives a TypeError:
TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object
## Steps to reproduce the bug
```python
import datasets
metric = datasets.load_metric("meteor")
predictions = ["hello world"]
references = ["hello world"]
metric.compute(predictions=predictions, references=references)
```
## Expected results
No error, just a meteor score.
## Actual results
TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object
I think this TypeError happens because the input sentences are tokenized into lists of tokens and str.lower() is then applied to those lists. A workaround sketch is below.
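For reference, a workaround sketch that pre-tokenizes the inputs for newer NLTK versions (the 3.6.4 cutoff and the pre-tokenization requirement are assumptions based on this error; `punkt` must be available for `word_tokenize`):
```python
import nltk
from nltk.translate import meteor_score
from packaging import version

prediction, reference = "hello world", "hello world"
if version.parse(nltk.__version__) >= version.parse("3.6.4"):
    # newer nltk seems to expect pre-tokenized sentences (lists of tokens)
    score = meteor_score.single_meteor_score(
        nltk.word_tokenize(reference), nltk.word_tokenize(prediction)
    )
else:
    score = meteor_score.single_meteor_score(reference, prediction)
print(score)
```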
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: linux
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3883/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3882/comments | https://api.github.com/repos/huggingface/datasets/issues/3882/events | https://github.com/huggingface/datasets/pull/3882 | 1,164,595,388 | PR_kwDODunzps40NKz7 | 3,882 | Image process doc | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3882). All of your documentation changes will be reflected on that endpoint."
] | 1,646,872,330,000 | 1,647,357,856,000 | 1,647,357,849,000 | MEMBER | null | This PR is a first draft of how to process image data. It adds:
- Load an image dataset with `image` and `path` (adds tip about `decode=False` param to access the path and bytes, thanks to @mariosasko).
- Load an image using the `ImageFolder` builder. I know there is an [example](https://huggingface.co/docs/datasets/master/en/loading#image-folders) of this already, but I also wanted to add it here so users don't miss it. This doc seems important for centralizing all of the image-related things so far. Datasets has grown so quickly now that I think maybe splitting up the How-to guides by modality may be better since working with vision/audio data is slightly different from what users have seen up until now. This way we can continue to scale the docs to better accommodate vision/audio things.
- Add a data augmentation with `set_transform`. There is only 1 example here so far, but we can certainly add more; a sketch of that kind of transform is below.
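Something along these lines (the specific dataset and torchvision transforms are placeholders, not necessarily what the doc uses):
```python
from datasets import load_dataset
from torchvision.transforms import ColorJitter, Compose, ToTensor

dataset = load_dataset("beans", split="train")
jitter = Compose([ColorJitter(brightness=0.5, hue=0.5), ToTensor()])

def transforms(examples):
    # applied on the fly whenever examples are accessed
    examples["pixel_values"] = [jitter(image.convert("RGB")) for image in examples["image"]]
    return examples

dataset.set_transform(transforms)
```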
Todo:
- [x] Couldn't figure out why my augmentation function works with `set_transform` but not `map`. Working with @mariosasko on this! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3882/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3882/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3882",
"html_url": "https://github.com/huggingface/datasets/pull/3882",
"diff_url": "https://github.com/huggingface/datasets/pull/3882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3882.patch",
"merged_at": 1647357849000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3881/comments | https://api.github.com/repos/huggingface/datasets/issues/3881/events | https://github.com/huggingface/datasets/issues/3881 | 1,164,452,005 | I_kwDODunzps5FaCCl | 3,881 | How to use Image folder | {
"login": "INF800",
"id": 45640029,
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/INF800",
"html_url": "https://github.com/INF800",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"repos_url": "https://api.github.com/users/INF800/repos",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Even this from docs throw same error\r\n```\r\ndataset = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\n\r\n```",
"Hi @INF800,\r\n\r\nPlease note that the `imagefolder` feature enhancement was just recently merged to our master branch (https://github.com/huggingface/datasets/commit/207be676bffe9d164740a41a883af6125edef135), but has not yet been released.\r\n\r\nWe are planning to make the 2.0 release of our library in the coming days and then that feature will be available by updating your `datasets` library from PyPI.\r\n\r\nIn the meantime, you can incorporate that feature if you install our library from our GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\n\r\nThen:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"imagefolder\", data_files=\"https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip\", split=\"train\")\r\nUsing custom data configuration default-7eb4e80d960deb18\r\nDownloading and preparing dataset image_folder/default to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60...\r\nDownloading data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 690.19it/s]\r\nExtracting data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 852.85it/s]\r\nDataset image_folder downloaded and prepared to .../.cache/huggingface/datasets/image_folder/default-7eb4e80d960deb18/0.0.0/8de8dc6d68ce3c81cc102b93cc82ede27162b5d30cd003094f935942c8294f60. Subsequent calls will reuse this data.\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDataset({\r\n features: ['image', 'label'],\r\n num_rows: 25000\r\n})\r\n```",
"Hey @albertvillanova. Does this load entire dataset in memory? Because I am facing huge trouble with loading very big datasets (OOM errors)",
"Can you provide the error stack trace? The loader only stores the `data_files` dict, which can get big after globbing. Then, the OOM error would mean you don't have enough memory to keep all the paths to the image files. You can circumvent this by generating an archive and loading the dataset from there. Maybe we can optimize the globbing part in our data files resolution at some point, cc @lhoestq for visibility.",
"Hey, memory error is resolved. It was fluke.\r\n\r\nBut there is another issue. Currently `load_dataset(\"imagefolder\", data_dir=\"./path/to/train\",)` takes only `train` as arg to `split` parameter.\r\n\r\nI am creating vaildation dataset using\r\n\r\n```\r\nds_valid = datasets.DatasetDict(valid=load_dataset(\"imagefolder\", data_dir=\"./path/to/valid\",)['train'])\r\n```",
"`data_dir=\"path/to/folder\"` is a shorthand syntax fox `data_files={\"train\": \"path/to/folder/**\"}`, so use `data_files` in that case instead:\r\n```python\r\nds = load_dataset(\"imagefolder\", data_files={\"train\": \"path/to/train/**\", \"test\": \"path/to/test/**\", \"valid\": \"path/to/valid/**\"})\r\n```",
"And there was another issue. I loaded black and white images (jpeg file). Using load dataset. It reads it as PIL jpeg data format. But instead of converting it into 3 channel tensor, input to collator function is coming as a single channel tensor.",
"We don't apply any additional preprocessing on top of `PIL.Image.open(image_file)`, so you need to do the conversion yourself:\r\n\r\n```python\r\ndef to_rgb(batch):\r\n batch[\"image\"] = [img.convert(\"RGB\") for img in batch[\"image\"]]\r\n return batch\r\n\r\nds_rgb = ds.map(to_rgb, batched=True)\r\n```\r\n\r\nPlease use our Forum for questions of this kind in the future."
] | 1,646,860,732,000 | 1,646,988,352,000 | 1,646,988,352,000 | NONE | null | Ran this code
```
load_dataset("imagefolder", data_dir="./my-dataset")
```
`https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py` is missing:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/tmp/ipykernel_33/1648737256.py in <module>
----> 1 load_dataset("imagefolder", data_dir="./my-dataset")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1684 revision=revision,
1685 use_auth_token=use_auth_token,
-> 1686 **config_kwargs,
1687 )
1688
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, script_version, **config_kwargs)
1511 download_config.use_auth_token = use_auth_token
1512 dataset_module = dataset_module_factory(
-> 1513 path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
1514 )
1515
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_files, **download_kwargs)
1200 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1201 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1202 ) from None
1203 raise e1 from None
1204 else:
FileNotFoundError: Couldn't find a dataset script at /kaggle/working/imagefolder/imagefolder.py or any data file in the same directory. Couldn't find 'imagefolder' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/imagefolder/imagefolder.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3881/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3880/comments | https://api.github.com/repos/huggingface/datasets/issues/3880/events | https://github.com/huggingface/datasets/pull/3880 | 1,164,406,008 | PR_kwDODunzps40MjM3 | 3,880 | Change the framework switches to the new syntax | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3880). All of your documentation changes will be reflected on that endpoint."
] | 1,646,857,750,000 | 1,647,353,608,000 | 1,647,353,607,000 | MEMBER | null | This PR updates the syntax of the framework-specific code samples. With this new syntax, you'll be able to:
- have paragraphs of text be framework-specific instead of just code samples
- have support for Flax code samples if you want (a sketch of the new syntax is below).
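If it follows the doc-builder convention, the new syntax presumably looks something like this (unverified sketch):
```
<frameworkcontent>
<pt>
PyTorch-specific paragraphs and code samples go here.
</pt>
<tf>
TensorFlow-specific paragraphs and code samples go here.
</tf>
</frameworkcontent>
```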
This should be merged after https://github.com/huggingface/doc-builder/pull/63 and https://github.com/huggingface/doc-builder/pull/130 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3880/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3880",
"html_url": "https://github.com/huggingface/datasets/pull/3880",
"diff_url": "https://github.com/huggingface/datasets/pull/3880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3880.patch",
"merged_at": 1647353607000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3879/comments | https://api.github.com/repos/huggingface/datasets/issues/3879/events | https://github.com/huggingface/datasets/pull/3879 | 1,164,311,612 | PR_kwDODunzps40MP7f | 3,879 | SQuAD v2 metric: create README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3879). All of your documentation changes will be reflected on that endpoint."
] | 1,646,851,676,000 | 1,646,930,939,000 | 1,646,930,939,000 | CONTRIBUTOR | null | Proposing SQuAD v2 metric card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3879/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3879",
"html_url": "https://github.com/huggingface/datasets/pull/3879",
"diff_url": "https://github.com/huggingface/datasets/pull/3879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3879.patch",
"merged_at": 1646930938000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3878/comments | https://api.github.com/repos/huggingface/datasets/issues/3878/events | https://github.com/huggingface/datasets/pull/3878 | 1,164,305,335 | PR_kwDODunzps40MOpn | 3,878 | Update cats_vs_dogs size | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3878). All of your documentation changes will be reflected on that endpoint.",
"Maybe `NonMatchingSplitsSizesError` errors should also tell the user to try using a more recent version of the dataset to get the fixes ?",
"@lhoestq Good idea. Will open a new PR to improve the error messages of NonMatchingSplitsSizesError, NonMatchingChecksumsError, ..."
] | 1,646,851,256,000 | 1,646,922,084,000 | 1,646,922,083,000 | CONTRIBUTOR | null | It seems like 12 new examples have been added to the `cats_vs_dogs` dataset. This PR updates the size in the card and the info file to avoid a verification error (reported by @stevhliu). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3878/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3878/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3878",
"html_url": "https://github.com/huggingface/datasets/pull/3878",
"diff_url": "https://github.com/huggingface/datasets/pull/3878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3878.patch",
"merged_at": 1646922083000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3877/comments | https://api.github.com/repos/huggingface/datasets/issues/3877/events | https://github.com/huggingface/datasets/issues/3877 | 1,164,146,311 | I_kwDODunzps5FY3aH | 3,877 | Align metadata to DCAT/DCAT-AP | {
"login": "EmidioStani",
"id": 278367,
"node_id": "MDQ6VXNlcjI3ODM2Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/278367?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EmidioStani",
"html_url": "https://github.com/EmidioStani",
"followers_url": "https://api.github.com/users/EmidioStani/followers",
"following_url": "https://api.github.com/users/EmidioStani/following{/other_user}",
"gists_url": "https://api.github.com/users/EmidioStani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/EmidioStani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EmidioStani/subscriptions",
"organizations_url": "https://api.github.com/users/EmidioStani/orgs",
"repos_url": "https://api.github.com/users/EmidioStani/repos",
"events_url": "https://api.github.com/users/EmidioStani/events{/privacy}",
"received_events_url": "https://api.github.com/users/EmidioStani/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,646,842,345,000 | 1,646,843,622,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Align the dataset metadata to DCAT to describe datasets.
**Describe the solution you'd like**
Reuse terms and structure from DCAT in the metadata file, ideally generating a DCAT-compliant JSON-LD file (a minimal sketch follows this record).
**Describe alternatives you've considered**
**Additional context**
DCAT is a W3C standard, extended in Europe as DCAT-AP; for example, data.europa.eu publishes dataset metadata in DCAT-AP.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3877/timeline | null | null | null | null | false |
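A minimal sketch of what a DCAT-compliant JSON-LD record could look like for a dataset, written here as a Python dict. All field values are placeholders; only the `dcat`/`dct` vocabulary follows the W3C DCAT recommendation.

```python
# Hypothetical JSON-LD metadata for a Hub dataset, using DCAT vocabulary.
dcat_record = {
    "@context": {
        "dcat": "http://www.w3.org/ns/dcat#",
        "dct": "http://purl.org/dc/terms/",
    },
    "@type": "dcat:Dataset",
    "dct:title": "my-dataset",  # placeholder title
    "dct:description": "A short description of the dataset.",
    "dcat:keyword": ["nlp", "text-classification"],
    "dct:license": "https://creativecommons.org/licenses/by/4.0/",
    "dcat:landingPage": "https://huggingface.co/datasets/user/my-dataset",
}
```

DCAT-AP would additionally constrain which properties are mandatory (e.g. a `dcat:Distribution` per downloadable file), which is where the alignment work requested above would come in.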
https://api.github.com/repos/huggingface/datasets/issues/3876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3876/comments | https://api.github.com/repos/huggingface/datasets/issues/3876/events | https://github.com/huggingface/datasets/pull/3876 | 1,164,045,075 | PR_kwDODunzps40LYC8 | 3,876 | Fix download_mode in dataset_module_factory | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3876). All of your documentation changes will be reflected on that endpoint."
] | 1,646,837,673,000 | 1,646,902,020,000 | 1,646,902,019,000 | MEMBER | null | Fix `download_mode` value set in `dataset_module_factory`.
Before the fix, it was a plain `bool` (defaulting to `False`).
Also properly set its default value in all public functions (a usage sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3876/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3876",
"html_url": "https://github.com/huggingface/datasets/pull/3876",
"diff_url": "https://github.com/huggingface/datasets/pull/3876.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3876.patch",
"merged_at": 1646902019000
} | true |
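A hedged usage sketch of the corrected parameter, assuming the `DownloadMode` enum exported by `datasets` at the time; `"squad"` is only an illustrative dataset name.

```python
from datasets import DownloadMode, load_dataset

# After the fix, download_mode carries a DownloadMode value
# (default REUSE_DATASET_IF_EXISTS) instead of a bare bool.
ds = load_dataset("squad", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```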
https://api.github.com/repos/huggingface/datasets/issues/3875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3875/comments | https://api.github.com/repos/huggingface/datasets/issues/3875/events | https://github.com/huggingface/datasets/pull/3875 | 1,164,029,673 | PR_kwDODunzps40LUuw | 3,875 | Module namespace cleanup for v2.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"will it solve https://github.com/huggingface/datasets-preview-backend/blob/4c542a74244045929615640ccbba5a902c344c5a/pyproject.toml#L85-L89?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3875). All of your documentation changes will be reflected on that endpoint.",
"@severo No, this PR doesn't fix that issue in the current state. We can fix it by adding `__all__` to `datasets/__init__.py` and `datasets/formatting/__init__.py`. However, this would require updating `__all__` for each new function/class definition, which could become cumbersome, and we can't do this dynamically because `mypy` is a static type checker.\r\n\r\n@lhoestq @albertvillanova WDYT?",
"Feel free to merge this one if it's good for you :)"
] | 1,646,836,987,000 | 1,647,013,326,000 | 1,647,013,325,000 | CONTRIBUTOR | null | This is an attempt to make the user-facing `datasets`' submodule namespace cleaner:
In particular, this PR does the following:
* removes the unused `zip_nested` and `flatten_nest_dict` and their accompanying tests
* removes `pyarrow` from the top-level namespace
* properly uses `__all__` and the `from <module> import *` syntax to avoid importing the `<module>`'s submodules
* cleans up the `utils` namespace
* moves the `temp_seed` context manager from `datasets/utils/file_utils.py` to `datasets/utils/py_utils.py` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3875/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3875",
"html_url": "https://github.com/huggingface/datasets/pull/3875",
"diff_url": "https://github.com/huggingface/datasets/pull/3875.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3875.patch",
"merged_at": 1647013325000
} | true |
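A minimal sketch of the `__all__` pattern debated in the comments above. The imports and exported names are an illustrative subset, not the actual contents of `datasets/__init__.py`.

```python
# datasets/__init__.py (illustrative excerpt)
from .arrow_dataset import Dataset
from .dataset_dict import DatasetDict
from .load import load_dataset

# `from datasets import *` and static checkers such as mypy treat only
# the names listed here as public; keeping this list in sync with every
# new definition is the maintenance cost mentioned in the comments.
__all__ = ["Dataset", "DatasetDict", "load_dataset"]
```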
https://api.github.com/repos/huggingface/datasets/issues/3874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3874/comments | https://api.github.com/repos/huggingface/datasets/issues/3874/events | https://github.com/huggingface/datasets/pull/3874 | 1,164,013,511 | PR_kwDODunzps40LRYD | 3,874 | add MSE and MAE metrics - V2 | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@mariosasko New PR here. I'm not sure how to add you as a co-author here. Also I see flake8 tests are failing, any inputs on how to resolve this ?\r\nAlso, let me know if any other changes are required. Thank you.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3874). All of your documentation changes will be reflected on that endpoint.",
"Great. Thank you.",
"Thanks so much for this π π― "
] | 1,646,836,216,000 | 1,646,846,442,000 | 1,646,846,300,000 | CONTRIBUTOR | null | Created a new pull request to resolve unrelated changes in PR caused due to rebasing.
Ref Older PR : [#3845](https://github.com/huggingface/datasets/pull/3845)
Feature request here : Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3874/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3874",
"html_url": "https://github.com/huggingface/datasets/pull/3874",
"diff_url": "https://github.com/huggingface/datasets/pull/3874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3874.patch",
"merged_at": 1646846300000
} | true |
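A usage sketch for the two new regression metrics, assuming they follow the standard `load_metric(...).compute(...)` interface and return `{"mse": ...}` / `{"mae": ...}` dictionaries.

```python
from datasets import load_metric

mse = load_metric("mse")
mae = load_metric("mae")

predictions = [2.5, 0.0, 2.0, 8.0]  # toy regression outputs
references = [3.0, -0.5, 2.0, 7.0]  # toy ground-truth values

print(mse.compute(predictions=predictions, references=references))
print(mae.compute(predictions=predictions, references=references))
```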
https://api.github.com/repos/huggingface/datasets/issues/3873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3873/comments | https://api.github.com/repos/huggingface/datasets/issues/3873/events | https://github.com/huggingface/datasets/pull/3873 | 1,163,961,578 | PR_kwDODunzps40LGoV | 3,873 | Create SQuAD metric README.md | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3873). All of your documentation changes will be reflected on that endpoint.",
"Oh one last thing I almost forgot, I think I would add a section \"Examples\" with examples of inputs and outputs and in particular: an example giving maximal values, an examples giving minimal values and maybe a standard examples from SQuAD. What do you think?"
] | 1,646,833,628,000 | 1,646,930,757,000 | 1,646,930,757,000 | CONTRIBUTOR | null | Proposal for a metrics card structure (with an example based on the SQuAD metric).
@thomwolf @lhoestq @douwekiela @lewtun -- feel free to comment on structure or content (it's an initial draft, so I realize there's stuff missing!). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3873/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3873",
"html_url": "https://github.com/huggingface/datasets/pull/3873",
"diff_url": "https://github.com/huggingface/datasets/pull/3873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3873.patch",
"merged_at": 1646930757000
} | true |
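The review comment above asks for input/output examples; here is a hedged sketch using the SQuAD metric's prediction/reference format (the `id` value is arbitrary, and the printed result is what an exact match should yield).

```python
from datasets import load_metric

squad_metric = load_metric("squad")

predictions = [{"id": "q1", "prediction_text": "1976"}]
references = [
    {"id": "q1", "answers": {"text": ["1976"], "answer_start": [97]}}
]

print(squad_metric.compute(predictions=predictions, references=references))
# expected for an exact match: {'exact_match': 100.0, 'f1': 100.0}
```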
https://api.github.com/repos/huggingface/datasets/issues/3872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3872/comments | https://api.github.com/repos/huggingface/datasets/issues/3872/events | https://github.com/huggingface/datasets/issues/3872 | 1,163,853,026 | I_kwDODunzps5FXvzi | 3,872 | HTTP error 504 Server Error: Gateway Time-out | {
"login": "illiyas-sha",
"id": 83509215,
"node_id": "MDQ6VXNlcjgzNTA5MjE1",
"avatar_url": "https://avatars.githubusercontent.com/u/83509215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/illiyas-sha",
"html_url": "https://github.com/illiyas-sha",
"followers_url": "https://api.github.com/users/illiyas-sha/followers",
"following_url": "https://api.github.com/users/illiyas-sha/following{/other_user}",
"gists_url": "https://api.github.com/users/illiyas-sha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/illiyas-sha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/illiyas-sha/subscriptions",
"organizations_url": "https://api.github.com/users/illiyas-sha/orgs",
"repos_url": "https://api.github.com/users/illiyas-sha/repos",
"events_url": "https://api.github.com/users/illiyas-sha/events{/privacy}",
"received_events_url": "https://api.github.com/users/illiyas-sha/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"is pushing directly with git (and git-lfs) an option for you?",
"I have installed git-lfs and doing this push with that\r\n",
"yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?",
"Okay. I didnt saved the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving those dataset to my local machine by `save_to_disk` and then push it with git command line",
"cc @lhoestq @albertvillanova @LysandreJik because maybe I'm giving dumb advice here π
",
"`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to workaround 504 errors.\r\n\r\nRegarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`."
] | 1,646,827,417,000 | 1,647,361,190,000 | 1,647,361,190,000 | NONE | null | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me resolve this issue?
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3872/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3871/comments | https://api.github.com/repos/huggingface/datasets/issues/3871/events | https://github.com/huggingface/datasets/pull/3871 | 1,163,714,113 | PR_kwDODunzps40KRcM | 3,871 | add pandas to env command | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3871). All of your documentation changes will be reflected on that endpoint.",
"Think failures are unrelated - feel free to merge whenever you want :-)"
] | 1,646,819,331,000 | 1,646,824,898,000 | 1,646,824,897,000 | MEMBER | null | Pandas is a required packages and used quite a bit. I don't see any downside with adding its version to the `datasets-cli env` command. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3871/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3871",
"html_url": "https://github.com/huggingface/datasets/pull/3871",
"diff_url": "https://github.com/huggingface/datasets/pull/3871.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3871.patch",
"merged_at": 1646824897000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3870/comments | https://api.github.com/repos/huggingface/datasets/issues/3870/events | https://github.com/huggingface/datasets/pull/3870 | 1,163,633,239 | PR_kwDODunzps40KAYy | 3,870 | Add wikitablequestions dataset | {
"login": "SivilTaram",
"id": 10275209,
"node_id": "MDQ6VXNlcjEwMjc1MjA5",
"avatar_url": "https://avatars.githubusercontent.com/u/10275209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SivilTaram",
"html_url": "https://github.com/SivilTaram",
"followers_url": "https://api.github.com/users/SivilTaram/followers",
"following_url": "https://api.github.com/users/SivilTaram/following{/other_user}",
"gists_url": "https://api.github.com/users/SivilTaram/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SivilTaram/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SivilTaram/subscriptions",
"organizations_url": "https://api.github.com/users/SivilTaram/orgs",
"repos_url": "https://api.github.com/users/SivilTaram/repos",
"events_url": "https://api.github.com/users/SivilTaram/events{/privacy}",
"received_events_url": "https://api.github.com/users/SivilTaram/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Would you mind reviewing it when you're available? Thanks!\r\n",
"> Awesome thanks for adding this dataset ! :) The dataset script and dataset cards look pretty good\r\n> \r\n> It looks like your `dummy_data.zip` files are quite big though (>1MB each), do you think we can reduce their sizes ? This way this git repository doesn't become too big\r\n\r\nI have manually reduced the `dummy_data.zip` and its current size is about 54KB. Hope it is fine for you!",
"@lhoestq I think the dataset is ready to merge now. Any follow-up question is welcome :-D",
"> Thanks ! It looks all good now :)\r\n\r\nAwesome! Thanks for your quick response!"
] | 1,646,814,463,000 | 1,647,256,764,000 | 1,647,256,579,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3870/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3870",
"html_url": "https://github.com/huggingface/datasets/pull/3870",
"diff_url": "https://github.com/huggingface/datasets/pull/3870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3870.patch",
"merged_at": 1647256579000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3869/comments | https://api.github.com/repos/huggingface/datasets/issues/3869/events | https://github.com/huggingface/datasets/issues/3869 | 1,163,434,800 | I_kwDODunzps5FWJsw | 3,869 | Making the Hub the place for datasets in Portuguese | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Hi @omarespejel! I think the philosophy for `datasets` issues is to create concrete issues with proposals to add a specific, individual dataset rather than umbrella issues for things such as datasets for a language, since we could end up with hundreds of issues (one per language). I see NILC - USP has many datasets, I would suggest to either create an issue for their datasets, or even better, we are trying to push to upload datasets as community datasets instead of adding them to the core library as guided in https://huggingface.co/docs/datasets/share. That would have the additional benefit that the dataset would live under the NILC organization.\r\n\r\n@lhoestq correct me if I'm wrong please π "
] | 1,646,795,178,000 | 1,646,816,649,000 | null | NONE | null | Let's make Hugging Face Datasets the central hub for datasets in Portuguese :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the Portuguese speaking community.
What are some datasets in Portuguese worth integrating into the Hugging Face hub?
Special thanks to @augusnunes for his collaboration on identifying the first ones:
- [NILC - USP](http://www.nilc.icmc.usp.br/nilc/index.php/tools-and-resources).
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
cc @osanseviero
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3869/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3868/comments | https://api.github.com/repos/huggingface/datasets/issues/3868/events | https://github.com/huggingface/datasets/pull/3868 | 1,162,914,114 | PR_kwDODunzps40HnWA | 3,868 | Ignore duplicate keys if `ignore_verifications=True` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3868). All of your documentation changes will be reflected on that endpoint.",
"Cool thanks ! Could you add a test please ?"
] | 1,646,759,696,000 | 1,646,833,845,000 | 1,646,833,844,000 | CONTRIBUTOR | null | Currently, it's impossible to generate a dataset if some keys from `_generate_examples` are duplicated. This PR allows skipping the check for duplicate keys if `ignore_verifications` is set to `True`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3868/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3868",
"html_url": "https://github.com/huggingface/datasets/pull/3868",
"diff_url": "https://github.com/huggingface/datasets/pull/3868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3868.patch",
"merged_at": 1646833844000
} | true |
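A hedged one-liner showing the new escape hatch; `"my_dataset_script"` stands in for any loading script whose `_generate_examples` yields duplicate keys.

```python
from datasets import load_dataset

# Skips verification steps, including the duplicate-key check
# performed on the keys yielded by _generate_examples.
ds = load_dataset("my_dataset_script", ignore_verifications=True)
```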
https://api.github.com/repos/huggingface/datasets/issues/3867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3867/comments | https://api.github.com/repos/huggingface/datasets/issues/3867/events | https://github.com/huggingface/datasets/pull/3867 | 1,162,896,605 | PR_kwDODunzps40Hjrk | 3,867 | Update for the rename doc-builder -> hf-doc-utils | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"why utils? it's a builder no?",
"~~@julien-c there was a vote π https://huggingface.slack.com/archives/C021H1P1HKR/p1646405136644739~~\r\n\r\noh I see you already commeented in the thread as well",
"Thanks ! It looks all good to me (provided `hf-doc-utils` is the name we keep in the end). I'm fine with this name, and `hf-doc-builder` is also fine IMHO",
"ok, this is definitely not a hill I'll die on =) @mishig25 @sgugger "
] | 1,646,758,705,000 | 1,646,760,645,000 | 1,646,760,645,000 | MEMBER | null | This PR adapts the job to the upcoming change of name of `doc-builder`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3867/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3867",
"html_url": "https://github.com/huggingface/datasets/pull/3867",
"diff_url": "https://github.com/huggingface/datasets/pull/3867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3867.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3866/comments | https://api.github.com/repos/huggingface/datasets/issues/3866/events | https://github.com/huggingface/datasets/pull/3866 | 1,162,833,848 | PR_kwDODunzps40HWcu | 3,866 | Bring back imgs so that forks don't get broken | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3866). All of your documentation changes will be reflected on that endpoint.",
"I think we just need to keep `datasets_logo_name.jpg` and `course_banner.png` because they appear in the README.md of the forks of `datasets`. The other images can be removed",
"Force pushed those two imgs only"
] | 1,646,755,291,000 | 1,646,761,022,000 | 1,646,761,021,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3866/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3866",
"html_url": "https://github.com/huggingface/datasets/pull/3866",
"diff_url": "https://github.com/huggingface/datasets/pull/3866.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3866.patch",
"merged_at": 1646761021000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3865/comments | https://api.github.com/repos/huggingface/datasets/issues/3865/events | https://github.com/huggingface/datasets/pull/3865 | 1,162,821,908 | PR_kwDODunzps40HT9K | 3,865 | Add logo img | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3865). All of your documentation changes will be reflected on that endpoint.",
"Superceded by https://github.com/huggingface/datasets/pull/3866"
] | 1,646,754,659,000 | 1,646,755,319,000 | 1,646,755,319,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3865/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3865",
"html_url": "https://github.com/huggingface/datasets/pull/3865",
"diff_url": "https://github.com/huggingface/datasets/pull/3865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3865.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3864/comments | https://api.github.com/repos/huggingface/datasets/issues/3864/events | https://github.com/huggingface/datasets/pull/3864 | 1,162,804,942 | PR_kwDODunzps40HQZ_ | 3,864 | Update image dataset tags | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3864). All of your documentation changes will be reflected on that endpoint."
] | 1,646,753,792,000 | 1,646,759,087,000 | 1,646,759,086,000 | CONTRIBUTOR | null | Align the existing image datasets' tags with new tags introduced in #3800. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3864/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3864",
"html_url": "https://github.com/huggingface/datasets/pull/3864",
"diff_url": "https://github.com/huggingface/datasets/pull/3864.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3864.patch",
"merged_at": 1646759086000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3863/comments | https://api.github.com/repos/huggingface/datasets/issues/3863/events | https://github.com/huggingface/datasets/pull/3863 | 1,162,802,857 | PR_kwDODunzps40HP-A | 3,863 | Update code blocks | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3863). All of your documentation changes will be reflected on that endpoint."
] | 1,646,753,683,000 | 1,646,844,330,000 | 1,646,844,329,000 | MEMBER | null | Following https://github.com/huggingface/datasets/pull/3860#issuecomment-1061756712 and https://github.com/huggingface/datasets/pull/3690 we need to update the code blocks to use markdown instead of sphinx | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3863/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3863",
"html_url": "https://github.com/huggingface/datasets/pull/3863",
"diff_url": "https://github.com/huggingface/datasets/pull/3863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3863.patch",
"merged_at": 1646844329000
} | true |
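A toy sketch of the conversion this PR applies to the docs — turning Sphinx `.. code-block:: python` directives into Markdown fences. This regex-based helper is invented for illustration (the PR edits the files directly) and only handles the simplest, four-space-indented case.

```python
import re

FENCE = "`" * 3  # built dynamically to avoid a literal fence in this snippet

def sphinx_to_markdown(text: str) -> str:
    # Match the directive, a blank line, then the indented code body.
    pattern = re.compile(r"\.\. code-block:: python\n\n((?:    .+\n?)+)")

    def repl(match):
        body = "\n".join(line[4:] for line in match.group(1).splitlines())
        return f"{FENCE}python\n{body}\n{FENCE}\n"

    return pattern.sub(repl, text)

print(sphinx_to_markdown('.. code-block:: python\n\n    ds = load_dataset("squad")\n'))
```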
https://api.github.com/repos/huggingface/datasets/issues/3862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3862/comments | https://api.github.com/repos/huggingface/datasets/issues/3862/events | https://github.com/huggingface/datasets/pull/3862 | 1,162,753,733 | PR_kwDODunzps40HFht | 3,862 | Manipulate columns on IterableDataset (rename columns, cast, etc.) | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3862). All of your documentation changes will be reflected on that endpoint.",
"> IIUC we check if columns are present/not present directly in the yielded examples and not in info.features because info.features can be None (after map, for instance)?\r\n\r\nYes exactly\r\n\r\n> We should develop a solution that ensures info.features is never None. For example, one approach would be to infer them from examples in map and make them promotable from Value(\"null\") to a specific type, in case of None values.\r\n\r\nI agree this would be useful. Though inferring the type requires to start streaming some data, which takes a few seconds (compared to being instantaneous right now).\r\n\r\nLet's discuss this in a new issue maybe ?"
] | 1,646,751,237,000 | 1,646,930,422,000 | 1,646,930,421,000 | MEMBER | null | I added:
- add_column
- cast
- rename_column
- rename_columns
related to https://github.com/huggingface/datasets/issues/3444
TODO:
- [x] docs
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3862/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3862",
"html_url": "https://github.com/huggingface/datasets/pull/3862",
"diff_url": "https://github.com/huggingface/datasets/pull/3862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3862.patch",
"merged_at": 1646930421000
} | true |
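A streaming sketch of the new column operations. `"imdb"` and its columns are illustrative; `cast` assumes the full features are known (they can be `None` after `map`, as discussed in the comments above).

```python
from datasets import Features, Value, load_dataset

# streaming=True returns an IterableDataset
ds = load_dataset("imdb", split="train", streaming=True)

ds = ds.rename_column("text", "review")          # rename one column
ds = ds.rename_columns({"review": "document"})   # rename via a mapping
ds = ds.cast(Features({"document": Value("string"), "label": Value("int64")}))

print(next(iter(ds)))
```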
https://api.github.com/repos/huggingface/datasets/issues/3861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3861/comments | https://api.github.com/repos/huggingface/datasets/issues/3861/events | https://github.com/huggingface/datasets/issues/3861 | 1,162,702,044 | I_kwDODunzps5FTWzc | 3,861 | big_patent cased version | {
"login": "slvcsl",
"id": 25265140,
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slvcsl",
"html_url": "https://github.com/slvcsl",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,646,748,535,000 | 1,646,748,595,000 | null | NONE | null | Hi! I am interested in working with the big_patent dataset.
In Tensorflow, there are a number of versions of the dataset:
- 1.0.0 : lower cased tokenized words
- 2.0.0 : Update to use cased raw strings
- 2.1.2 (default): Fix update to cased raw strings.
The version in the Hugging Face `datasets` library is 1.0.0. I would be very interested in using the cased 2.1.2 version (used more recently, for example in the Pegasus paper), but it does not seem to be supported (I tried using the `revision` parameter in `load_dataset`; a sketch follows this record). Is there a way to load it already, or would it be possible to add that version? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3861/timeline | null | null | null | null | false |
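For reference, a hedged sketch of the attempt described above. `revision` pins a git revision of the loading script rather than selecting a TFDS data version, which is presumably why it did not yield the cased 2.1.2 data.

```python
from datasets import load_dataset

# What was tried: this pins a script revision, not the TFDS release,
# so it does not switch to the cased raw strings of version 2.1.2.
ds = load_dataset("big_patent", "g", revision="2.1.2")
```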
https://api.github.com/repos/huggingface/datasets/issues/3860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3860/comments | https://api.github.com/repos/huggingface/datasets/issues/3860/events | https://github.com/huggingface/datasets/pull/3860 | 1,162,623,329 | PR_kwDODunzps40GpzZ | 3,860 | Small doc fixes | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.",
"There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping)) directives in our codebase, so maybe we can remove those as well as part of this PR."
] | 1,646,744,139,000 | 1,646,761,033,000 | 1,646,761,033,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3860/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3860",
"html_url": "https://github.com/huggingface/datasets/pull/3860",
"diff_url": "https://github.com/huggingface/datasets/pull/3860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3860.patch",
"merged_at": 1646761033000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3859/comments | https://api.github.com/repos/huggingface/datasets/issues/3859/events | https://github.com/huggingface/datasets/issues/3859 | 1,162,559,333 | I_kwDODunzps5FSz9l | 3,859 | Unable to download big_patent (FileNotFoundError) | {
"login": "slvcsl",
"id": 25265140,
"node_id": "MDQ6VXNlcjI1MjY1MTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25265140?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slvcsl",
"html_url": "https://github.com/slvcsl",
"followers_url": "https://api.github.com/users/slvcsl/followers",
"following_url": "https://api.github.com/users/slvcsl/following{/other_user}",
"gists_url": "https://api.github.com/users/slvcsl/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slvcsl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slvcsl/subscriptions",
"organizations_url": "https://api.github.com/users/slvcsl/orgs",
"repos_url": "https://api.github.com/users/slvcsl/repos",
"events_url": "https://api.github.com/users/slvcsl/events{/privacy}",
"received_events_url": "https://api.github.com/users/slvcsl/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @slvcsl, thanks for reporting.\r\n\r\nYesterday we just made a patch release of our `datasets` library that fixes this issue: version 1.18.4.\r\nhttps://pypi.org/project/datasets/#history\r\n\r\nPlease, feel free to update `datasets` library to the latest version: \r\n```shell\r\npip install -U datasets\r\n```\r\nAnd then you should force redownload of the data file to update your local cache: \r\n```python\r\nds = load_dataset(\"big_patent\", \"g\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\n- Note that before the fix, you just downloaded and cached the Google Drive virus scan warning page, instead of the data file\r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe already fixed it. See:\r\n- #3787 \r\n"
] | 1,646,740,032,000 | 1,646,744,649,000 | 1,646,744,644,000 | NONE | null | ## Describe the bug
I am trying to download some splits of the big_patent dataset, using the following code:
`ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
`
However, this leads to a FileNotFoundError.
FileNotFoundError Traceback (most recent call last)
[<ipython-input-3-8d8a745706a9>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
----> 2 ds = load_dataset("big_patent", "g", split="validation", download_mode="force_redownload")
8 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1705 ignore_verifications=ignore_verifications,
1706 try_from_hf_gcs=try_from_hf_gcs,
-> 1707 use_auth_token=use_auth_token,
1708 )
1709
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 if not downloaded_from_gcs:
594 self._download_and_prepare(
--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
596 )
597 # Sync info
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
659 split_dict = SplitDict(dataset_name=self.name)
660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
662
663 # Checksums verification
[/root/.cache/huggingface/modules/datasets_modules/datasets/big_patent/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c/big_patent.py](https://localhost:8080/#) in _split_generators(self, dl_manager)
123 split_types = ["train", "val", "test"]
124 extract_paths = dl_manager.extract(
--> 125 {k: os.path.join(dl_path, "bigPatentData", k + ".tar.gz") for k in split_types}
126 )
127 extract_paths = {k: os.path.join(extract_paths[k], k) for k in split_types}
[/usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py](https://localhost:8080/#) in extract(self, path_or_paths, num_proc)
282 download_config.extract_compressed_file = True
283 extracted_paths = map_nested(
--> 284 partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
285 )
286 path_or_paths = NestedDataStructure(path_or_paths)
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in <listcomp>(.0)
260 mapped = [
261 _single_map_nested((function, obj, types, None, True))
--> 262 for obj in utils.tqdm(iterable, disable=disable_tqdm)
263 ]
264 else:
[/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _single_map_nested(args)
194 # Singleton first to spare some computation
195 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 196 return function(data_struct)
197
198 # Reduce logging to keep things readable in multiprocessing with tqdm
[/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py](https://localhost:8080/#) in cached_path(url_or_filename, download_config, **download_kwargs)
314 elif is_local_path(url_or_filename):
315 # File, but it doesn't exist.
--> 316 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
317 else:
318 # Something unknown
FileNotFoundError: Local file /root/.cache/huggingface/datasets/downloads/extracted/ad068abb3e11f9f2f5440b62e37eb2b03ee515df9de1637c55cd1793b68668b2/bigPatentData/train.tar.gz doesn't exist
I have tried this on a number of machines, including Colab, so I think it is not environment-dependent.
How do I load the bigPatent dataset? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3859/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3858/comments | https://api.github.com/repos/huggingface/datasets/issues/3858/events | https://github.com/huggingface/datasets/pull/3858 | 1,162,526,688 | PR_kwDODunzps40GVSq | 3,858 | Update index.mdx margins | {
"login": "gary149",
"id": 3841370,
"node_id": "MDQ6VXNlcjM4NDEzNzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3841370?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gary149",
"html_url": "https://github.com/gary149",
"followers_url": "https://api.github.com/users/gary149/followers",
"following_url": "https://api.github.com/users/gary149/following{/other_user}",
"gists_url": "https://api.github.com/users/gary149/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gary149/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gary149/subscriptions",
"organizations_url": "https://api.github.com/users/gary149/orgs",
"repos_url": "https://api.github.com/users/gary149/repos",
"events_url": "https://api.github.com/users/gary149/events{/privacy}",
"received_events_url": "https://api.github.com/users/gary149/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3858). All of your documentation changes will be reflected on that endpoint."
] | 1,646,737,912,000 | 1,646,744,277,000 | 1,646,744,276,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3858/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3858",
"html_url": "https://github.com/huggingface/datasets/pull/3858",
"diff_url": "https://github.com/huggingface/datasets/pull/3858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3858.patch",
"merged_at": 1646744276000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3857/comments | https://api.github.com/repos/huggingface/datasets/issues/3857/events | https://github.com/huggingface/datasets/issues/3857 | 1,162,525,353 | I_kwDODunzps5FSrqp | 3,857 | Order of dataset changes due to glob.glob. | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`"
] | 1,646,737,830,000 | 1,647,256,102,000 | null | MEMBER | null | ## Describe the bug
After discussion with @lhoestq, I just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)`, to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the operating system.
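For illustration, a minimal sketch of the deterministic pattern (the glob pattern below is hypothetical):
```python
import glob

# glob.glob's result order depends on the filesystem, so it can differ across OSes.
files = glob.glob("data/*.json")

# Wrapping the call in sorted() makes the file order deterministic everywhere.
files = sorted(glob.glob("data/*.json"))
```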
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)`, including (if I'm not mistaken) even the streaming download manager:
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3857/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3856/comments | https://api.github.com/repos/huggingface/datasets/issues/3856/events | https://github.com/huggingface/datasets/pull/3856 | 1,162,522,034 | PR_kwDODunzps40GUSf | 3,856 | Fix push_to_hub with null images | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint."
] | 1,646,737,629,000 | 1,646,752,937,000 | 1,646,752,936,000 | MEMBER | null | This code currently raises an error because of the null image:
```python
import datasets
dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] }
features = datasets.Features({
'name': datasets.Value('string'),
'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable
```
I fixed this in this PR
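For illustration, a minimal sketch of the behavior expected after the fix (continuing from the snippet above; the repo id is a placeholder and requires authentication):
```python
# After the fix, pushing and reloading should round-trip the null image.
dataset.push_to_hub("username/dataset")  # placeholder repo id
reloaded = datasets.load_dataset("username/dataset", split="train")
assert reloaded[1]["image"] is None  # the second row's image stays None
```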
TODO:
- [x] add a test | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3856/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3856",
"html_url": "https://github.com/huggingface/datasets/pull/3856",
"diff_url": "https://github.com/huggingface/datasets/pull/3856.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3856.patch",
"merged_at": 1646752936000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3855/comments | https://api.github.com/repos/huggingface/datasets/issues/3855/events | https://github.com/huggingface/datasets/issues/3855 | 1,162,448,589 | I_kwDODunzps5FSY7N | 3,855 | Bad error message when loading private dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
},
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We raise the error β FileNotFoundError: canβt find the datasetβ mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !"
] | 1,646,733,317,000 | 1,646,759,009,000 | null | MEMBER | null | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from datasets import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g., execute the following code to see the different error messages between `transformers` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The corresponding `transformers` PR is here:
https://github.com/huggingface/transformers/pull/15261
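For context, here is a minimal sketch of loading a private dataset with explicit authentication (using the private repo from above; it requires `huggingface-cli login` or a stored token):
```python
from datasets import load_dataset

# Passing use_auth_token=True forwards the locally stored Hub token.
ds = load_dataset("NewT5/dummy_data", "dummy", use_auth_token=True)
```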
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3855/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3854/comments | https://api.github.com/repos/huggingface/datasets/issues/3854/events | https://github.com/huggingface/datasets/issues/3854 | 1,162,434,199 | I_kwDODunzps5FSVaX | 3,854 | load only England English dataset from common voice english dataset | {
"login": "amanjaiswal777",
"id": 36677001,
"node_id": "MDQ6VXNlcjM2Njc3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36677001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/amanjaiswal777",
"html_url": "https://github.com/amanjaiswal777",
"followers_url": "https://api.github.com/users/amanjaiswal777/followers",
"following_url": "https://api.github.com/users/amanjaiswal777/following{/other_user}",
"gists_url": "https://api.github.com/users/amanjaiswal777/gists{/gist_id}",
"starred_url": "https://api.github.com/users/amanjaiswal777/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/amanjaiswal777/subscriptions",
"organizations_url": "https://api.github.com/users/amanjaiswal777/orgs",
"repos_url": "https://api.github.com/users/amanjaiswal777/repos",
"events_url": "https://api.github.com/users/amanjaiswal777/events{/privacy}",
"received_events_url": "https://api.github.com/users/amanjaiswal777/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @amanjaiswal777,\r\n\r\nFirst note that the dataset you are trying to load is deprecated: it was the Common Voice dataset release as of Dec 2020.\r\n\r\nCurrently, Common Voice dataset releases are directly hosted on the Hub, under the Mozilla Foundation organization: https://huggingface.co/mozilla-foundation\r\n\r\nFor example, to get their latest Common Voice relase (8.0):\r\n- Go to the dataset page and request access permission (Mozilla Foundation requires this for people willing to use their datasets): https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0\r\n- Looking at the dataset card, you can check that data instances have, among other fields, the ones you are interested in: \"accent\", \"age\",... \r\n- Then you can load their \"en\" language dataset as usual, besides passing your authentication token (more info on auth token here: https://huggingface.co/docs/hub/security)\r\n ```python\r\n from datasets import load_dataset\r\n ds_en = load_dataset(\"mozilla-foundation/common_voice_8_0\", \"en\", use_auth_token=True)\r\n ```\r\n- Finally, you can filter only the data instances you are interested in (more info on `filter` here: https://huggingface.co/docs/datasets/process#select-and-filter):\r\n ```python\r\n ds_england_en = ds_en.filter(lambda item: item[\"accent\"] == \"England English\")\r\n ```\r\n\r\nFeel free to reopen this issue if you need further assistance."
] | 1,646,732,452,000 | 1,646,813,613,000 | 1,646,813,613,000 | NONE | null | training_data = load_dataset("common_voice", "en", split='train[:250]+validation[:250]')
testing_data = load_dataset("common_voice", "en", split="test[:200]")
I'm trying to load only the ~8% of the English Common Voice data with accent == "England English". Can somebody assist me with this? (A filtering sketch follows the accent list below.)
**Typical Voice Accent Proportions:**
- 24% United States English
- 8% England English
- 5% India and South Asia (India, Pakistan, Sri Lanka)
- 3% Australian English
- 3% Canadian English
- 2% Scottish English
- 1% Irish English
- 1% Southern African (South Africa, Zimbabwe, Namibia)
- 1% New Zealand English
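For illustration, a hedged sketch of one way to do this with `filter` (the field names follow the Common Voice schema, and the split slice is a placeholder):
```python
from datasets import load_dataset

# Load a slice of the English data, then keep only the England English accent.
ds = load_dataset("common_voice", "en", split="train[:1%]")
england = ds.filter(lambda ex: ex["accent"] == "England English")

# The same pattern works for age buckets, e.g. speakers in their twenties.
twenties = ds.filter(lambda ex: ex["age"] == "twenties")
```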
Can we replicate this for Age as well?
**Age proportions of the Common Voice:**
- 24% 19 - 29
- 14% 30 - 39
- 10% 40 - 49
- 6% < 19
- 4% 50 - 59
- 4% 60 - 69
- 1% 70 - 79 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3854/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3853/comments | https://api.github.com/repos/huggingface/datasets/issues/3853/events | https://github.com/huggingface/datasets/pull/3853 | 1,162,386,592 | PR_kwDODunzps40F3uN | 3,853 | add ontonotes_conll dataset | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3853). All of your documentation changes will be reflected on that endpoint.",
"The CI fail is unrelated to this dataset, merging :)"
] | 1,646,729,622,000 | 1,647,341,282,000 | 1,647,341,282,000 | CONTRIBUTOR | null | # Introduction of the dataset
OntoNotes v5.0 is the final version of the OntoNotes corpus: a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic, and discourse information.
This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task;
it includes v4 train/dev and v9 test data for English/Chinese/Arabic, plus the corrected v12 train/dev/test data (English only).
This dataset is widely used in named entity recognition, coreference resolution, and semantic role labeling.
In the dataset loading script, I modified and reused the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special CoNLL files, without adding an extra package dependency.
# Some workarounds I did
1. task ids
I added tasks that I couldn't find anywhere (`semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`) to the task category `structure-prediction`, because they are related to "syntax". I feel there may be a better name for this task category, since some of the tasks mentioned aren't really about structure, but I have no good idea.
2. `dl_manager.extract`
Since we get another zip after unzipping the downloaded zip data, I have to use `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing, so I added a conditional that manually extracts the data when testing dummy data (sketched below).
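For illustration, a hedged sketch of that conditional; the helper name and the no-op check are assumptions, not the actual script:
```python
import zipfile

def _extract_nested_zip(dl_manager, zip_path):
    # dl_manager.extract handles real downloads; on dummy data it is assumed
    # to be a no-op, so fall back to extracting the inner zip manually.
    extracted_path = dl_manager.extract(zip_path)
    if extracted_path == zip_path:  # nothing was extracted (dummy-data mode)
        extracted_path = zip_path + "_extracted"
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(extracted_path)
    return extracted_path
```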
# Help
I don't know how to fix the doc-building error. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3853/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3853/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3853",
"html_url": "https://github.com/huggingface/datasets/pull/3853",
"diff_url": "https://github.com/huggingface/datasets/pull/3853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3853.patch",
"merged_at": 1647341282000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3852/comments | https://api.github.com/repos/huggingface/datasets/issues/3852/events | https://github.com/huggingface/datasets/pull/3852 | 1,162,252,337 | PR_kwDODunzps40Fb26 | 3,852 | Redundant add dataset information and dead link. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3852). All of your documentation changes will be reflected on that endpoint."
] | 1,646,719,025,000 | 1,646,758,476,000 | 1,646,758,476,000 | CONTRIBUTOR | null | > Alternatively, you can follow the steps to [add a dataset](https://huggingface.co/docs/datasets/add_dataset.html) and [share a dataset](https://huggingface.co/docs/datasets/share_dataset.html) in the documentation.
The "add a dataset link" gives 404 Error, and the share_dataset link has changed. I feel this information is redundant/deprecated now since we have a more detailed guide for "How to add a dataset?". | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3852/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3852",
"html_url": "https://github.com/huggingface/datasets/pull/3852",
"diff_url": "https://github.com/huggingface/datasets/pull/3852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3852.patch",
"merged_at": 1646758476000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3851/comments | https://api.github.com/repos/huggingface/datasets/issues/3851/events | https://github.com/huggingface/datasets/issues/3851 | 1,162,137,998 | I_kwDODunzps5FRNGO | 3,851 | Load audio dataset error | {
"login": "lemoner20",
"id": 31890987,
"node_id": "MDQ6VXNlcjMxODkwOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/31890987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lemoner20",
"html_url": "https://github.com/lemoner20",
"followers_url": "https://api.github.com/users/lemoner20/followers",
"following_url": "https://api.github.com/users/lemoner20/following{/other_user}",
"gists_url": "https://api.github.com/users/lemoner20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lemoner20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lemoner20/subscriptions",
"organizations_url": "https://api.github.com/users/lemoner20/orgs",
"repos_url": "https://api.github.com/users/lemoner20/repos",
"events_url": "https://api.github.com/users/lemoner20/events{/privacy}",
"received_events_url": "https://api.github.com/users/lemoner20/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @lemoner20, thanks for reporting.\r\n\r\nI'm sorry but I cannot reproduce your problem:\r\n```python\r\nIn [1]: from datasets import load_dataset, load_metric, Audio\r\n ...: raw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\")\r\n ...: print(raw_datasets[0][\"audio\"])\r\nDownloading builder script: 30.2kB [00:00, 13.0MB/s] \r\nDownloading metadata: 38.0kB [00:00, 16.6MB/s] \r\nDownloading and preparing dataset superb/ks (download: 1.45 GiB, generated: 9.64 MiB, post-processed: Unknown size, total: 1.46 GiB) to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9...\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1.49G/1.49G [00:37<00:00, 39.3MB/s]\r\nDownloading data: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 71.3M/71.3M [00:01<00:00, 36.1MB/s]\r\nDownloading data files: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:41<00:00, 20.67s/it]\r\nExtracting data files: 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:28<00:00, 14.24s/it]\r\nDataset superb downloaded and prepared to .../.cache/huggingface/datasets/superb/ks/1.9.0/fc1f59e1fa54262dfb42de99c326a806ef7de1263ece177b59359a1a3354a9c9. Subsequent calls will reuse this data.\r\n{'path': '.../.cache/huggingface/datasets/downloads/extracted/8571921d3088b48f58f75b2e514815033e1ffbd06aa63fd4603691ac9f1c119f/_background_noise_/doing_the_dishes.wav', 'array': array([ 0. , 0. , 0. , ..., -0.00592041,\r\n -0.00405884, -0.00253296], dtype=float32), 'sampling_rate': 16000}\r\n``` \r\n\r\nWhich version of `datasets` are you using? Could you please fill in the environment info requested in the bug report template? You can run the command `datasets-cli env` and copy-and-paste its output below\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:",
"@albertvillanova Thanks for your reply. The environment info below\r\n\r\n## Environment info\r\n- `datasets` version: 1.18.3\r\n- Platform: Linux-4.19.91-007.ali4000.alios7.x86_64-x86_64-with-debian-buster-sid\r\n- Python version: 3.6.12\r\n- PyArrow version: 6.0.1",
"Thanks @lemoner20,\r\n\r\nI cannot reproduce your issue in datasets version 1.18.3 either.\r\n\r\nMaybe redownloading the data file may work if you had already cached this dataset previously. Could you please try passing \"force_redownload\"?\r\n```python\r\nraw_datasets = load_dataset(\"superb\", \"ks\", split=\"train\", download_mode=\"force_redownload\")",
"Thanks, @albertvillanova,\r\n\r\nI install the python package of **librosa=0.9.1** again, it works now!\r\n\r\n\r\n",
"Cool!"
] | 1,646,705,764,000 | 1,646,738,905,000 | 1,646,738,406,000 | NONE | null | ## Load audio dataset error
Hi, when I load an audio dataset following https://huggingface.co/docs/datasets/audio_process and https://github.com/huggingface/datasets/tree/master/datasets/superb,
```
from datasets import load_dataset, load_metric, Audio
raw_datasets = load_dataset("superb", "ks", split="train")
print(raw_datasets[0]["audio"])
```
the following error occurs:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-169-3f8253239fa0> in <module>
----> 1 raw_datasets[0]["audio"]
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in __getitem__(self, key)
1924 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
1925 return self._getitem(
-> 1926 key,
1927 )
1928
/usr/lib/python3.6/site-packages/datasets/arrow_dataset.py in _getitem(self, key, decoded, **kwargs)
1909 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
1910 formatted_output = format_table(
-> 1911 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
1912 )
1913 return formatted_output
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns)
530 python_formatter = PythonFormatter(features=None)
531 if format_columns is None:
--> 532 return formatter(pa_table, query_type=query_type)
533 elif query_type == "column":
534 if key in format_columns:
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type)
279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]:
280 if query_type == "row":
--> 281 return self.format_row(pa_table)
282 elif query_type == "column":
283 return self.format_column(pa_table)
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in format_row(self, pa_table)
310 row = self.python_arrow_extractor().extract_row(pa_table)
311 if self.decoded:
--> 312 row = self.python_features_decoder.decode_row(row)
313 return row
314
/usr/lib/python3.6/site-packages/datasets/formatting/formatting.py in decode_row(self, row)
219
220 def decode_row(self, row: dict) -> dict:
--> 221 return self.features.decode_example(row) if self.features else row
222
223 def decode_column(self, column: list, column_name: str) -> list:
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_example(self, example)
1320 else value
1321 for column_name, (feature, value) in utils.zip_dict(
-> 1322 {key: value for key, value in self.items() if key in example}, example
1323 )
1324 }
/usr/lib/python3.6/site-packages/datasets/features/features.py in <dictcomp>(.0)
1319 if self._column_requires_decoding[column_name]
1320 else value
-> 1321 for column_name, (feature, value) in utils.zip_dict(
1322 {key: value for key, value in self.items() if key in example}, example
1323 )
/usr/lib/python3.6/site-packages/datasets/features/features.py in decode_nested_example(schema, obj)
1053 # Object with special decoding:
1054 elif isinstance(schema, (Audio, Image)):
-> 1055 return schema.decode_example(obj) if obj is not None else None
1056 return obj
1057
/usr/lib/python3.6/site-packages/datasets/features/audio.py in decode_example(self, value)
100 array, sampling_rate = self._decode_non_mp3_file_like(file)
101 else:
--> 102 array, sampling_rate = self._decode_non_mp3_path_like(path)
103 return {"path": path, "array": array, "sampling_rate": sampling_rate}
104
/usr/lib/python3.6/site-packages/datasets/features/audio.py in _decode_non_mp3_path_like(self, path)
143
144 with xopen(path, "rb") as f:
--> 145 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
146 return array, sampling_rate
147
/usr/lib/python3.6/site-packages/librosa/core/audio.py in load(path, sr, mono, offset, duration, dtype, res_type)
110
111 y = []
--> 112 with audioread.audio_open(os.path.realpath(path)) as input_file:
113 sr_native = input_file.samplerate
114 n_channels = input_file.channels
/usr/lib/python3.6/posixpath.py in realpath(filename)
392 """Return the canonical path of the specified filename, eliminating any
393 symbolic links encountered in the path."""
--> 394 filename = os.fspath(filename)
395 path, ok = _joinrealpath(filename[:0], filename, {})
396 return abspath(path)
TypeError: expected str, bytes or os.PathLike object, not _io.BufferedReader
```
## Expected results
```
>>> raw_datasets[0]["audio"]
{'array': array([-0.0005188 , -0.00109863, 0.00030518, ..., 0.01730347,
0.01623535, 0.01724243]),
'path': '/root/.cache/huggingface/datasets/downloads/extracted/bb3a06b491a64aff422f307cd8116820b4f61d6f32fcadcfc554617e84383cb7/bed/026290a7_nohash_0.wav',
'sampling_rate': 16000}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3851/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3850/comments | https://api.github.com/repos/huggingface/datasets/issues/3850/events | https://github.com/huggingface/datasets/pull/3850 | 1,162,126,030 | PR_kwDODunzps40FBx9 | 3,850 | [feat] Add tqdm arguments | {
"login": "penguinwang96825",
"id": 28087825,
"node_id": "MDQ6VXNlcjI4MDg3ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penguinwang96825",
"html_url": "https://github.com/penguinwang96825",
"followers_url": "https://api.github.com/users/penguinwang96825/followers",
"following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}",
"gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions",
"organizations_url": "https://api.github.com/users/penguinwang96825/orgs",
"repos_url": "https://api.github.com/users/penguinwang96825/repos",
"events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}",
"received_events_url": "https://api.github.com/users/penguinwang96825/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,646,704,405,000 | 1,646,733,271,000 | null | NONE | null | This PR allows tqdm keyword arguments to be passed through to map() and similar methods, to make progress reporting more flexible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3850/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3850",
"html_url": "https://github.com/huggingface/datasets/pull/3850",
"diff_url": "https://github.com/huggingface/datasets/pull/3850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3850.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3849/comments | https://api.github.com/repos/huggingface/datasets/issues/3849/events | https://github.com/huggingface/datasets/pull/3849 | 1,162,091,075 | PR_kwDODunzps40E6sW | 3,849 | Add "Adversarial GLUE" dataset to datasets library | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq can you review when you have some time?",
"Hi @lhoestq -- thanks so much for your review! I just added the stuff you requested to the README.md, including an example from the dataset, the table of contents, and lots of section headers with \"More Information Needed\" below. Let me know if there's anything else I need to do!",
"Feel free to also merge `master` into your branch to get the latest updates for the tests ;)",
"thanks @lhoestq - just made all the updates you requested!"
] | 1,646,700,431,000 | 1,648,466,234,000 | 1,648,465,924,000 | CONTRIBUTOR | null | Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/
```python
>>> import datasets
>>> datasets.load_dataset('adv_glue')
Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset
builder_instance = load_dataset_builder(
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__
super().__init__(*args, **kwargs)
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__
self.config, self.config_id = self._create_builder_config(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config
raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte']
Example of usage:
`load_dataset('adv_glue', 'adv_sst2')`
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933)
100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 604.11it/s]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3849/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3849",
"html_url": "https://github.com/huggingface/datasets/pull/3849",
"diff_url": "https://github.com/huggingface/datasets/pull/3849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3849.patch",
"merged_at": 1648465924000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3848/comments | https://api.github.com/repos/huggingface/datasets/issues/3848/events | https://github.com/huggingface/datasets/issues/3848 | 1,162,076,902 | I_kwDODunzps5FQ-Lm | 3,848 | NonMatchingChecksumError when checksum is None | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @jxmorris12, thanks for reporting.\r\n\r\nThe objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.\r\n\r\nThe question is: how did you generate the expected checksum? Normally, it should not be None. To properly generate it (it is contained in the `dataset_infos.json` file), you should have runned: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nOn the other hand, you should take into account that the generation of this file is NOT mandatory for personal/community datasets (we only require it for \"canonical\" datasets, i.e., datasets added to our library GitHub repository: https://github.com/huggingface/datasets/tree/master/datasets). Therefore, other option would be just to delete the `dataset_infos.json` file. If that file is not present, the function `verify_checksums` is not executed.\r\n\r\nFinally, you can circumvent the `verify_checksums` function by passing `ignore_verifications=True` to `load_dataset`:\r\n```python\r\nload_dataset(..., ignore_verifications=True)\r\n``` ",
"Thanks @albertvillanova!\r\n\r\nThat's fine. I did run that command when I was adding a new dataset. Maybe because the command crashed in the middle, the checksum wasn't stored properly. I don't know where the bug is happening. But either (i) `verify_checksums` should properly handle this edge case, where the passed checksum is None or (ii) the `datasets-cli test` shouldn't generate a corrupted dataset_infos.json file.\r\n\r\nJust a more high-level thing, I was trying to follow the instructions for adding a dataset in the CONTRIBUTING.md, so if running that command isn't even necessary, that should probably be mentioned in the document, right? But that's somewhat of a moot point, since something isn't working quite right internally if I was able to get into this corrupted state in the first place, just by following those instructions.",
"Hi @jxmorris12,\r\n\r\nDefinitely, your `dataset_infos.json` was corrupted (and wrongly contains expected None checksum). \r\n\r\nWhile we further investigate how this can happen and fix it, feel free to delete your `dataset_infos.json` file and recreate it with:\r\n```shell\r\ndatasets-cli test <your-dataset-folder> --save_infos --all_configs\r\n```\r\n\r\nAlso note that `verify_checksum` is working as expected: if it receives a None and and a non-None checksums as input pair, it must raise an exception: they are not equal. That is not a bug.",
"At a higher level, also note that we are preparing the release of `datasets` version 2.0, and some docs are being updated...\r\n\r\nIn order to add a dataset, I think the most updated instructions are in our official documentation pages: https://huggingface.co/docs/datasets/share",
"Thanks for the info. Maybe you can update the contributing.md if it's not up-to-date.",
"Hi @jxmorris12, we have discovered the bug why `None` checksums wrongly appeared when generating the `dataset_infos.json` file:\r\n- #3892\r\n\r\nThe fix will be accessible once this PR merged. And we are planning to do our 2.0 release today.\r\n\r\nWe are also working on updating all our docs for our release today.",
"Thanks @albertvillanova - congrats on the release!"
] | 1,646,699,052,000 | 1,647,355,046,000 | 1,647,347,303,000 | CONTRIBUTOR | null | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c6425dd74e5b55f2f325c9', 'num_bytes': 40662}}
verification_name = 'dataset source files'
def verify_checksums(expected_checksums: Optional[dict], recorded_checksums: dict, verification_name=None):
if expected_checksums is None:
logger.info("Unable to verify checksums.")
return
if len(set(expected_checksums) - set(recorded_checksums)) > 0:
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
if len(set(recorded_checksums) - set(expected_checksums)) > 0:
raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
bad_urls = [url for url in expected_checksums if expected_checksums[url] != recorded_checksums[url]]
for_verification_name = " for " + verification_name if verification_name is not None else ""
if len(bad_urls) > 0:
error_msg = "Checksums didn't match" + for_verification_name + ":\n"
> raise NonMatchingChecksumError(error_msg + str(bad_urls))
E datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
E ['https://adversarialglue.github.io/dataset/dev.zip']
src/datasets/utils/info_utils.py:40: NonMatchingChecksumError
```
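A minimal sketch of the `ignore_verifications` workaround discussed in the comments (the dataset path below is a placeholder for the local script that triggers this):
```python
from datasets import load_dataset

# Skips the checksum comparison entirely, so the corrupted None expected
# checksum in dataset_infos.json is never checked against the real one.
dataset = load_dataset("path/to/your/dataset", ignore_verifications=True)
```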
## Expected results
The dataset downloads correctly, and there is no error.
## Actual results
Datasets library is looking for a checksum of None, and it gets a non-None checksum, and throws an error. This is clearly a bug. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3848/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3847/comments | https://api.github.com/repos/huggingface/datasets/issues/3847/events | https://github.com/huggingface/datasets/issues/3847 | 1,161,856,417 | I_kwDODunzps5FQIWh | 3,847 | Datasets' cache not re-used | {
"login": "gejinchen",
"id": 15106980,
"node_id": "MDQ6VXNlcjE1MTA2OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/15106980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gejinchen",
"html_url": "https://github.com/gejinchen",
"followers_url": "https://api.github.com/users/gejinchen/followers",
"following_url": "https://api.github.com/users/gejinchen/following{/other_user}",
"gists_url": "https://api.github.com/users/gejinchen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gejinchen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gejinchen/subscriptions",
"organizations_url": "https://api.github.com/users/gejinchen/orgs",
"repos_url": "https://api.github.com/users/gejinchen/repos",
"events_url": "https://api.github.com/users/gejinchen/events{/privacy}",
"received_events_url": "https://api.github.com/users/gejinchen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"<s>I think this is because the tokenizer is stateful and because the order in which the splits are processed is not deterministic. Because of that, the hash of the tokenizer may change for certain splits, which causes issues with caching.\r\n\r\nTo fix this we can try making the order of the splits deterministic for map.</s>",
"Actually this is not because of the order of the splits, but most likely because the tokenizer used to process the second split is in a state that has been modified by the first split.\r\n\r\nTherefore after reloading the first split from the cache, then the second split can't be reloaded since the tokenizer hasn't seen the first split (and therefore is considered a different tokenizer).\r\n\r\nThis is a bit trickier to fix, we can explore fixing this next week maybe",
"Sorry didn't have the bandwidth to take care of this yet - will re-assign when I'm diving into it again !"
] | 1,646,682,915,000 | 1,650,388,310,000 | null | NONE | null | ## Describe the bug
For most tokenizers I have tested (e.g. the RoBERTa tokenizer), the data preprocessing cache is not fully reused in the first few runs, although its `.arrow` cache files are in the cache directory.
## Steps to reproduce the bug
Here is a reproducer. The GPT2 tokenizer works perfectly with caching, but not the RoBERTa tokenizer in this example.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
text_column_name = "text"
column_names = raw_datasets["train"].column_names
def tokenize_function(examples):
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
remove_columns=column_names,
load_from_cache_file=True,
desc="Running tokenizer on every text in dataset",
)
```
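A diagnostic sketch that makes the suspected cause visible; it relies on the internal `datasets.fingerprint.Hasher` helper, which is not public API and may change between versions:
```python
from datasets.fingerprint import Hasher
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
print(Hasher.hash(tokenizer))   # hash that fingerprints the map() call
tokenizer("some warm-up text")  # calling the tokenizer can mutate its internal state
print(Hasher.hash(tokenizer))   # if this differs, the next run cannot reuse the cache
```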
## Expected results
No tokenization would be required after the 1st run. Everything should be loaded from the cache.
## Actual results
Tokenization for some subsets is repeated at the 2nd and 3rd runs. Starting from the 4th run, everything is loaded from the cache.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 18.04.6 LTS
- Python version: 3.6.9
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3847/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3846/comments | https://api.github.com/repos/huggingface/datasets/issues/3846/events | https://github.com/huggingface/datasets/pull/3846 | 1,161,810,226 | PR_kwDODunzps40D-uh | 3,846 | Update faiss device docstring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3846). All of your documentation changes will be reflected on that endpoint."
] | 1,646,680,019,000 | 1,646,680,883,000 | 1,646,680,882,000 | MEMBER | null | Following https://github.com/huggingface/datasets/pull/3721 I updated the docstring of the `device` argument of the FAISS related methods of `Dataset` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3846/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3846",
"html_url": "https://github.com/huggingface/datasets/pull/3846",
"diff_url": "https://github.com/huggingface/datasets/pull/3846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3846.patch",
"merged_at": 1646680882000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3845/comments | https://api.github.com/repos/huggingface/datasets/issues/3845/events | https://github.com/huggingface/datasets/pull/3845 | 1,161,739,483 | PR_kwDODunzps40DvqX | 3,845 | add RMSE and MAE metrics. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3845). All of your documentation changes will be reflected on that endpoint.",
"@mariosasko I've reopened it here. Please suggest any changes if required. Thank you.",
"Thanks for suggestions. :) I have added update the KWARGS_DESCRIPTION for the missing params and also changed RMSE to MSE.\r\nWhile testing, I noticed that when the input is a list of lists, we get an error :\r\n`TypeError: float() argument must be a string or a number, not 'list'`\r\nCould you suggest the datasets.Value() attribute to support both list of floats and list of lists containing floats ?\r\n",
"Just add a new config to cover that case. You can do this by replacing the current `features` dict with:\r\n```python\r\nfeatures=datasets.Features(\r\n {\r\n \"predictions\": datasets.Sequence(datasets.Value(\"float\")),\r\n \"references\": datasets.Sequence(datasets.Value(\"float\")),\r\n }\r\n if self.config_name == \"multioutput\"\r\n else {\r\n \"predictions\": datasets.Value(\"float\"),\r\n \"references\": datasets.Value(\"float\"),\r\n }\r\n),\r\n```\r\nFeel free to suggest a better name for the config than `multioutput`",
"Also, could you please move the changes to a new branch and open a PR from there (for the 3rd time π) because the diff shows changes from unrelated PRs (maybe due to rebasing?).",
"Thanks for the input, I have added new config to support multi-dimensional lists and updated the examples as well.\r\n\r\nSure. Will do that and open a new PR for these changes."
] | 1,646,675,604,000 | 1,646,844,603,000 | 1,646,844,603,000 | CONTRIBUTOR | null | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
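For reference, a short sketch of the underlying scikit-learn calls being wrapped (the sample values are illustrative only):
```python
from sklearn.metrics import mean_absolute_error, mean_squared_error

references = [3.0, -0.5, 2.0, 7.0]
predictions = [2.5, 0.0, 2.0, 8.0]

mae = mean_absolute_error(references, predictions)                 # 0.5
rmse = mean_squared_error(references, predictions, squared=False)  # sqrt of the MSE
```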
Please suggest any changes if required. Thank you. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3845/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3845",
"html_url": "https://github.com/huggingface/datasets/pull/3845",
"diff_url": "https://github.com/huggingface/datasets/pull/3845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3845.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3844/comments | https://api.github.com/repos/huggingface/datasets/issues/3844/events | https://github.com/huggingface/datasets/pull/3844 | 1,161,686,754 | PR_kwDODunzps40DkYL | 3,844 | Add rmse and mae metrics. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3844). All of your documentation changes will be reflected on that endpoint.",
"@dnaveenr This PR is in pretty good shape, so feel free to reopen it."
] | 1,646,672,798,000 | 1,646,673,872,000 | 1,646,673,306,000 | CONTRIBUTOR | null | This PR adds RMSE - Root Mean Squared Error and MAE - Mean Absolute Error to the metrics API.
Both implementations are based on scikit-learn.
Feature request here: Add support for continuous metrics (RMSE, MAE) [#3608](https://github.com/huggingface/datasets/issues/3608)
Any suggestions or required changes would be helpful.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3844/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3844",
"html_url": "https://github.com/huggingface/datasets/pull/3844",
"diff_url": "https://github.com/huggingface/datasets/pull/3844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3844.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3843/comments | https://api.github.com/repos/huggingface/datasets/issues/3843/events | https://github.com/huggingface/datasets/pull/3843 | 1,161,397,812 | PR_kwDODunzps40Cm0D | 3,843 | Fix Google Drive URL to avoid Virus scan warning in streaming mode | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3843). All of your documentation changes will be reflected on that endpoint.",
"Cool ! Looks like it breaks `test_streaming_gg_drive_gzipped` for some reason..."
] | 1,646,658,559,000 | 1,647,347,425,000 | 1,647,347,423,000 | CONTRIBUTOR | null | The streaming version of https://github.com/huggingface/datasets/pull/3787.
Fix #3835
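For context, a sketch of the usual workaround for Google Drive's virus-scan interstitial (the exact change in this PR may differ, and the file id is a placeholder):
```python
import requests

# Passing a "confirm" token asks Drive to serve the file directly instead of
# the HTML warning page it returns for large, unscanned files.
url = "https://drive.google.com/uc?export=download&id=FILE_ID"
response = requests.get(url, params={"confirm": "t"}, stream=True)
```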
CC: @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3843/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3843",
"html_url": "https://github.com/huggingface/datasets/pull/3843",
"diff_url": "https://github.com/huggingface/datasets/pull/3843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3843.patch",
"merged_at": 1647347423000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3842/comments | https://api.github.com/repos/huggingface/datasets/issues/3842/events | https://github.com/huggingface/datasets/pull/3842 | 1,161,336,483 | PR_kwDODunzps40CZvE | 3,842 | Align IterableDataset.shuffle with Dataset.shuffle | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3842). All of your documentation changes will be reflected on that endpoint.",
"We should also add `generator` as a param to `shuffle` to fully align the APIs, no?",
"I added the `generator` argument.\r\n\r\nI had to make a few other adjustments to make it work. In particular when you call `set_epoch()` on a streaming dataset, it updates the underlying random generator by using a new effective seed. The effective seed is generated using the previous generator and the epoch number."
] | 1,646,655,046,000 | 1,646,679,823,000 | 1,646,679,822,000 | MEMBER | null | From #3444, Dataset.shuffle can have the same API as IterableDataset.shuffle (i.e. in streaming mode).
You can currently pass an optional seed to both, but IterableDataset.shuffle always requires a buffer_size, used for approximate shuffling. I propose using a reasonable default value (maybe 1000) instead.
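A usage sketch of the aligned APIs after this change (the dataset name is just an example):
```python
from datasets import load_dataset

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
shuffled = ds.shuffle(seed=42)

streamed = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
shuffled_stream = streamed.shuffle(seed=42)  # buffer_size now defaults to 1000
```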
In this PR, I set the default `buffer_size` value to 1,000, and I reorder the `IterableDataset.shuffle` arguments to match `Dataset.shuffle`, i.e. making `seed` the first argument. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3842/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3842",
"html_url": "https://github.com/huggingface/datasets/pull/3842",
"diff_url": "https://github.com/huggingface/datasets/pull/3842.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3842.patch",
"merged_at": 1646679822000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3841/comments | https://api.github.com/repos/huggingface/datasets/issues/3841/events | https://github.com/huggingface/datasets/issues/3841 | 1,161,203,842 | I_kwDODunzps5FNpCC | 3,841 | Pyright reportPrivateImportUsage when `from datasets import load_dataset` | {
"login": "lkhphuc",
"id": 12573521,
"node_id": "MDQ6VXNlcjEyNTczNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkhphuc",
"html_url": "https://github.com/lkhphuc",
"followers_url": "https://api.github.com/users/lkhphuc/followers",
"following_url": "https://api.github.com/users/lkhphuc/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions",
"organizations_url": "https://api.github.com/users/lkhphuc/orgs",
"repos_url": "https://api.github.com/users/lkhphuc/repos",
"events_url": "https://api.github.com/users/lkhphuc/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkhphuc/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,646,648,644,000 | 1,646,648,694,000 | null | CONTRIBUTOR | null | ## Describe the bug
Pyright complains about a module member not being exported.
## Steps to reproduce the bug
Use an editor/IDE with the Pyright language server in its default configuration:
```python
from datasets import load_dataset
```
## Expected results
No complaint from Pyright
## Actual results
Pyright complains as below:
```
`load_dataset` is not exported from module "datasets"
Import from "datasets.load" instead [reportPrivateImportUsage]
```
Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation.
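A workaround sketch until the re-export is fixed upstream; the inline suppression assumes a Pyright version that supports rule-specific ignore comments:
```python
from datasets import load_dataset  # pyright: ignore[reportPrivateImportUsage]
```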
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3841/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3840/comments | https://api.github.com/repos/huggingface/datasets/issues/3840/events | https://github.com/huggingface/datasets/pull/3840 | 1,161,183,773 | PR_kwDODunzps40B8eu | 3,840 | Pin responses to fix CI for Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3840). All of your documentation changes will be reflected on that endpoint."
] | 1,646,647,613,000 | 1,646,647,956,000 | 1,646,647,644,000 | MEMBER | null | Temporarily fix CI for Windows by pinning `responses`.
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
Fix: #3839 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3840/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3840",
"html_url": "https://github.com/huggingface/datasets/pull/3840",
"diff_url": "https://github.com/huggingface/datasets/pull/3840.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3840.patch",
"merged_at": 1646647644000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3839/comments | https://api.github.com/repos/huggingface/datasets/issues/3839/events | https://github.com/huggingface/datasets/issues/3839 | 1,161,183,482 | I_kwDODunzps5FNkD6 | 3,839 | CI is broken for Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,646,647,602,000 | 1,653,056,023,000 | 1,646,647,644,000 | MEMBER | null | ## Describe the bug
See: https://app.circleci.com/pipelines/github/huggingface/datasets/10292/workflows/83de4a55-bff7-43ec-96f7-0c335af5c050/jobs/63355
```
___________________ test_datasetdict_from_text_split[test] ____________________
[gw0] win32 -- Python 3.7.11 C:\tools\miniconda3\envs\py37\python.exe
split = 'test'
text_path = 'C:\\Users\\circleci\\AppData\\Local\\Temp\\pytest-of-circleci\\pytest-0\\popen-gw0\\data6\\dataset.txt'
tmp_path = WindowsPath('C:/Users/circleci/AppData/Local/Temp/pytest-of-circleci/pytest-0/popen-gw0/test_datasetdict_from_text_spl7')
@pytest.mark.parametrize("split", [None, NamedSplit("train"), "train", "test"])
def test_datasetdict_from_text_split(split, text_path, tmp_path):
if split:
path = {split: text_path}
else:
split = "train"
path = {"train": text_path, "test": text_path}
cache_dir = tmp_path / "cache"
expected_features = {"text": "string"}
> dataset = TextDatasetReader(path, cache_dir=cache_dir).read()
tests\io\test_text.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\io\text.py:43: in read
use_auth_token=use_auth_token,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:588: in download_and_prepare
self._download_prepared_from_hf_gcs(dl_manager.download_config)
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\builder.py:630: in _download_prepared_from_hf_gcs
reader.download_from_hf_gcs(download_config, relative_data_dir)
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\arrow_reader.py:260: in download_from_hf_gcs
downloaded_dataset_info = cached_path(remote_dataset_info.replace(os.sep, "/"))
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:301: in cached_path
download_desc=download_config.download_desc,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:560: in get_from_cache
headers=headers,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:476: in http_head
max_retries=max_retries,
C:\tools\miniconda3\envs\py37\lib\site-packages\datasets\utils\file_utils.py:397: in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\api.py:61: in request
return session.request(method=method, url=url, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:529: in request
resp = self.send(prep, **send_kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\requests\sessions.py:645: in send
r = adapter.send(request, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:840: in unbound_on_send
return self._on_request(adapter, request, *a, **kwargs)
C:\tools\miniconda3\envs\py37\lib\site-packages\responses\__init__.py:780: in _on_request
match, match_failed_reasons = self._find_match(request)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <responses.RequestsMock object at 0x000002048AD70588>
request = <PreparedRequest [HEAD]>
def _find_first_match(self, request):
match_failed_reasons = []
> for i, match in enumerate(self._matches):
E AttributeError: 'RequestsMock' object has no attribute '_matches'
C:\tools\miniconda3\envs\py37\lib\site-packages\moto\core\models.py:289: AttributeError
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3839/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3838/comments | https://api.github.com/repos/huggingface/datasets/issues/3838/events | https://github.com/huggingface/datasets/issues/3838 | 1,161,137,406 | I_kwDODunzps5FNYz- | 3,838 | Add a data type for labeled images (image segmentation) | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,646,645,895,000 | 1,649,597,699,000 | null | CONTRIBUTOR | null | It might be a mix of Image and ClassLabel, and the color palette might be generated automatically.
---
### Example
Every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (e.g. https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class to a color.
So we might want to render the image as a colored image instead of a black and white one.
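A rendering sketch under the assumption that the palette is a class-indexed array of RGB triples (the two-entry palette and file name here are placeholders):
```python
import numpy as np
from PIL import Image

palette = np.array([[120, 120, 120], [180, 120, 120]], dtype=np.uint8)  # placeholder palette
mask = np.array(Image.open("annotation.png"))  # H x W array of class ids
colored = Image.fromarray(palette[mask])       # look up each class id's color
```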
<img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png">
See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for a reference implementation in TensorFlow | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3838/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3837/comments | https://api.github.com/repos/huggingface/datasets/issues/3837/events | https://github.com/huggingface/datasets/pull/3837 | 1,161,109,031 | PR_kwDODunzps40BwE1 | 3,837 | Release: 1.18.4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,644,409,000 | 1,646,651,255,000 | 1,646,651,222,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3837/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3837",
"html_url": "https://github.com/huggingface/datasets/pull/3837",
"diff_url": "https://github.com/huggingface/datasets/pull/3837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3837.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3836/comments | https://api.github.com/repos/huggingface/datasets/issues/3836/events | https://github.com/huggingface/datasets/pull/3836 | 1,161,072,531 | PR_kwDODunzps40Bobr | 3,836 | Logo float left | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3836). All of your documentation changes will be reflected on that endpoint.",
"Weird, the logo doesn't seem to be floating on my side (using Chrome) at https://huggingface.co/docs/datasets/master/en/index",
"https://huggingface.co/docs/datasets/index\r\n\r\nThe needed css change from moon-landing just got deployed"
] | 1,646,642,314,000 | 1,646,684,471,000 | 1,646,644,451,000 | CONTRIBUTOR | null | <img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3836/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3836/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3836",
"html_url": "https://github.com/huggingface/datasets/pull/3836",
"diff_url": "https://github.com/huggingface/datasets/pull/3836.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3836.patch",
"merged_at": 1646644451000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3835/comments | https://api.github.com/repos/huggingface/datasets/issues/3835/events | https://github.com/huggingface/datasets/issues/3835 | 1,161,029,205 | I_kwDODunzps5FM-ZV | 3,835 | The link given on the gigaword does not work | {
"login": "martin6336",
"id": 26357784,
"node_id": "MDQ6VXNlcjI2MzU3Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/26357784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martin6336",
"html_url": "https://github.com/martin6336",
"followers_url": "https://api.github.com/users/martin6336/followers",
"following_url": "https://api.github.com/users/martin6336/following{/other_user}",
"gists_url": "https://api.github.com/users/martin6336/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martin6336/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martin6336/subscriptions",
"organizations_url": "https://api.github.com/users/martin6336/orgs",
"repos_url": "https://api.github.com/users/martin6336/repos",
"events_url": "https://api.github.com/users/martin6336/events{/privacy}",
"received_events_url": "https://api.github.com/users/martin6336/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,646,639,802,000 | 1,647,347,423,000 | 1,647,347,423,000 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3835/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3834/comments | https://api.github.com/repos/huggingface/datasets/issues/3834/events | https://github.com/huggingface/datasets/pull/3834 | 1,160,657,937 | PR_kwDODunzps40ATVw | 3,834 | Fix dead dataset scripts creation link. | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,585,148,000 | 1,646,655,127,000 | 1,646,655,127,000 | CONTRIBUTOR | null | Previous link gives 404 error. Updated with a new dataset scripts creation link. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3834/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3834",
"html_url": "https://github.com/huggingface/datasets/pull/3834",
"diff_url": "https://github.com/huggingface/datasets/pull/3834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3834.patch",
"merged_at": 1646655127000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3833/comments | https://api.github.com/repos/huggingface/datasets/issues/3833/events | https://github.com/huggingface/datasets/pull/3833 | 1,160,543,713 | PR_kwDODunzps4z_99t | 3,833 | Small typos in How-to-train tutorial. | {
"login": "lkhphuc",
"id": 12573521,
"node_id": "MDQ6VXNlcjEyNTczNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/12573521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkhphuc",
"html_url": "https://github.com/lkhphuc",
"followers_url": "https://api.github.com/users/lkhphuc/followers",
"following_url": "https://api.github.com/users/lkhphuc/following{/other_user}",
"gists_url": "https://api.github.com/users/lkhphuc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lkhphuc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lkhphuc/subscriptions",
"organizations_url": "https://api.github.com/users/lkhphuc/orgs",
"repos_url": "https://api.github.com/users/lkhphuc/repos",
"events_url": "https://api.github.com/users/lkhphuc/events{/privacy}",
"received_events_url": "https://api.github.com/users/lkhphuc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,552,989,000 | 1,646,656,533,000 | 1,646,655,197,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3833/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3833",
"html_url": "https://github.com/huggingface/datasets/pull/3833",
"diff_url": "https://github.com/huggingface/datasets/pull/3833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3833.patch",
"merged_at": 1646655197000
} | true |