url (string, len 58-61) | repository_url (string, 1 class) | labels_url (string, len 72-75) | comments_url (string, len 67-70) | events_url (string, len 65-68) | html_url (string, len 46-51) | id (int64, 599M-1.26B) | node_id (string, len 18-32) | number (int64, 1-4.44k) | title (string, len 1-276) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B-1,654B) | updated_at (int64, 1,587B-1,654B) | closed_at (int64, 1,587B-1,654B, nullable) | author_association (string, 3 classes) | active_lock_reason (null) | body (string, len 0-228k, nullable) | reactions (dict) | timeline_url (string, len 67-70) | performed_via_github_app (null) | state_reason (string, 1 class) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/987/comments | https://api.github.com/repos/huggingface/datasets/issues/987/events | https://github.com/huggingface/datasets/pull/987 | 755,059,469 | MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4 | 987 | Add OPUS DOGC dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"merging since the CI is fixed on master"
] | 1,606,897,832,000 | 1,607,088,461,000 | 1,607,088,461,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/987/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/987",
"html_url": "https://github.com/huggingface/datasets/pull/987",
"diff_url": "https://github.com/huggingface/datasets/pull/987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/987.patch",
"merged_at": 1607088461000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/986/comments | https://api.github.com/repos/huggingface/datasets/issues/986/events | https://github.com/huggingface/datasets/pull/986 | 755,047,470 | MDExOlB1bGxSZXF1ZXN0NTMwODM0MzYx | 986 | Add SciTLDR Dataset | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.com/users/Bharat123rox/followers",
"following_url": "https://api.github.com/users/Bharat123rox/following{/other_user}",
"gists_url": "https://api.github.com/users/Bharat123rox/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bharat123rox/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bharat123rox/subscriptions",
"organizations_url": "https://api.github.com/users/Bharat123rox/orgs",
"repos_url": "https://api.github.com/users/Bharat123rox/repos",
"events_url": "https://api.github.com/users/Bharat123rox/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bharat123rox/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\n```",
"you can just rebase from master to fix the CI :) ",
"can you just rebase from master before we merge ?",
"Sorry, the rebase from master went horribly wrong, I guess I'll just open another PR\r\n\r\nClosing this one due to a mistake in rebasing :(",
"Continued in #1014 "
] | 1,606,896,676,000 | 1,606,934,242,000 | 1,606,932,179,000 | CONTRIBUTOR | null | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/986/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/986",
"html_url": "https://github.com/huggingface/datasets/pull/986",
"diff_url": "https://github.com/huggingface/datasets/pull/986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/986.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/985/comments | https://api.github.com/repos/huggingface/datasets/issues/985/events | https://github.com/huggingface/datasets/pull/985 | 755,020,564 | MDExOlB1bGxSZXF1ZXN0NTMwODEyNTM1 | 985 | Add GAP dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This dataset already exists apparently, sorry :/ \r\nsee\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/gap/gap.py\r\n\r\nFeel free to re-use the dataset card you did for `/datasets/gap`\r\n",
"oh heck, my bad 🤦♂️ sorry"
] | 1,606,893,911,000 | 1,606,925,792,000 | 1,606,925,792,000 | MEMBER | null | GAP dataset
Gender bias coreference resolution | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/985/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/985",
"html_url": "https://github.com/huggingface/datasets/pull/985",
"diff_url": "https://github.com/huggingface/datasets/pull/985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/985.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/984/comments | https://api.github.com/repos/huggingface/datasets/issues/984/events | https://github.com/huggingface/datasets/pull/984 | 755,009,916 | MDExOlB1bGxSZXF1ZXN0NTMwODAzNzgw | 984 | committing Whoa file | {
"login": "StulosDunamos",
"id": 75356780,
"node_id": "MDQ6VXNlcjc1MzU2Nzgw",
"avatar_url": "https://avatars.githubusercontent.com/u/75356780?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StulosDunamos",
"html_url": "https://github.com/StulosDunamos",
"followers_url": "https://api.github.com/users/StulosDunamos/followers",
"following_url": "https://api.github.com/users/StulosDunamos/following{/other_user}",
"gists_url": "https://api.github.com/users/StulosDunamos/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StulosDunamos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StulosDunamos/subscriptions",
"organizations_url": "https://api.github.com/users/StulosDunamos/orgs",
"repos_url": "https://api.github.com/users/StulosDunamos/repos",
"events_url": "https://api.github.com/users/StulosDunamos/events{/privacy}",
"received_events_url": "https://api.github.com/users/StulosDunamos/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"can't find the Whoa file since there' nothing left",
"The classic `rm -rf` command - nice one"
] | 1,606,892,866,000 | 1,606,925,729,000 | 1,606,923,658,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/984/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/984",
"html_url": "https://github.com/huggingface/datasets/pull/984",
"diff_url": "https://github.com/huggingface/datasets/pull/984.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/984.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/983/comments | https://api.github.com/repos/huggingface/datasets/issues/983/events | https://github.com/huggingface/datasets/pull/983 | 754,966,620 | MDExOlB1bGxSZXF1ZXN0NTMwNzY4MTMw | 983 | add mc taco | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,888,495,000 | 1,606,923,467,000 | 1,606,923,466,000 | MEMBER | null | MC-TACO
Temporal commonsense knowledge | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/983/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/983",
"html_url": "https://github.com/huggingface/datasets/pull/983",
"diff_url": "https://github.com/huggingface/datasets/pull/983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/983.patch",
"merged_at": 1606923466000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/982/comments | https://api.github.com/repos/huggingface/datasets/issues/982/events | https://github.com/huggingface/datasets/pull/982 | 754,946,337 | MDExOlB1bGxSZXF1ZXN0NTMwNzUxMzYx | 982 | add prachathai67k take2 | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,885,921,000 | 1,606,904,291,000 | 1,606,904,291,000 | CONTRIBUTOR | null | I decided it will be faster to create a new pull request instead of fixing the rebase issues.
continuing from https://github.com/huggingface/datasets/pull/954
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/982/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/982",
"html_url": "https://github.com/huggingface/datasets/pull/982",
"diff_url": "https://github.com/huggingface/datasets/pull/982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/982.patch",
"merged_at": 1606904291000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/981/comments | https://api.github.com/repos/huggingface/datasets/issues/981/events | https://github.com/huggingface/datasets/pull/981 | 754,937,612 | MDExOlB1bGxSZXF1ZXN0NTMwNzQ0MTYx | 981 | add wisesight_sentiment take2 | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,884,659,000 | 1,606,905,433,000 | 1,606,905,433,000 | CONTRIBUTOR | null | Take 2, since last time the rebase issues were taking me too much time to fix as opposed to just opening a new one. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/981/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/981/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/981",
"html_url": "https://github.com/huggingface/datasets/pull/981",
"diff_url": "https://github.com/huggingface/datasets/pull/981.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/981.patch",
"merged_at": 1606905433000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/980/comments | https://api.github.com/repos/huggingface/datasets/issues/980/events | https://github.com/huggingface/datasets/pull/980 | 754,899,301 | MDExOlB1bGxSZXF1ZXN0NTMwNzEzNjY3 | 980 | Wongnai - Thai reviews dataset | {
"login": "mapmeld",
"id": 643918,
"node_id": "MDQ6VXNlcjY0MzkxOA==",
"avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mapmeld",
"html_url": "https://github.com/mapmeld",
"followers_url": "https://api.github.com/users/mapmeld/followers",
"following_url": "https://api.github.com/users/mapmeld/following{/other_user}",
"gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions",
"organizations_url": "https://api.github.com/users/mapmeld/orgs",
"repos_url": "https://api.github.com/users/mapmeld/repos",
"events_url": "https://api.github.com/users/mapmeld/events{/privacy}",
"received_events_url": "https://api.github.com/users/mapmeld/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you for contributing a Thai dataset, @mapmeld ! I'm super hyped. \r\nOne comment I may add is that wongnai-corpus has two datasets: review classification (this) and word tokenization (https://github.com/wongnai/wongnai-corpus/blob/master/search/labeled_queries_by_judges.txt).\r\nWould it be possible for you to rename this one something along the line of `wongnai-reviews` so that when/if we include the word tokenization dataset, we will know which is which.\r\n\r\nThis helps solve my check_code_quality issue.\r\n```\r\nmake style\r\nblack --line-length 119 --target-version py36 datasets/wongnai\r\nflake8 datasets/wongnai\r\nisort datasets/wongnai/wongnai.py\r\n```",
"@cstorm125 thanks! following your suggestions on formatting and on naming the dataset\r\n\r\nI am writing a blog post about Thai NLP and transformers (example: mBERT does 1-2 character tokens instead of doing word segmentation), started adding this dataset to use as an example, and then saw you were adding other datasets. Great work! And if you know any Thai BERT models beyond https://github.com/ThAIKeras/bert we should maybe talk over email!"
] | 1,606,879,208,000 | 1,606,923,281,000 | 1,606,923,005,000 | CONTRIBUTOR | null | 40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ ) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/980/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/980/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/980",
"html_url": "https://github.com/huggingface/datasets/pull/980",
"diff_url": "https://github.com/huggingface/datasets/pull/980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/980.patch",
"merged_at": 1606923004000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/979/comments | https://api.github.com/repos/huggingface/datasets/issues/979/events | https://github.com/huggingface/datasets/pull/979 | 754,893,337 | MDExOlB1bGxSZXF1ZXN0NTMwNzA4OTA5 | 979 | [WIP] Add multi woz | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,878,342,000 | 1,606,925,236,000 | 1,606,925,236,000 | MEMBER | null | This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2
It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md.
On the plus side, the structure is broadly similar to that of the Google Schema Guided dialogue [dataset](https://github.com/google-research-datasets/dstc8-schema-guided-dialogue), so I will take care of that one next. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/979/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/979/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/979",
"html_url": "https://github.com/huggingface/datasets/pull/979",
"diff_url": "https://github.com/huggingface/datasets/pull/979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/979.patch",
"merged_at": 1606925236000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/978/comments | https://api.github.com/repos/huggingface/datasets/issues/978/events | https://github.com/huggingface/datasets/pull/978 | 754,854,478 | MDExOlB1bGxSZXF1ZXN0NTMwNjc4NTUy | 978 | Add code refinement | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Also cc @madlag since I recall you wanted to work on CodeXGlue as well ?",
"Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?",
"> Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a consistent naming with all the other codexglue datasets ? What do you think @reshinthadithyan ?\r\n\r\nHello @madlag, I think you can retain that in your script. Let's stick onto the same file like how Glue is maintained.",
"Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?",
"> Hi @reshinthadithyan ! Are you still working on this version of the dataset or are we going with @madlag 's only ?\r\n\r\nHello, yes. We are going with Madlag's"
] | 1,606,872,598,000 | 1,607,305,978,000 | 1,607,305,978,000 | CONTRIBUTOR | null | ### OVERVIEW
Millions of open-source projects with numerous bug fixes
are available in code repositories. This proliferation
of software development histories can be leveraged to
learn how to fix common programming bugs.
Code refinement aims to automatically fix bugs in the code,
which can contribute to reducing the cost of bug-fixes for developers.
Given a piece of Java code with bugs,
the task is to remove the bugs to output the refined code. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/978/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/978",
"html_url": "https://github.com/huggingface/datasets/pull/978",
"diff_url": "https://github.com/huggingface/datasets/pull/978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/978.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/977/comments | https://api.github.com/repos/huggingface/datasets/issues/977/events | https://github.com/huggingface/datasets/pull/977 | 754,839,594 | MDExOlB1bGxSZXF1ZXN0NTMwNjY2ODg3 | 977 | Add ROPES dataset | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,870,330,000 | 1,606,906,716,000 | 1,606,906,715,000 | MEMBER | null | ROPES dataset
Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed as a reading comprehension task following SQuAD-style extractive QA.
One thing to note: labels of the test set are hidden (leaderboard submission) so I encoded that as an empty list (ropes.py:L125) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/977/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/977",
"html_url": "https://github.com/huggingface/datasets/pull/977",
"diff_url": "https://github.com/huggingface/datasets/pull/977.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/977.patch",
"merged_at": 1606906715000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/976/comments | https://api.github.com/repos/huggingface/datasets/issues/976/events | https://github.com/huggingface/datasets/pull/976 | 754,826,146 | MDExOlB1bGxSZXF1ZXN0NTMwNjU1NzM5 | 976 | Arabic pos dialect | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"looks like this PR includes changes about many other files than the oens for Araboc POS Dialect\r\n\r\nCan you create a another branch and another PR please ?",
"Sorry! I'm not sure how I managed to do that. I'll make a new branch."
] | 1,606,868,473,000 | 1,607,535,032,000 | 1,607,535,032,000 | CONTRIBUTOR | null | A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/976/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/976",
"html_url": "https://github.com/huggingface/datasets/pull/976",
"diff_url": "https://github.com/huggingface/datasets/pull/976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/976.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/975/comments | https://api.github.com/repos/huggingface/datasets/issues/975/events | https://github.com/huggingface/datasets/pull/975 | 754,823,701 | MDExOlB1bGxSZXF1ZXN0NTMwNjUzNjg4 | 975 | add MeTooMA dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,868,155,000 | 1,606,906,736,000 | 1,606,906,735,000 | CONTRIBUTOR | null | This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines.
Paper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
Dataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
- text-retrieval
task_ids:
- multi-class-classification
- multi-label-classification
---
# Dataset Card for #MeTooMA dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292
- **Point of Contact:** https://github.com/midas-research/MeTooMA
### Dataset Summary
- The dataset consists of tweets belonging to the #MeToo movement on Twitter, labeled into different categories.
- This dataset includes more data points and has more labels than any of the previous datasets that contain social media
posts about sexual abuse disclosures. Please refer to the Related Datasets of the publication for detailed information about this.
- Due to Twitter's development policies, the authors provide only the tweet IDs and corresponding labels,
other data can be fetched via Twitter API.
- The data has been labeled by experts, with the majority vote taken into account for deciding the final label.
- The authors provide these labels for each of the tweets.
- Relevance
- Directed Hate
- Generalized Hate
- Sarcasm
- Allegation
- Justification
- Refutation
- Support
- Oppose
- The definitions for each task/label are in the main publication.
- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data
extracted from this dataset.
- The language of all the tweets in this dataset is English
- Time period: October 2018 - December 2018
- Suggested Use Cases of this dataset:
- Evaluating usage of linguistic acts such as hate-speech and sarcasm in the context of public sexual abuse disclosures.
- Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.
- Identifying how influential people were portrayed on the public platform in the
events of mass social movements.
- Polarization analysis based on graph simulations of social nodes of users involved
in the #MeToo movement.
### Supported Tasks and Leaderboards
Multi-Label and Multi-Class Classification
### Languages
English
## Dataset Structure
- The dataset is structured into CSV format with TweetID and accompanying labels.
- Train and Test sets are split into respective files.
### Data Instances
Tweet ID and the appropriate labels
### Data Fields
Tweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID
### Data Splits
- Train: 7979
- Test: 1996
## Dataset Creation
### Curation Rationale
- Twitter was the major source of all the public disclosures of sexual abuse incidents during the #MeToo movement.
- People expressed their opinions over issues that were previously missing from the social media space.
- This provides an option to study the linguistic behaviors of social media users in an informal setting,
and therefore the authors decided to curate this annotated dataset.
- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.
- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.
### Source Data
- Source of all the data points in this dataset is a Twitter social media platform.
#### Initial Data Collection and Normalization
- All the tweets are mined from Twitter with initial search parameters identified using keywords from the #MeToo movement.
- Redundant keywords were removed based on manual inspection.
- Twitter's public streaming APIs were used for querying with the selected keywords.
- Based on text de-duplication and cosine similarity scores, the set of tweets was pruned.
- Non-English tweets were removed.
- The final set was labeled by experts, with the majority label taken into account for deciding the final label.
- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
#### Who are the source language producers?
Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292
### Annotations
#### Annotation process
- The authors chose against crowdsourcing for labeling this dataset due to its highly sensitive nature.
- The annotators are domain experts having degrees in advanced clinical psychology and gender studies.
- They were provided a guidelines document with instructions about each task and its definitions, labels, and examples.
- They studied the document, worked on a few examples to get used to this annotation task.
- They also provided feedback for improving the class definitions.
- The labels are not mutually exclusive: the presence of one label does not imply the absence of another.
#### Who are the annotators?
- The annotators are domain experts having a degree in clinical psychology and gender studies.
- Please refer to the accompanying paper for a detailed annotation process.
### Personal and Sensitive Information
- Considering Twitter's policy for distribution of data, only Tweet ID and applicable labels are shared for public use.
- It is highly encouraged to use this dataset for scientific purposes only.
- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.
## Considerations for Using the Data
### Social Impact of Dataset
- The authors of this dataset do not intend to conduct a population-centric analysis of the #MeToo movement on Twitter.
- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these
should be used to assist already existing human intervention tools and therapies.
- Enough care has been taken to ensure that this work does not come off as targeting a specific person for their
personal stance on issues pertaining to the #MeToo movement.
- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.
- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset
and the social impact of this work.
### Discussion of Biases
- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of
the community affected by sexual abuse.
- Any work undertaken on this dataset should aim to minimize the bias against minority groups which
might amplify in cases of a sudden outburst of public reactions over sensitive social media discussions.
### Other Known Limitations
- Considering privacy concerns, social media practitioners should be cautious about making automated interventions
to aid the victims of sexual abuse, as some people might prefer not to disclose their experiences.
- Concerned social media users might also withdraw their social information if they found out that it
is being used for computational purposes; hence it is important to seek individual consent
before trying to profile authors involved in online discussions, to uphold personal privacy.
## Additional Information
Please refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU
### Dataset Curators
- If you use the corpus in a product or application, then please credit the authors
and [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi](http://midas.iiitd.edu.in) appropriately.
Also, if you send us an email, we will be thrilled to know about how you have used the corpus.
- If interested in the commercial use of the corpus, send an email to midas@iiitd.ac.in.
- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India
disclaims any responsibility for the use of the corpus and does not provide technical support.
However, the contact listed above will be happy to respond to queries and clarifications
- Please feel free to send us an email:
- with feedback regarding the corpus.
- with information on how you have used the corpus.
- if interested in having us analyze your social media data.
- if interested in a collaborative research project.
### Licensing Information
[More Information Needed]
### Citation Information
Please cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292
```
@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/975/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/975",
"html_url": "https://github.com/huggingface/datasets/pull/975",
"diff_url": "https://github.com/huggingface/datasets/pull/975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/975.patch",
"merged_at": 1606906735000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/974/comments | https://api.github.com/repos/huggingface/datasets/issues/974/events | https://github.com/huggingface/datasets/pull/974 | 754,811,185 | MDExOlB1bGxSZXF1ZXN0NTMwNjQzNzQ3 | 974 | Add MeTooMA Dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,866,241,000 | 1,606,867,078,000 | 1,606,867,078,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/974/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/974",
"html_url": "https://github.com/huggingface/datasets/pull/974",
"diff_url": "https://github.com/huggingface/datasets/pull/974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/974.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/973/comments | https://api.github.com/repos/huggingface/datasets/issues/973/events | https://github.com/huggingface/datasets/pull/973 | 754,807,963 | MDExOlB1bGxSZXF1ZXN0NTMwNjQxMTky | 973 | Adding The Microsoft Terminology Collection dataset. | {
"login": "leoxzhao",
"id": 7915719,
"node_id": "MDQ6VXNlcjc5MTU3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7915719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoxzhao",
"html_url": "https://github.com/leoxzhao",
"followers_url": "https://api.github.com/users/leoxzhao/followers",
"following_url": "https://api.github.com/users/leoxzhao/following{/other_user}",
"gists_url": "https://api.github.com/users/leoxzhao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leoxzhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leoxzhao/subscriptions",
"organizations_url": "https://api.github.com/users/leoxzhao/orgs",
"repos_url": "https://api.github.com/users/leoxzhao/repos",
"events_url": "https://api.github.com/users/leoxzhao/events{/privacy}",
"received_events_url": "https://api.github.com/users/leoxzhao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I have to manually copy a dataset_infos.json file from other dataset and modify it since the `datasets-cli` isn't able to handle manually downloaded datasets yet (as far as I know).",
"you can generate the dataset_infos.json file even for dataset with manual data\r\nTo do so just specify `--data_dir <path/to/the/folder/containing/the/manual/data>`",
"Also, dummy_data seems having difficulty to handle manually downloaded datasets. `python datasets-cli dummy_data datasets/ms_terms --data_dir ...` reported `error: unrecognized arguments: --data_dir` error. Without `--data_dir`, it reported this error:\r\n```\r\nDataset ms_terms with config BuilderConfig(name='ms_terms-full', version=1.0.0, data_dir=None, data_files=None, description='...\\n') seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. Make sure you create the file None.\r\nTraceback (most recent call last):\r\n File \"datasets-cli\", line 36, in <module>\r\n service.run()\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 326, in run\r\n dataset_builder=dataset_builder, mock_dl_manager=mock_dl_manager\r\n File \"/Users/lzhao/Downloads/huggingface/datasets/src/datasets/commands/dummy_data.py\", line 406, in _print_dummy_data_instructions\r\n for split in generator_splits:\r\nUnboundLocalError: local variable 'generator_splits' referenced before assignment\r\n```",
"Oh yes `--data_dir` seems to only be supported for the `datasets_cli test` command. Sorry about that.\r\n\r\nCan you try to build the dummy_data.zip file manually ?\r\n\r\nIt has to be inside `./datasets/ms_terms/dummy/ms_terms-full/1.0.0`.\r\nInside this folder, please create a folder `dummy_data` that contains a dummy file `MicrosoftTermCollection.tbx` (with just a few examples in it). Then you can zip the `dummy_data` folder to `dummy_data.zip`\r\n\r\nThen you can check if it worked using the command\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms\r\n```\r\n\r\nFeel free to use some debugging print statements in your script if it doesn't work first try to see what `dl_manager.manual_dir` ends up being and also `path_to_manual_file`.\r\n\r\nFeel free to ping me if you have other questions",
"`pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_ms_terms` gave `1 passed, 4 warnings in 8.13s`. Existing datasets, like `wikihow`, and `newsroom`, also report 4 warnings. So, I guess that is not related to this dataset.",
"Could you run `make style` before we merge @leoxzhao ?",
"the other errors are fixed on master so it's fine",
"> Could you run `make style` before we merge @leoxzhao ?\r\n\r\nSure thing. Done. Thanks Quentin. I have other datasets in mind. All of which requires manual download. This process is very helpful",
"Thank you :) "
] | 1,606,865,783,000 | 1,607,095,544,000 | 1,607,094,766,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/973/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/973",
"html_url": "https://github.com/huggingface/datasets/pull/973",
"diff_url": "https://github.com/huggingface/datasets/pull/973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/973.patch",
"merged_at": 1607094766000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/972/comments | https://api.github.com/repos/huggingface/datasets/issues/972/events | https://github.com/huggingface/datasets/pull/972 | 754,787,314 | MDExOlB1bGxSZXF1ZXN0NTMwNjI0NTI3 | 972 | Add Children's Book Test (CBT) dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\n\r\nI guess this PR can be closed since we merged #2044?\r\n\r\nI have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?",
"Closing in favor of #2044, thanks again :)\r\n\r\n> I have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?\r\n\r\nYea it's ok actually, at that time I thought there was another homepage for this dataset"
] | 1,606,863,206,000 | 1,616,153,403,000 | 1,616,153,403,000 | MEMBER | null | Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016).
Sentence completion given a few sentences as context from a children's book. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/972/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/972",
"html_url": "https://github.com/huggingface/datasets/pull/972",
"diff_url": "https://github.com/huggingface/datasets/pull/972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/972.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/971/comments | https://api.github.com/repos/huggingface/datasets/issues/971/events | https://github.com/huggingface/datasets/pull/971 | 754,784,041 | MDExOlB1bGxSZXF1ZXN0NTMwNjIxOTQz | 971 | add piqa | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,862,824,000 | 1,606,903,082,000 | 1,606,903,081,000 | MEMBER | null | Physical Interaction: Question Answering (commonsense)
https://yonatanbisk.com/piqa/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/971/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/971",
"html_url": "https://github.com/huggingface/datasets/pull/971",
"diff_url": "https://github.com/huggingface/datasets/pull/971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/971.patch",
"merged_at": 1606903081000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/970/comments | https://api.github.com/repos/huggingface/datasets/issues/970/events | https://github.com/huggingface/datasets/pull/970 | 754,697,489 | MDExOlB1bGxSZXF1ZXN0NTMwNTUxNTkz | 970 | Add SWAG | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,854,065,000 | 1,606,902,916,000 | 1,606,902,915,000 | MEMBER | null | Commonsense NLI -> https://rowanzellers.com/swag/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/970/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/970",
"html_url": "https://github.com/huggingface/datasets/pull/970",
"diff_url": "https://github.com/huggingface/datasets/pull/970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/970.patch",
"merged_at": 1606902915000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/969/comments | https://api.github.com/repos/huggingface/datasets/issues/969/events | https://github.com/huggingface/datasets/pull/969 | 754,681,940 | MDExOlB1bGxSZXF1ZXN0NTMwNTM4ODQz | 969 | Add wiki auto dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,852,691,000 | 1,606,925,954,000 | 1,606,925,954,000 | MEMBER | null | This PR adds the WikiAuto sentence simplification dataset
https://github.com/chaojiang06/wiki-auto
This is also a prospective GEM task, hence the README.md | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/969/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/969",
"html_url": "https://github.com/huggingface/datasets/pull/969",
"diff_url": "https://github.com/huggingface/datasets/pull/969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/969.patch",
"merged_at": 1606925954000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/968/comments | https://api.github.com/repos/huggingface/datasets/issues/968/events | https://github.com/huggingface/datasets/pull/968 | 754,659,015 | MDExOlB1bGxSZXF1ZXN0NTMwNTIwMjEz | 968 | ADD Afrikaans NER | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my_dataset_name>\r\n```"
] | 1,606,850,583,000 | 1,606,902,088,000 | 1,606,902,088,000 | CONTRIBUTOR | null | Afrikaans NER corpus | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/968/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/968",
"html_url": "https://github.com/huggingface/datasets/pull/968",
"diff_url": "https://github.com/huggingface/datasets/pull/968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/968.patch",
"merged_at": 1606902088000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/967/comments | https://api.github.com/repos/huggingface/datasets/issues/967/events | https://github.com/huggingface/datasets/pull/967 | 754,578,988 | MDExOlB1bGxSZXF1ZXN0NTMwNDU0OTI3 | 967 | Add CS Restaurants dataset | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh yeah, for some reason I thought you had to do it after the merge, I'll get on it",
"Weird, now the CI seems to fail because of other datasets (XGLUE, Norwegian_NER)",
"Yea you just need to rebase from master",
"Re-opening a PR without the messed-up rebase"
] | 1,606,843,057,000 | 1,606,931,864,000 | 1,606,931,845,000 | MEMBER | null | This PR adds the Czech restaurants dataset for Czech NLG. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/967/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/967",
"html_url": "https://github.com/huggingface/datasets/pull/967",
"diff_url": "https://github.com/huggingface/datasets/pull/967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/967.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/966/comments | https://api.github.com/repos/huggingface/datasets/issues/966/events | https://github.com/huggingface/datasets/pull/966 | 754,558,686 | MDExOlB1bGxSZXF1ZXN0NTMwNDM4NDE4 | 966 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR",
"created new [PR](https://github.com/huggingface/datasets/pull/1016)\r\n\r\nclosing this!"
] | 1,606,841,413,000 | 1,606,934,743,000 | 1,606,934,730,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/966/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/966",
"html_url": "https://github.com/huggingface/datasets/pull/966",
"diff_url": "https://github.com/huggingface/datasets/pull/966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/966.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/965/comments | https://api.github.com/repos/huggingface/datasets/issues/965/events | https://github.com/huggingface/datasets/pull/965 | 754,553,169 | MDExOlB1bGxSZXF1ZXN0NTMwNDMzODQ2 | 965 | Add CLINC150 Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,840,980,000 | 1,606,841,476,000 | 1,606,841,355,000 | CONTRIBUTOR | null | Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/965/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/965",
"html_url": "https://github.com/huggingface/datasets/pull/965",
"diff_url": "https://github.com/huggingface/datasets/pull/965.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/965.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/964/comments | https://api.github.com/repos/huggingface/datasets/issues/964/events | https://github.com/huggingface/datasets/pull/964 | 754,474,660 | MDExOlB1bGxSZXF1ZXN0NTMwMzY4OTAy | 964 | Adding the WebNLG dataset | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is task is part of the GEM suite so will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) "
] | 1,606,835,123,000 | 1,606,930,445,000 | 1,606,930,445,000 | MEMBER | null | This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration.
More information can be found [here](https://webnlg-challenge.loria.fr/)
Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB even keeping only one example per file). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/964/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/964",
"html_url": "https://github.com/huggingface/datasets/pull/964",
"diff_url": "https://github.com/huggingface/datasets/pull/964.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/964.patch",
"merged_at": 1606930445000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/963/comments | https://api.github.com/repos/huggingface/datasets/issues/963/events | https://github.com/huggingface/datasets/pull/963 | 754,451,234 | MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4 | 963 | add CODAH dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,833,425,000 | 1,606,916,758,000 | 1,606,915,285,000 | MEMBER | null | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/963/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/963",
"html_url": "https://github.com/huggingface/datasets/pull/963",
"diff_url": "https://github.com/huggingface/datasets/pull/963.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/963.patch",
"merged_at": 1606915285000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/962/comments | https://api.github.com/repos/huggingface/datasets/issues/962/events | https://github.com/huggingface/datasets/pull/962 | 754,441,428 | MDExOlB1bGxSZXF1ZXN0NTMwMzQxMDA2 | 962 | Add Danish Political Comments Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,832,912,000 | 1,606,991,515,000 | 1,606,991,514,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/962/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/962",
"html_url": "https://github.com/huggingface/datasets/pull/962",
"diff_url": "https://github.com/huggingface/datasets/pull/962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/962.patch",
"merged_at": 1606991514000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/961/comments | https://api.github.com/repos/huggingface/datasets/issues/961/events | https://github.com/huggingface/datasets/issues/961 | 754,434,398 | MDU6SXNzdWU3NTQ0MzQzOTg= | 961 | sample multiple datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in distributed fasion,\r\nto save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks \r\n"
] | 1,606,832,402,000 | 1,606,872,764,000 | null | CONTRIBUTOR | null | Hi
I am dealing with multiple datasets and need a dataloader over them such that, in each batch, all samples come from a single dataset. My main question is:
- I need a way to sample the datasets with some weights first, let's say 2x dataset1 and 1x dataset2; could you point me to how I can do this? (a weighted-sampling sketch follows this record)
sub-questions:
- I want to concatenate the sampled datasets and define one dataloader over them; I then need a way to make sure each batch comes from a single dataset in every iteration. Could you assist me with how to do this?
- I use iterable-type datasets, but I still need a method of shuffling, since skipping it causes accuracy issues. Thanks for the help. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/961/timeline | null | null | null | null | false |
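A minimal sketch of the weighted, single-source-per-batch sampling asked about above, assuming PyTorch dataloaders over two map-style datasets; the dataset names, batch size, and step count are placeholders rather than anything prescribed by the library:

```
import random

from datasets import load_dataset
from torch.utils.data import DataLoader

# Placeholder datasets; any two map-style datasets work the same way.
ds1 = load_dataset("imdb", split="train")
ds2 = load_dataset("ag_news", split="train")

loaders = {
    "dataset1": DataLoader(ds1, batch_size=16, shuffle=True),
    "dataset2": DataLoader(ds2, batch_size=16, shuffle=True),
}
weights = {"dataset1": 2, "dataset2": 1}  # 2x dataset1, 1x dataset2

iterators = {name: iter(loader) for name, loader in loaders.items()}
names = list(loaders)
probs = [weights[name] for name in names]

for _ in range(1000):  # placeholder number of training steps
    name = random.choices(names, weights=probs, k=1)[0]
    try:
        batch = next(iterators[name])  # every batch comes from one dataset
    except StopIteration:
        iterators[name] = iter(loaders[name])  # restart an exhausted loader
        batch = next(iterators[name])
```

For example-level rather than batch-level mixing, recent versions of the library expose `datasets.interleave_datasets(..., probabilities=[...])`, and for the distributed part of the question a map-style dataset can be split per process with `dataset.shard(num_shards=world_size, index=rank)`; both calls assume a library version in which they exist.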
https://api.github.com/repos/huggingface/datasets/issues/960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/960/comments | https://api.github.com/repos/huggingface/datasets/issues/960/events | https://github.com/huggingface/datasets/pull/960 | 754,422,710 | MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx | 960 | Add code to automate parts of the dataset card | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,831,491,000 | 1,619,423,761,000 | 1,619,423,761,000 | MEMBER | null | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/960/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/960",
"html_url": "https://github.com/huggingface/datasets/pull/960",
"diff_url": "https://github.com/huggingface/datasets/pull/960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/960.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/959/comments | https://api.github.com/repos/huggingface/datasets/issues/959/events | https://github.com/huggingface/datasets/pull/959 | 754,418,610 | MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1 | 959 | Add Tunizi Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,831,179,000 | 1,607,005,301,000 | 1,607,005,300,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/959/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/959/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/959",
"html_url": "https://github.com/huggingface/datasets/pull/959",
"diff_url": "https://github.com/huggingface/datasets/pull/959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/959.patch",
"merged_at": 1607005300000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/958/comments | https://api.github.com/repos/huggingface/datasets/issues/958/events | https://github.com/huggingface/datasets/pull/958 | 754,404,095 | MDExOlB1bGxSZXF1ZXN0NTMwMzA5ODkz | 958 | dataset(ncslgr): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file)\r\nThe tests seem a bit unstable",
"the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 1,606,830,077,000 | 1,607,358,939,000 | 1,607,358,939,000 | CONTRIBUTOR | null | clean #789 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/958/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/958",
"html_url": "https://github.com/huggingface/datasets/pull/958",
"diff_url": "https://github.com/huggingface/datasets/pull/958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/958.patch",
"merged_at": 1607358939000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/957 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/957/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/957/comments | https://api.github.com/repos/huggingface/datasets/issues/957/events | https://github.com/huggingface/datasets/pull/957 | 754,380,073 | MDExOlB1bGxSZXF1ZXN0NTMwMjg5OTk4 | 957 | Isixhosa ner corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,828,116,000 | 1,606,846,498,000 | 1,606,846,498,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/957/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/957/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/957",
"html_url": "https://github.com/huggingface/datasets/pull/957",
"diff_url": "https://github.com/huggingface/datasets/pull/957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/957.patch",
"merged_at": 1606846498000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/956/comments | https://api.github.com/repos/huggingface/datasets/issues/956/events | https://github.com/huggingface/datasets/pull/956 | 754,368,378 | MDExOlB1bGxSZXF1ZXN0NTMwMjgwMzU1 | 956 | Add Norwegian NER | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Merging this one, good job and thank you @jplu :) "
] | 1,606,827,062,000 | 1,606,899,191,000 | 1,606,846,161,000 | CONTRIBUTOR | null | This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset.
I have added the `conllu` package as a test dependency. It is required to properly parse the `.conllu` files (see the parsing sketch after this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/956/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/956/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/956",
"html_url": "https://github.com/huggingface/datasets/pull/956",
"diff_url": "https://github.com/huggingface/datasets/pull/956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/956.patch",
"merged_at": 1606846161000
} | true |
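Since the PR above pulls in the `conllu` package for parsing, here is a minimal, illustrative sketch of how such files are typically read with it; the file name is hypothetical and this is not necessarily the dataset script's own loop:

```
from conllu import parse_incr

# Hypothetical file path; any CoNLL-U formatted file works.
with open("no_bokmaal-ud-train.conllu", encoding="utf-8") as f:
    for sentence in parse_incr(f):  # yields one TokenList per sentence
        forms = [token["form"] for token in sentence]
        # Sentence-level metadata such as sent_id lives in sentence.metadata
```

`parse_incr` streams one sentence at a time instead of loading the whole file, which keeps memory use low for large treebanks.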
https://api.github.com/repos/huggingface/datasets/issues/955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/955/comments | https://api.github.com/repos/huggingface/datasets/issues/955/events | https://github.com/huggingface/datasets/pull/955 | 754,367,291 | MDExOlB1bGxSZXF1ZXN0NTMwMjc5NDQw | 955 | Added PragmEval benchmark | {
"login": "sileod",
"id": 9168444,
"node_id": "MDQ6VXNlcjkxNjg0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9168444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sileod",
"html_url": "https://github.com/sileod",
"followers_url": "https://api.github.com/users/sileod/followers",
"following_url": "https://api.github.com/users/sileod/following{/other_user}",
"gists_url": "https://api.github.com/users/sileod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sileod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sileod/subscriptions",
"organizations_url": "https://api.github.com/users/sileod/orgs",
"repos_url": "https://api.github.com/users/sileod/repos",
"events_url": "https://api.github.com/users/sileod/events{/privacy}",
"received_events_url": "https://api.github.com/users/sileod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Really cool ! Thanks for adding this one :)\r\n> Good job at adding all those citations for each task\r\n> \r\n> Looks like the dummy data test doesn't pass. Maybe some files are missing in the dummy_data.zip files ?\r\n> The error reports `pragmeval/verifiability/train.tsv` to be missing\r\n> \r\n> Also could you add the tags part of the dataset card (the rest is optional) ?\r\n> See more info here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nIn the prior commits I generated dataset_infos and the dummy files myself\r\nNow they are generated with the cli, and the tests now seem to be passing better\r\nI will look into the tag\r\n",
"Looks like you did a good job with dummy data in the first place !\r\nThe downside of automatically generated dummy data is that the files are heavier (here 40KB per file).\r\nIf you could replace the generated dummy files with the one you created yourself it would be awesome, since the one you did yourself are way lighter (around 1KB per file). Using small files make `git clone` run faster so we encourage to use small dummy_data files.",
"could you rebase from master ? it should fix the CI",
"> could you rebase from master ? it should fix the CI\r\n\r\nI think it is due to the file structure of the dummy data that causes test failure. The automatically generated dummy data pass the tests",
"Indeed the error reports that `pragmeval/verifiability/train.tsv` is missing for the verifiability dummy_data.zip file.\r\nTo fix that you should add the missing data files in each dummy_data.zip file.\r\nTo test that your dummy data work you can run\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_\r\n```\r\nif some file is missing it should tell you which one",
"Also it looks like you haven't rebased from master yet, even though you did a `rebase` commit. \r\n\r\nrebasing should fix the other CI fails",
"It's ok if we have `RemoteDatasetTest ` errors, they're fixed on master",
"merging since the CI is fixed on master",
"Hey @sileod! Super nice to see you participating ;)\r\n\r\nDid you officially joined the sprint by posting on [the forum thread](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176) and joining our slack?\r\n\r\nI can't seem to find you there! Should I add you directly with your gmail address?",
"Hi @sileod 👋 "
] | 1,606,826,955,000 | 1,607,078,612,000 | 1,606,988,207,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/955/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/955",
"html_url": "https://github.com/huggingface/datasets/pull/955",
"diff_url": "https://github.com/huggingface/datasets/pull/955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/955.patch",
"merged_at": 1606988207000
} | true |
|
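The dummy-data workflow discussed in the comments above usually goes through the library's CLI; a sketch of the typical commands, where the auto-generation flag is assumed to exist in the installed version and the dataset name simply follows the example in the thread:

```
# Assumed flag name; check `datasets-cli dummy_data --help` for your version.
datasets-cli dummy_data datasets/pragmeval --auto_generate

# Then verify the dummy data against all configs, as suggested above:
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_pragmeval
```

Hand-trimmed archives of around 1 KB per file are preferred over the larger auto-generated ones because they keep `git clone` fast.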
https://api.github.com/repos/huggingface/datasets/issues/954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/954/comments | https://api.github.com/repos/huggingface/datasets/issues/954/events | https://github.com/huggingface/datasets/pull/954 | 754,362,012 | MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4 | 954 | add prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Test failing for same issues as https://github.com/huggingface/datasets/pull/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n===== 7 failed, 1309 passed, 932 skipped, 11 warnings in 166.71s (0:02:46) =====\r\n```",
"Closing and opening a new pull request to solve rebase issues",
"To be continued on https://github.com/huggingface/datasets/pull/982"
] | 1,606,826,455,000 | 1,606,885,931,000 | 1,606,884,232,000 | CONTRIBUTOR | null | `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The prachathai-67k dataset was scraped from the news site Prachathai.
We filtered out those articles with fewer than 500 characters of body text, mostly images and cartoons.
It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018.
The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125.
You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/954/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/954",
"html_url": "https://github.com/huggingface/datasets/pull/954",
"diff_url": "https://github.com/huggingface/datasets/pull/954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/954.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/953/comments | https://api.github.com/repos/huggingface/datasets/issues/953/events | https://github.com/huggingface/datasets/pull/953 | 754,359,942 | MDExOlB1bGxSZXF1ZXN0NTMwMjczMzg5 | 953 | added health_fact dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nInitially I tried int(-1) only in place of nan labels and missing values but I kept on getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object``` maybe because I'm sending int values (-1) to objects which are string type"
] | 1,606,826,264,000 | 1,606,864,293,000 | 1,606,864,293,000 | CONTRIBUTOR | null | Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/953/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/953",
"html_url": "https://github.com/huggingface/datasets/pull/953",
"diff_url": "https://github.com/huggingface/datasets/pull/953.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/953.patch",
"merged_at": 1606864293000
} | true |
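A minimal reproduction of the `ArrowTypeError` quoted in the comments above, plus the encodings that avoid it; this illustrates the type mismatch and is not the dataset script's actual code:

```
import pyarrow as pa

# A string-typed Arrow column rejects raw Python ints, which reproduces
# the quoted error.
try:
    pa.array(["a claim", -1], type=pa.string())
except pa.lib.ArrowTypeError as err:
    print(err)  # Expected bytes, got a 'int' object

# Encode missing labels as the string "-1", or as None (an Arrow null):
pa.array(["a claim", "-1"], type=pa.string())
pa.array(["a claim", None], type=pa.string())
```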
https://api.github.com/repos/huggingface/datasets/issues/952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/952/comments | https://api.github.com/repos/huggingface/datasets/issues/952/events | https://github.com/huggingface/datasets/pull/952 | 754,357,270 | MDExOlB1bGxSZXF1ZXN0NTMwMjcxMTQz | 952 | Add orange sum | {
"login": "moussaKam",
"id": 28675016,
"node_id": "MDQ6VXNlcjI4Njc1MDE2",
"avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moussaKam",
"html_url": "https://github.com/moussaKam",
"followers_url": "https://api.github.com/users/moussaKam/followers",
"following_url": "https://api.github.com/users/moussaKam/following{/other_user}",
"gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions",
"organizations_url": "https://api.github.com/users/moussaKam/orgs",
"repos_url": "https://api.github.com/users/moussaKam/repos",
"events_url": "https://api.github.com/users/moussaKam/events{/privacy}",
"received_events_url": "https://api.github.com/users/moussaKam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,826,014,000 | 1,606,837,440,000 | 1,606,837,440,000 | CONTRIBUTOR | null | Add OrangeSum, a French abstractive summarization dataset.
Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/952/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/952",
"html_url": "https://github.com/huggingface/datasets/pull/952",
"diff_url": "https://github.com/huggingface/datasets/pull/952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/952.patch",
"merged_at": 1606837440000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/951/comments | https://api.github.com/repos/huggingface/datasets/issues/951/events | https://github.com/huggingface/datasets/pull/951 | 754,349,979 | MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0 | 951 | Prachathai67k | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k"
] | 1,606,825,312,000 | 1,606,825,793,000 | 1,606,825,706,000 | CONTRIBUTOR | null | Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with fewer than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb).
This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that correspond to **classifying types of articles**:
* `การเมือง` - politics
* `สิทธิมนุษยชน` - human_rights
* `คุณภาพชีวิต` - quality_of_life
* `ต่างประเทศ` - international
* `สังคม` - social
* `สิ่งแวดล้อม` - environment
* `เศรษฐกิจ` - economics
* `วัฒนธรรม` - culture
* `แรงงาน` - labor
* `ความมั่นคง` - national_security
* `ไอซีที` - ict
* `การศึกษา` - education | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/951/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/951",
"html_url": "https://github.com/huggingface/datasets/pull/951",
"diff_url": "https://github.com/huggingface/datasets/pull/951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/951.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/950/comments | https://api.github.com/repos/huggingface/datasets/issues/950/events | https://github.com/huggingface/datasets/pull/950 | 754,318,686 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4OTQx | 950 | Support .xz file format | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,822,488,000 | 1,606,829,958,000 | 1,606,829,958,000 | MEMBER | null | Add support to extract/decompress files in .xz format (an illustrative stdlib sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/950/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/950/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/950",
"html_url": "https://github.com/huggingface/datasets/pull/950",
"diff_url": "https://github.com/huggingface/datasets/pull/950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/950.patch",
"merged_at": 1606829958000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/949/comments | https://api.github.com/repos/huggingface/datasets/issues/949/events | https://github.com/huggingface/datasets/pull/949 | 754,317,777 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky | 949 | Add GermaNER Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq added. "
] | 1,606,822,411,000 | 1,607,004,401,000 | 1,607,004,400,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/949/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/949",
"html_url": "https://github.com/huggingface/datasets/pull/949",
"diff_url": "https://github.com/huggingface/datasets/pull/949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/949.patch",
"merged_at": 1607004400000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/948/comments | https://api.github.com/repos/huggingface/datasets/issues/948/events | https://github.com/huggingface/datasets/pull/948 | 754,306,260 | MDExOlB1bGxSZXF1ZXN0NTMwMjI4NjQz | 948 | docs(ADD_NEW_DATASET): correct indentation for script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,821,458,000 | 1,606,821,918,000 | 1,606,821,918,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/948/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/948",
"html_url": "https://github.com/huggingface/datasets/pull/948",
"diff_url": "https://github.com/huggingface/datasets/pull/948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/948.patch",
"merged_at": 1606821918000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/947/comments | https://api.github.com/repos/huggingface/datasets/issues/947/events | https://github.com/huggingface/datasets/pull/947 | 754,286,658 | MDExOlB1bGxSZXF1ZXN0NTMwMjEyMjc3 | 947 | Add europeana newspapers | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,819,938,000 | 1,606,902,155,000 | 1,606,902,129,000 | CONTRIBUTOR | null | This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/947/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/947/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/947",
"html_url": "https://github.com/huggingface/datasets/pull/947",
"diff_url": "https://github.com/huggingface/datasets/pull/947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/947.patch",
"merged_at": 1606902129000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/946/comments | https://api.github.com/repos/huggingface/datasets/issues/946/events | https://github.com/huggingface/datasets/pull/946 | 754,278,632 | MDExOlB1bGxSZXF1ZXN0NTMwMjA1Nzgw | 946 | add PEC dataset | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhongpeixiang/orgs",
"repos_url": "https://api.github.com/users/zhongpeixiang/repos",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongpeixiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The checks failed again even if I didn't make any changes.",
"you just need to rebase from master to fix the CI :)",
"Sorry for the mess, I'm confused by the rebase and thus created a new branch."
] | 1,606,819,301,000 | 1,606,963,634,000 | 1,606,963,634,000 | CONTRIBUTOR | null | A persona-based empathetic conversation dataset published at EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/946/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/946",
"html_url": "https://github.com/huggingface/datasets/pull/946",
"diff_url": "https://github.com/huggingface/datasets/pull/946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/946.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/945/comments | https://api.github.com/repos/huggingface/datasets/issues/945/events | https://github.com/huggingface/datasets/pull/945 | 754,273,920 | MDExOlB1bGxSZXF1ZXN0NTMwMjAyMDM1 | 945 | Adding Babi dataset - English version | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Replaced by #1126"
] | 1,606,818,936,000 | 1,607,096,585,000 | 1,607,096,574,000 | MEMBER | null | Adding the English version of bAbI.
Samples are taken from ParlAI for consistency with the main users at the moment. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/945/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/945/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/945",
"html_url": "https://github.com/huggingface/datasets/pull/945",
"diff_url": "https://github.com/huggingface/datasets/pull/945.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/945.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/944/comments | https://api.github.com/repos/huggingface/datasets/issues/944/events | https://github.com/huggingface/datasets/pull/944 | 754,228,947 | MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5 | 944 | Add German Legal Entity Recognition Dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"thanks ! merging this one"
] | 1,606,815,502,000 | 1,607,000,816,000 | 1,607,000,815,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/944/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/944",
"html_url": "https://github.com/huggingface/datasets/pull/944",
"diff_url": "https://github.com/huggingface/datasets/pull/944.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/944.patch",
"merged_at": 1607000814000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/943/comments | https://api.github.com/repos/huggingface/datasets/issues/943/events | https://github.com/huggingface/datasets/pull/943 | 754,192,491 | MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3 | 943 | The FLUE Benchmark | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,813,250,000 | 1,606,836,278,000 | 1,606,836,270,000 | CONTRIBUTOR | null | This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content.
Two datasets are missing: the French Treebank, which can be used only for research purposes and which we are not allowed to distribute, and the Word Sense Disambiguation for Nouns dataset, which will be added later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/943/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/943/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/943",
"html_url": "https://github.com/huggingface/datasets/pull/943",
"diff_url": "https://github.com/huggingface/datasets/pull/943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/943.patch",
"merged_at": 1606836270000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/942/comments | https://api.github.com/repos/huggingface/datasets/issues/942/events | https://github.com/huggingface/datasets/issues/942 | 754,162,318 | MDU6SXNzdWU3NTQxNjIzMTg= | 942 | D | {
"login": "CryptoMiKKi",
"id": 74238514,
"node_id": "MDQ6VXNlcjc0MjM4NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/74238514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CryptoMiKKi",
"html_url": "https://github.com/CryptoMiKKi",
"followers_url": "https://api.github.com/users/CryptoMiKKi/followers",
"following_url": "https://api.github.com/users/CryptoMiKKi/following{/other_user}",
"gists_url": "https://api.github.com/users/CryptoMiKKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CryptoMiKKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CryptoMiKKi/subscriptions",
"organizations_url": "https://api.github.com/users/CryptoMiKKi/orgs",
"repos_url": "https://api.github.com/users/CryptoMiKKi/repos",
"events_url": "https://api.github.com/users/CryptoMiKKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/CryptoMiKKi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,810,630,000 | 1,607,013,773,000 | 1,607,013,773,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/942/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/941/comments | https://api.github.com/repos/huggingface/datasets/issues/941/events | https://github.com/huggingface/datasets/pull/941 | 754,141,321 | MDExOlB1bGxSZXF1ZXN0NTMwMDk0MTI2 | 941 | Add People's Daily NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel like it's going to take too much time writing the rest to fill it all you can just skip the paragraphs\n\nNope. I don't think there is a citation. Also, can I do the dataset card later (maybe in bulk)?",
"We're doing one PR = one dataset to keep track of things. Feel free to add the tags later in this PR if you want to.\r\nAlso only the tags are required now, because we don't want people spending too much time on the cards",
"added @lhoestq ",
"Merging since the CI is fixed on master"
] | 1,606,808,933,000 | 1,606,934,563,000 | 1,606,934,561,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/941/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/941",
"html_url": "https://github.com/huggingface/datasets/pull/941",
"diff_url": "https://github.com/huggingface/datasets/pull/941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/941.patch",
"merged_at": 1606934561000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/940/comments | https://api.github.com/repos/huggingface/datasets/issues/940/events | https://github.com/huggingface/datasets/pull/940 | 754,010,753 | MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2 | 940 | Add MSRA NER dataset | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions",
"organizations_url": "https://api.github.com/users/JetRunner/orgs",
"repos_url": "https://api.github.com/users/JetRunner/repos",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"received_events_url": "https://api.github.com/users/JetRunner/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM, don't forget the tags ;)"
] | 1,606,798,931,000 | 1,607,074,180,000 | 1,606,807,553,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/940/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/940",
"html_url": "https://github.com/huggingface/datasets/pull/940",
"diff_url": "https://github.com/huggingface/datasets/pull/940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/940.patch",
"merged_at": 1606807553000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/939/comments | https://api.github.com/repos/huggingface/datasets/issues/939/events | https://github.com/huggingface/datasets/pull/939 | 753,965,405 | MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz | 939 | add wisesight_sentiment | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions",
"organizations_url": "https://api.github.com/users/cstorm125/orgs",
"repos_url": "https://api.github.com/users/cstorm125/repos",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"received_events_url": "https://api.github.com/users/cstorm125/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n```",
"@cstorm125 I really like the dataset and dataset card but there seems to have been a rebase issue at some point since it's now changing 140 files :D \r\n\r\nCould you rebase from master?",
"I think it might be faster to close and reopen.",
"To be continued on: https://github.com/huggingface/datasets/pull/981"
] | 1,606,791,999,000 | 1,606,884,758,000 | 1,606,883,751,000 | CONTRIBUTOR | null | Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for wisesight_sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Paper:**
- **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/
- **Point of Contact:** https://github.com/PyThaiNLP/
### Dataset Summary
Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
- Released to public domain under Creative Commons Zero v1.0 Universal license.
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3}
- Size: 26,737 messages
- Language: Central Thai
- Style: Informal and conversational, with some news headlines and advertisements.
- Time period: Around 2016 to early 2019, with a small amount from other periods.
- Domains: Mixed. The majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs.
- Privacy:
  - Only messages that were made available to the public on the internet (websites, blogs, social network sites).
  - For Facebook, this means the public comments (visible to everyone) made on a public page.
  - Private/protected messages and messages in groups, chats, and inboxes are not included.
- Alterations and modifications:
  - Keep in mind that this corpus does not statistically represent anything in the language register.
  - A large number of messages are not in their original form. Personal data are removed or masked.
  - Duplicated, leading, and trailing whitespaces are removed. Other punctuation, symbols, and emojis are kept intact.
  - (Mis)spellings are kept intact.
  - Messages longer than 2,000 characters are removed.
  - Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
- More characteristics of the data can be explored in [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb)
### Supported Tasks and Leaderboards
Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/)
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'category': 'pos', 'texts': 'น่าสนนน'}
{'category': 'neu', 'texts': 'ครับ #phithanbkk'}
{'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'}
{'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'}
```
### Data Fields
- `texts`: texts
- `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3)
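A minimal usage sketch of these fields. It assumes the Hub id `wisesight_sentiment` from this PR and that `category` is exposed as a `ClassLabel`; the exact split keys may differ slightly (e.g. `validation` rather than `valid`):

```python
from datasets import load_dataset

# Sketch based on this card; dataset id and split names are assumptions.
ds = load_dataset("wisesight_sentiment")
print(ds)  # expected splits: train / validation / test

sample = ds["train"][0]
# Assuming `category` is stored as a ClassLabel id, map it back to its name.
label = ds["train"].features["category"].int2str(sample["category"])
print(sample["texts"], "->", label)  # e.g. "น่าสนนน -> pos"
```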
### Data Splits
| | train | valid | test |
|-----------|-------|-------|-------|
| # samples | 21628 | 2404 | 2671 |
| # neu | 11795 | 1291 | 1453 |
| # neg | 5491 | 637 | 683 |
| # pos | 3866 | 434 | 478 |
| # q | 476 | 42 | 57 |
| avg words | 27.21 | 27.18 | 27.12 |
| avg chars | 89.82 | 89.50 | 90.36 |
## Dataset Creation
### Curation Rationale
Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai.
### Source Data
#### Initial Data Collection and Normalization
- Style: Informal and conversational, with some news headlines and advertisements.
- Time period: Around 2016 to early 2019, with a small amount from other periods.
- Domains: Mixed. The majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs.
- Privacy:
  - Only messages that were made available to the public on the internet (websites, blogs, social network sites).
  - For Facebook, this means the public comments (visible to everyone) made on a public page.
  - Private/protected messages and messages in groups, chats, and inboxes are not included.
  - Usernames and non-public figure names are removed
  - Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
  - If you see any personal data still remaining in the set, please tell us so we can remove it.
- Alterations and modifications:
  - Keep in mind that this corpus does not statistically represent anything in the language register.
  - A large number of messages are not in their original form. Personal data are removed or masked.
  - Duplicated, leading, and trailing whitespaces are removed. Other punctuation, symbols, and emojis are kept intact.
  - (Mis)spellings are kept intact.
  - Messages longer than 2,000 characters are removed.
  - Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
#### Who are the source language producers?
Social media users in Thailand
### Annotations
#### Annotation process
- Sentiment values are assigned by human annotators.
- A human annotator put his/her best effort into assigning just one label, out of four, to a message.
- Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative.
- Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could have a positive sentiment value, if it shows interest in the product.
- Saying that another product or service is better is counted as negative.
- General information or news titles tend to be counted as neutral.
#### Who are the annotators?
Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/)
### Personal and Sensitive Information
- We try to exclude any known personally identifiable information from this data set.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remaining in the set, please tell us so we can remove it.
## Considerations for Using the Data
### Social Impact of Dataset
- `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai
- There is a risk of personal information escaping the anonymization process
### Discussion of Biases
- A message can be ambiguous. When possible, the judgement will be based solely on the text itself.
- In some situation, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess.
- In some cases, the human annotator may have had access to the message's context, such as an image. This additional information is not included as part of this corpus.
### Other Known Limitations
- The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question).
- Misspellings in social media texts make the word tokenization process for Thai difficult, thus impacting model performance
## Additional Information
### Dataset Curators
Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/
### Licensing Information
- If applicable, copyright of each message content belongs to the original poster.
- **Annotation data (labels) are released to public domain.**
- [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree with the labels made by the human annotators. This annotation is for research purposes and does not reflect the professional work that Wisesight has done for its customers.
- The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message.
### Citation Information
Please cite the following if you make use of the dataset:
Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September.
BibTeX:
```
@software{bact_2019_3457447,
author = {Suriyawongkul, Arthit and
Chuangsuwanich, Ekapol and
Chormai, Pattarawat and
Polpanumas, Charin},
title = {PyThaiNLP/wisesight-sentiment: First release},
month = sep,
year = 2019,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.3457447},
url = {https://doi.org/10.5281/zenodo.3457447}
}
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/939/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/939",
"html_url": "https://github.com/huggingface/datasets/pull/939",
"diff_url": "https://github.com/huggingface/datasets/pull/939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/939.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/938/comments | https://api.github.com/repos/huggingface/datasets/issues/938/events | https://github.com/huggingface/datasets/pull/938 | 753,940,979 | MDExOlB1bGxSZXF1ZXN0NTI5OTIxNzU5 | 938 | V-1.0.0 of isizulu_ner_corpus | {
"login": "yvonnegitau",
"id": 7923902,
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yvonnegitau",
"html_url": "https://github.com/yvonnegitau",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"closing since it's been added in #957 "
] | 1,606,788,272,000 | 1,606,865,676,000 | 1,606,865,676,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/938/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/938/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/938",
"html_url": "https://github.com/huggingface/datasets/pull/938",
"diff_url": "https://github.com/huggingface/datasets/pull/938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/938.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/937/comments | https://api.github.com/repos/huggingface/datasets/issues/937/events | https://github.com/huggingface/datasets/issues/937 | 753,921,078 | MDU6SXNzdWU3NTM5MjEwNzg= | 937 | Local machine/cluster Beam Datasets example/tutorial | {
"login": "shangw-nvidia",
"id": 66387198,
"node_id": "MDQ6VXNlcjY2Mzg3MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/66387198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shangw-nvidia",
"html_url": "https://github.com/shangw-nvidia",
"followers_url": "https://api.github.com/users/shangw-nvidia/followers",
"following_url": "https://api.github.com/users/shangw-nvidia/following{/other_user}",
"gists_url": "https://api.github.com/users/shangw-nvidia/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shangw-nvidia/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shangw-nvidia/subscriptions",
"organizations_url": "https://api.github.com/users/shangw-nvidia/orgs",
"repos_url": "https://api.github.com/users/shangw-nvidia/repos",
"events_url": "https://api.github.com/users/shangw-nvidia/events{/privacy}",
"received_events_url": "https://api.github.com/users/shangw-nvidia/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?"
] | 1,606,785,103,000 | 1,608,731,696,000 | null | NONE | null | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were far too many runtime errors I had to fix along the way, and even then I wasn't able to get either runner to produce the desired output correctly.
Thanks!
Shang | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/937/timeline | null | null | null | null | false |
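For the Beam question in the record above, a minimal local-runner sketch. `beam_runner="DirectRunner"` is the documented way to process a Beam dataset locally (memory-hungry, as the comment notes); the SparkRunner path is not shown since the thread reports it as unresolved, and the config name below is illustrative:

```python
from datasets import load_dataset

# Run the Beam preprocessing locally with Apache Beam's DirectRunner.
# The DirectRunner keeps a lot in memory, so a small config is a safer
# first test; the Wikipedia config name here is only an example.
wiki = load_dataset(
    "wikipedia",
    "20200501.frr",  # a small Wikipedia dump (North Frisian)
    beam_runner="DirectRunner",
)
print(wiki["train"][0]["title"])
```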
https://api.github.com/repos/huggingface/datasets/issues/936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/936/comments | https://api.github.com/repos/huggingface/datasets/issues/936/events | https://github.com/huggingface/datasets/pull/936 | 753,915,603 | MDExOlB1bGxSZXF1ZXN0NTI5OTAxODMw | 936 | Added HANS parses and categories | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,784,296,000 | 1,606,828,781,000 | 1,606,828,780,000 | MEMBER | null | This pull request adds HANS missing information: the sentence parses, as well as the heuristic category. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/936/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/936/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/936",
"html_url": "https://github.com/huggingface/datasets/pull/936",
"diff_url": "https://github.com/huggingface/datasets/pull/936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/936.patch",
"merged_at": 1606828780000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/935/comments | https://api.github.com/repos/huggingface/datasets/issues/935/events | https://github.com/huggingface/datasets/pull/935 | 753,863,055 | MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4 | 935 | add PIB dataset | {
"login": "vasudevgupta7",
"id": 53136577,
"node_id": "MDQ6VXNlcjUzMTM2NTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vasudevgupta7",
"html_url": "https://github.com/vasudevgupta7",
"followers_url": "https://api.github.com/users/vasudevgupta7/followers",
"following_url": "https://api.github.com/users/vasudevgupta7/following{/other_user}",
"gists_url": "https://api.github.com/users/vasudevgupta7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vasudevgupta7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vasudevgupta7/subscriptions",
"organizations_url": "https://api.github.com/users/vasudevgupta7/orgs",
"repos_url": "https://api.github.com/users/vasudevgupta7/repos",
"events_url": "https://api.github.com/users/vasudevgupta7/events{/privacy}",
"received_events_url": "https://api.github.com/users/vasudevgupta7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks",
"Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/pib/pib.py:20:1: F401 'json' imported but unused\r\ndatasets/pib/pib.py:36:84: W291 trailing whitespace\r\n```\r\nand \r\n```\r\nFAILED tests/test_file_encoding.py::TestFileEncoding::test_no_encoding_on_file_open\r\n```\r\n\r\nTo fix the `test_no_encoding_on_file_open` you just have to specify an encoding while opening a text file. For example `encoding=\"utf-8\"`\r\n",
"All suggested changes are done.",
"Nice ! can you re-generate the dataset_infos.json file to take into account the feature type change ?\r\n```\r\ndatasets-cli test ./datasets/pib --save_infos --all_configs --ignore_verifications\r\n```\r\nAnd also format your code ?\r\n```\r\nmake style\r\n```"
] | 1,606,776,943,000 | 1,606,864,631,000 | 1,606,864,631,000 | CONTRIBUTOR | null | This pull request will add PIB dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/935/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/935",
"html_url": "https://github.com/huggingface/datasets/pull/935",
"diff_url": "https://github.com/huggingface/datasets/pull/935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/935.patch",
"merged_at": 1606864631000
} | true |
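The encoding failure discussed in the PIB review thread above comes down to opening text files without an explicit encoding inside the loading script. A hedged sketch of the fix follows; the tab-separated format and field names are illustrative, not PIB's actual schema:

```python
def generate_examples(filepath):
    """Sketch of the fix suggested in the review: always pass an explicit
    encoding when opening text files in a loading script, otherwise the
    repo's test_no_encoding_on_file_open check fails."""
    with open(filepath, encoding="utf-8") as f:  # explicit encoding is the fix
        for idx, line in enumerate(f):
            src, tgt = line.rstrip("\n").split("\t")  # illustrative file format
            yield idx, {"translation": {"src": src, "tgt": tgt}}
```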
https://api.github.com/repos/huggingface/datasets/issues/934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/934/comments | https://api.github.com/repos/huggingface/datasets/issues/934/events | https://github.com/huggingface/datasets/pull/934 | 753,860,095 | MDExOlB1bGxSZXF1ZXN0NTI5ODU2ODY4 | 934 | small updates to the "add new dataset" guide | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @yjernite @lhoestq @thomwolf "
] | 1,606,776,550,000 | 1,606,798,582,000 | 1,606,778,040,000 | MEMBER | null | small updates (corrections/typos) to the "add new dataset" guide | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/934/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/934",
"html_url": "https://github.com/huggingface/datasets/pull/934",
"diff_url": "https://github.com/huggingface/datasets/pull/934.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/934.patch",
"merged_at": 1606778040000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/933/comments | https://api.github.com/repos/huggingface/datasets/issues/933/events | https://github.com/huggingface/datasets/pull/933 | 753,854,272 | MDExOlB1bGxSZXF1ZXN0NTI5ODUyMTI1 | 933 | Add NumerSense | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,775,793,000 | 1,606,854,350,000 | 1,606,852,316,000 | CONTRIBUTOR | null | Adds the NumerSense dataset
- Webpage/leaderboard: https://inklab.usc.edu/NumerSense/
- Paper: https://arxiv.org/abs/2005.00683
- Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to see whether your MLM can figure out the right number in a fill-in-the-blank task based on commonsense knowledge (a bird has **two** legs) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/933/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/933",
"html_url": "https://github.com/huggingface/datasets/pull/933",
"diff_url": "https://github.com/huggingface/datasets/pull/933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/933.patch",
"merged_at": 1606852316000
} | true |
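The NumerSense probing setup described in the record above can be approximated with a plain fill-mask pipeline; the model choice below is an assumption for illustration, not the paper's exact setup:

```python
from transformers import pipeline

# Probe a masked language model for numerical commonsense, NumerSense-style.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for pred in fill_mask("A bird usually has [MASK] legs."):
    print(f"{pred['token_str']:>8}  {pred['score']:.3f}")
# A model with numerical commonsense should rank "two" at the top.
```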
https://api.github.com/repos/huggingface/datasets/issues/932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/932/comments | https://api.github.com/repos/huggingface/datasets/issues/932/events | https://github.com/huggingface/datasets/pull/932 | 753,840,300 | MDExOlB1bGxSZXF1ZXN0NTI5ODQwNjQ3 | 932 | adding metooma dataset | {
"login": "akash418",
"id": 23264033,
"node_id": "MDQ6VXNlcjIzMjY0MDMz",
"avatar_url": "https://avatars.githubusercontent.com/u/23264033?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akash418",
"html_url": "https://github.com/akash418",
"followers_url": "https://api.github.com/users/akash418/followers",
"following_url": "https://api.github.com/users/akash418/following{/other_user}",
"gists_url": "https://api.github.com/users/akash418/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akash418/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akash418/subscriptions",
"organizations_url": "https://api.github.com/users/akash418/orgs",
"repos_url": "https://api.github.com/users/akash418/repos",
"events_url": "https://api.github.com/users/akash418/events{/privacy}",
"received_events_url": "https://api.github.com/users/akash418/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guidelines. \r\n\r\nPaper: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\nDataset Link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\nYAML tags:\r\nannotations_creators:\r\n- expert-generated\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- en\r\nmultilinguality:\r\n- monolingual\r\nsize_categories:\r\n- 1K<n<10K\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- text-classification\r\n- text-retrieval\r\ntask_ids:\r\n- multi-class-classification\r\n- multi-label-classification\r\n\r\n# Dataset Card for #MeTooMA dataset\r\n\r\n## Table of Contents\r\n- [Dataset Description](#dataset-description)\r\n - [Dataset Summary](#dataset-summary)\r\n - [Supported Tasks](#supported-tasks-and-leaderboards)\r\n - [Languages](#languages)\r\n- [Dataset Structure](#dataset-structure)\r\n - [Data Instances](#data-instances)\r\n - [Data Fields](#data-instances)\r\n - [Data Splits](#data-instances)\r\n- [Dataset Creation](#dataset-creation)\r\n - [Curation Rationale](#curation-rationale)\r\n - [Source Data](#source-data)\r\n - [Annotations](#annotations)\r\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\r\n- [Considerations for Using the Data](#considerations-for-using-the-data)\r\n - [Social Impact of Dataset](#social-impact-of-dataset)\r\n - [Discussion of Biases](#discussion-of-biases)\r\n - [Other Known Limitations](#other-known-limitations)\r\n- [Additional Information](#additional-information)\r\n - [Dataset Curators](#dataset-curators)\r\n - [Licensing Information](#licensing-information)\r\n - [Citation Information](#citation-information)\r\n\r\n## Dataset Description\r\n\r\n- **Homepage:** https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n- **Paper:** https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n- **Point of Contact:** https://github.com/midas-research/MeTooMA\r\n\r\n\r\n### Dataset Summary\r\n\r\n- The dataset consists of tweets belonging to #MeToo movement on Twitter, labelled into different categories.\r\n- This dataset includes more data points and has more labels than any of the previous datasets in that contain social media\r\nposts about sexual abuse discloures. 
Please refer to the Related Datasets of the publication for a detailed information about this.\r\n- Due to Twitters development policies, the authors provide only the tweet IDs and corresponding labels,\r\nother data can be fetched via Twitter API.\r\n- The data has been labelled by experts, with the majority taken into the account for deciding the final label.\r\n- The authors provide these labels for each of the tweets.\r\n - Relevance\r\n - Directed Hate\r\n - Generalized Hate\r\n - Sarcasm\r\n - Allegation\r\n - Justification\r\n - Refutation\r\n - Support\r\n - Oppose\r\n- The definitions for each task/label is in the main publication.\r\n- Please refer to the accompanying paper https://aaai.org/ojs/index.php/ICWSM/article/view/7292 for statistical analysis on the textual data\r\nextracted from this dataset.\r\n- The language of all the tweets in this dataset is English\r\n- Time period: October 2018 - December 2018\r\n- Suggested Use Cases of this dataset:\r\n - Evaluating usage of linguistic acts such as: hate-spech and sarcasm in the incontext of public sexual abuse discloures.\r\n - Extracting actionable insights and virtual dynamics of gender roles in sexual abuse revelations.\r\n - Identifying how influential people were potrayed on public platform in the\r\n events of mass social movements.\r\n - Polarization analysis based on graph simulations of social nodes of users involved\r\n in the #MeToo movement.\r\n\r\n\r\n### Supported Tasks and Leaderboards\r\n\r\nMulti Label and Multi-Class Classification\r\n\r\n### Languages\r\n\r\nEnglish\r\n\r\n## Dataset Structure\r\n- The dataset is structured into CSV format with TweetID and accompanying labels.\r\n- Train and Test sets are split into respective files.\r\n\r\n### Data Instances\r\n\r\nTweet ID and the appropriatelabels\r\n\r\n### Data Fields\r\n\r\nTweet ID and appropriate labels (binary label applicable for a data point) and multiple labels for each Tweet ID\r\n\r\n### Data Splits\r\n\r\n- Train: 7979\r\n- Test: 1996\r\n\r\n## Dataset Creation\r\n\r\n### Curation Rationale\r\n\r\n- Twitter was the major source of all the public discloures of sexual abuse incidents during the #MeToo movement.\r\n- People expressed their opinions over issues which were previously missing from the social media space.\r\n- This provides an option to study the linguistic behaviours of social media users in an informal setting,\r\ntherefore the authors decide to curate this annotated dataset.\r\n- The authors expect this dataset would be of great interest and use to both computational and socio-linguists.\r\n- For computational linguists, it provides an opportunity to model three new complex dialogue acts (allegation, refutation, and justification) and also to study how these acts interact with some of the other linguistic components like stance, hate, and sarcasm. 
For socio-linguists, it provides an opportunity to explore how a movement manifests in social media.\r\n\r\n\r\n### Source Data\r\n- Source of all the data points in this dataset is Twitter.\r\n\r\n#### Initial Data Collection and Normalization\r\n\r\n- All the tweets are mined from Twitter with initial search paramters identified using keywords from the #MeToo movement.\r\n- Redundant keywords were removed based on manual inspection.\r\n- Public streaming APIs of Twitter were used for querying with the selected keywords.\r\n- Based on text de-duplication and cosine similarity score, the set of tweets were pruned.\r\n- Non english tweets were removed.\r\n- The final set was labelled by experts with the majority label taken into the account for deciding the final label.\r\n- Please refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n#### Who are the source language producers?\r\n\r\nPlease refer to this paper for detailed information: https://ojs.aaai.org//index.php/ICWSM/article/view/7292\r\n\r\n### Annotations\r\n\r\n#### Annotation process\r\n\r\n- The authors chose against crowd sourcing for labeling this dataset due to its highly sensitive nature.\r\n- The annotators are domain experts having degress in advanced clinical psychology and gender studies.\r\n- They were provided a guidelines document with instructions about each task and its definitions, labels and examples.\r\n- They studied the document, worked a few examples to get used to this annotation task.\r\n- They also provided feedback for improving the class definitions.\r\n- The annotation process is not mutually exclusive, implying that presence of one label does not mean the\r\nabsence of the other one.\r\n\r\n\r\n#### Who are the annotators?\r\n\r\n- The annotators are domain experts having a degree in clinical psychology and gender studies.\r\n- Please refer to the accompnaying paper for a detailed annotation process.\r\n\r\n### Personal and Sensitive Information\r\n\r\n- Considering Twitters policy for distribution of data, only Tweet ID and applicable labels are shared for the public use.\r\n- It is highly encouraged to use this dataset for scientific purposes only.\r\n- This dataset collection completely follows the Twitter mandated guidelines for distribution and usage.\r\n\r\n## Considerations for Using the Data\r\n\r\n### Social Impact of Dataset\r\n\r\n- The authors of this dataset do not intend to conduct a population centric analysis of #MeToo movement on Twitter.\r\n- The authors acknowledge that findings from this dataset cannot be used as-is for any direct social intervention, these\r\nshould be used to assist already existing human intervention tools and therapies.\r\n- Enough care has been taken to ensure that this work comes of as trying to target a specific person for their\r\npersonal stance of issues pertaining to the #MeToo movement.\r\n- The authors of this work do not aim to vilify anyone accused in the #MeToo movement in any manner.\r\n- Please refer to the ethics and discussion section of the mentioned publication for appropriate sharing of this dataset\r\nand social impact of this work.\r\n\r\n\r\n### Discussion of Biases\r\n\r\n- The #MeToo movement acted as a catalyst for implementing social policy changes to benefit the members of\r\ncommunity affected by sexual abuse.\r\n- Any work undertaken on this dataset should aim to minimize the bias against minority groups which\r\nmight amplified in cases of sudden outburst of public reactions over 
sensitive social media discussions.\r\n\r\n### Other Known Limitations\r\n\r\n- Considering privacy concerns, social media practitioners should be aware of making automated interventions\r\nto aid the victims of sexual abuse as some people might not prefer to disclose their notions.\r\n- Concerned social media users might also repeal their social information, if they found out that their\r\ninformation is being used for computational purposes, hence it is important seek subtle individual consent\r\nbefore trying to profile authors involved in online discussions to uphold personal privacy.\r\n\r\n## Additional Information\r\n\r\nPlease refer to this link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/JN4EYU\r\n\r\n### Dataset Curators\r\n\r\n- If you use the corpus in a product or application, then please credit the authors\r\nand [Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi]\r\n(http://midas.iiitd.edu.in) appropriately.\r\nAlso, if you send us an email, we will be thrilled to know about how you have used the corpus.\r\n- If interested in commercial use of the corpus, send email to midas@iiitd.ac.in.\r\n- Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India\r\ndisclaims any responsibility for the use of the corpus and does not provide technical support.\r\nHowever, the contact listed above will be happy to respond to queries and clarifications\r\n- Please feel free to send us an email:\r\n - with feedback regarding the corpus.\r\n - with information on how you have used the corpus.\r\n - if interested in having us analyze your social media data.\r\n - if interested in a collaborative research project.\r\n\r\n### Licensing Information\r\n\r\n[More Information Needed]\r\n\r\n### Citation Information\r\n\r\nPlease cite the following publication if you make use of the dataset: https://ojs.aaai.org/index.php/ICWSM/article/view/7292\r\n\r\n```\r\n\r\n@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020, title={#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement}, volume={14}, url={https://aaai.org/ojs/index.php/ICWSM/article/view/7292}, abstractNote={<p>In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.</p&gt;}, number={1}, journal={Proceedings of the International AAAI Conference on Web and Social Media}, author={Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn}, year={2020}, month={May}, pages={209-216} }\r\n\r\n```\r\n\r\n\r\n\r\n",
"Hi, @lhoestq I have resolved all the comments you have raised. Can you review the PR again? However, I do need assistance on how to remove other files that came along in my PR. Should I manually delete unwanted files from the PR raised?",
"I am closing this PR, @lhoestq please review this PR instead https://github.com/huggingface/datasets/pull/975 where I have removed the unwanted files of other datasets and addressed each of your points. "
] | 1,606,774,189,000 | 1,606,869,474,000 | 1,606,869,474,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/932/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/932",
"html_url": "https://github.com/huggingface/datasets/pull/932",
"diff_url": "https://github.com/huggingface/datasets/pull/932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/932.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/931/comments | https://api.github.com/repos/huggingface/datasets/issues/931/events | https://github.com/huggingface/datasets/pull/931 | 753,818,193 | MDExOlB1bGxSZXF1ZXN0NTI5ODIzMDYz | 931 | [WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32 | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,606,771,821,000 | 1,606,771,821,000 | null | MEMBER | null | Getting a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1`
Didn't manage to see how to solve that.
Putting it aside for now.
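For what it's worth, a `Bad CRC-32` usually means the archive itself was corrupted in transit rather than a code problem. A minimal sketch for locating the damaged member, assuming the archive has been saved locally as `web_snippets_train.json.zip` (an assumed path):

```python
import zipfile

archive_path = "web_snippets_train.json.zip"  # assumed local download path

with zipfile.ZipFile(archive_path) as zf:
    # testzip() reads every member and returns the name of the first file
    # whose CRC-32 check fails, or None if the whole archive is intact.
    bad_member = zf.testzip()

if bad_member is None:
    print("archive is intact")
else:
    print(f"corrupted member: {bad_member} -- try re-downloading the archive")
```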
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/931/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 1,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/931/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/931",
"html_url": "https://github.com/huggingface/datasets/pull/931",
"diff_url": "https://github.com/huggingface/datasets/pull/931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/931.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/930/comments | https://api.github.com/repos/huggingface/datasets/issues/930/events | https://github.com/huggingface/datasets/pull/930 | 753,801,204 | MDExOlB1bGxSZXF1ZXN0NTI5ODA5MzM1 | 930 | Lambada | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,770,153,000 | 1,606,783,032,000 | 1,606,783,031,000 | MEMBER | null | Added LAMBADA dataset.
A couple of points of attention (mostly because I am not sure):
- The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples (see the sketch after this list).
- The dev and test splits don't have the `category` field so I put `None` by default.
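A minimal sketch of the nested extraction (the archive and member names here are assumptions for illustration):

```python
import tarfile

# The outer archive is gzip-compressed; the training examples sit in a
# second, uncompressed .tar nested inside it.
with tarfile.open("lambada-dataset.tar.gz", "r:gz") as outer:  # assumed name
    outer.extractall("lambada/")

with tarfile.open("lambada/train-novels.tar", "r:") as inner:  # assumed name
    inner.extractall("lambada/train/")
```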
Happy to make changes if it doesn't respect the guidelines!
Victor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/930/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/930",
"html_url": "https://github.com/huggingface/datasets/pull/930",
"diff_url": "https://github.com/huggingface/datasets/pull/930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/930.patch",
"merged_at": 1606783031000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/929/comments | https://api.github.com/repos/huggingface/datasets/issues/929/events | https://github.com/huggingface/datasets/pull/929 | 753,737,794 | MDExOlB1bGxSZXF1ZXN0NTI5NzU4NTU3 | 929 | Add weibo NER dataset | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,764,167,000 | 1,607,002,615,000 | 1,607,002,614,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/929/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/929",
"html_url": "https://github.com/huggingface/datasets/pull/929",
"diff_url": "https://github.com/huggingface/datasets/pull/929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/929.patch",
"merged_at": 1607002614000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/928/comments | https://api.github.com/repos/huggingface/datasets/issues/928/events | https://github.com/huggingface/datasets/pull/928 | 753,722,324 | MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx | 928 | Add the Multilingual Amazon Reviews Corpus | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,762,686,000 | 1,606,838,670,000 | 1,606,838,667,000 | CONTRIBUTOR | null | - **Name:** *Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)
- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.
- **Paper:** https://arxiv.org/abs/2010.02573
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/928/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/928/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/928",
"html_url": "https://github.com/huggingface/datasets/pull/928",
"diff_url": "https://github.com/huggingface/datasets/pull/928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/928.patch",
"merged_at": 1606838667000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/927/comments | https://api.github.com/repos/huggingface/datasets/issues/927/events | https://github.com/huggingface/datasets/issues/927 | 753,679,020 | MDU6SXNzdWU3NTM2NzkwMjA= | 927 | Hello | {
"login": "k125-ak",
"id": 75259546,
"node_id": "MDQ6VXNlcjc1MjU5NTQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/75259546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/k125-ak",
"html_url": "https://github.com/k125-ak",
"followers_url": "https://api.github.com/users/k125-ak/followers",
"following_url": "https://api.github.com/users/k125-ak/following{/other_user}",
"gists_url": "https://api.github.com/users/k125-ak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/k125-ak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/k125-ak/subscriptions",
"organizations_url": "https://api.github.com/users/k125-ak/orgs",
"repos_url": "https://api.github.com/users/k125-ak/repos",
"events_url": "https://api.github.com/users/k125-ak/events{/privacy}",
"received_events_url": "https://api.github.com/users/k125-ak/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,758,605,000 | 1,606,758,630,000 | 1,606,758,630,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/927/timeline | null | completed | null | null | false |
|
https://api.github.com/repos/huggingface/datasets/issues/926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/926/comments | https://api.github.com/repos/huggingface/datasets/issues/926/events | https://github.com/huggingface/datasets/pull/926 | 753,676,069 | MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy | 926 | add inquisitive | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?",
"> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should definitely find a way to make it work with only a few articles.\r\n\r\nIf it doesn't work right now for dummy data, I guess it's because it tries to load every single article file ?\r\n\r\nIf so, then maybe you can use `os.listdir` method to first check all the data files available in the path where the `articles.tgz` file is extracted. Then you can simply iter through the data files and depending on their ID, include them in the train or test set. With this method you should be able to have only a few articles files per split in the dummy data. Does that make sense ?",
"fixed! so the issue was, `articles_ids` were prepared based on the number of files in articles dir, so for dummy data questions it was not able to load some articles due to incorrect ids and the test was failing"
] | 1,606,758,322,000 | 1,606,916,722,000 | 1,606,916,413,000 | MEMBER | null | Adding inquisitive qg dataset
More info: https://github.com/wjko2/INQUISITIVE | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/926/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/926",
"html_url": "https://github.com/huggingface/datasets/pull/926",
"diff_url": "https://github.com/huggingface/datasets/pull/926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/926.patch",
"merged_at": 1606916413000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/925/comments | https://api.github.com/repos/huggingface/datasets/issues/925/events | https://github.com/huggingface/datasets/pull/925 | 753,672,661 | MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4 | 925 | Add Turku NLP Corpus for Finnish NER | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
 Did you generate">
 "> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller or keep it like this?\r\n\r\n"
] | 1,606,758,019,000 | 1,607,004,431,000 | 1,607,004,430,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/925/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/925",
"html_url": "https://github.com/huggingface/datasets/pull/925",
"diff_url": "https://github.com/huggingface/datasets/pull/925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/925.patch",
"merged_at": 1607004430000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/924/comments | https://api.github.com/repos/huggingface/datasets/issues/924/events | https://github.com/huggingface/datasets/pull/924 | 753,631,951 | MDExOlB1bGxSZXF1ZXN0NTI5NjcyMzgw | 924 | Add DART | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM!"
] | 1,606,754,557,000 | 1,606,878,822,000 | 1,606,878,821,000 | MEMBER | null | - **Name:** *DART*
- **Description:** *DART is a large dataset for open-domain structured data record to text generation.*
- **Paper:** *https://arxiv.org/abs/2007.02871*
- **Data:** *https://github.com/Yale-LILY/dart#leaderboard*
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/924/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/924",
"html_url": "https://github.com/huggingface/datasets/pull/924",
"diff_url": "https://github.com/huggingface/datasets/pull/924.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/924.patch",
"merged_at": 1606878821000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/923/comments | https://api.github.com/repos/huggingface/datasets/issues/923/events | https://github.com/huggingface/datasets/pull/923 | 753,569,220 | MDExOlB1bGxSZXF1ZXN0NTI5NjIyMDQx | 923 | Add CC-100 dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Hello @lhoestq, I would like just to ask you if it is OK that I include this feature 9f32ba1 in this PR or you would prefer to have it in a separate one.\r\n\r\nI was wondering whether include also a test, but I did not find any test for the other file formats...",
"Hi ! Sure that would be valuable to support .xz files. Feel free to open a separate PR for this.\r\nAnd feel free to create the first test case for extracting compressed files if you have some inspiration (maybe create test_file_utils.py ?). We can still spend more time on tests next week when the sprint is over though so don't spend too much time on it.",
"@lhoestq, DONE! ;) See PR #950.",
"Thanks for adding support for `.xz` files :)\r\n\r\nFeel free to rebase from master to include it in your PR",
"@lhoestq DONE; I have merged instead, to avoid changing the history of my public PR ;)",
"Hi @lhoestq, I would need that you generate the dataset_infos.json and the dummy data for this dataset with a bigger computer. Sorry, but my laptop did not succeed...",
"Thanks for your work @albertvillanova \r\nWe'll definitely look into it after this sprint :)",
"Looks like #1456 added CC100 already.\r\nThe difference with your approach is that this implementation uses the `BuilderConfig` parameters to allow the creation of custom configs for all the languages, without having to specify them in the `BUILDER_CONFIGS` class attribute.\r\nFor example even if the dataset doesn't have a config for english already, you can still load the english CC100 with\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cc100\", lang=\"en\")\r\n```",
"@lhoestq, oops!! I remember having assigned this dataset to me in the Google sheet, besides having mentioned the corresponding issue in the Pull Request... Nevermind! :)",
"Yes indeed I can see that...\r\nSorry for noticing that only now \r\n\r\nThe code of the other PR ended up being pretty close to yours though\r\nIf you want to add more details to the cc100 dataset card or in the script feel to do so, any addition is welcome"
] | 1,606,749,802,000 | 1,618,925,657,000 | 1,618,925,657,000 | MEMBER | null | Add CC-100.
Close #773 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/923/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/923",
"html_url": "https://github.com/huggingface/datasets/pull/923",
"diff_url": "https://github.com/huggingface/datasets/pull/923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/923.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/922/comments | https://api.github.com/repos/huggingface/datasets/issues/922/events | https://github.com/huggingface/datasets/pull/922 | 753,559,130 | MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4 | 922 | Add XOR QA Dataset | {
"login": "sumanthd17",
"id": 28291870,
"node_id": "MDQ6VXNlcjI4MjkxODcw",
"avatar_url": "https://avatars.githubusercontent.com/u/28291870?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sumanthd17",
"html_url": "https://github.com/sumanthd17",
"followers_url": "https://api.github.com/users/sumanthd17/followers",
"following_url": "https://api.github.com/users/sumanthd17/following{/other_user}",
"gists_url": "https://api.github.com/users/sumanthd17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sumanthd17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sumanthd17/subscriptions",
"organizations_url": "https://api.github.com/users/sumanthd17/orgs",
"repos_url": "https://api.github.com/users/sumanthd17/repos",
"events_url": "https://api.github.com/users/sumanthd17/events{/privacy}",
"received_events_url": "https://api.github.com/users/sumanthd17/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite ",
"> I followed the instructions mentioned there but my dataset isn't showing up in the dropdown list. Am I missing something here? @yjernite\r\n\r\nThe best way is to run the tagging app locally and provide it the location to the `dataset_infos.json` after you've run the CLI:\r\nhttps://github.com/huggingface/datasets-tagging\r\n",
"This is a really good data card!!\r\n\r\nSmall changes to make it even better:\r\n- Tags: the dataset has both \"original\" data and data that is \"extended\" from a source dataset: TydiQA - you should choose both options in the tagging apps\r\n- The language and annotation creator tags are off: the language here is the questions: I understand it's a mix of crowd-sourced and expert-generated? Is there any machine translation involved? The annotations are the span selections: is that crowd-sourced?\r\n- Personal and sensitive information: there should be a statement there, even if only to say that none could be found or that it only mentions public figures"
] | 1,606,749,054,000 | 1,606,878,741,000 | 1,606,878,741,000 | CONTRIBUTOR | null | Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/922/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/922",
"html_url": "https://github.com/huggingface/datasets/pull/922",
"diff_url": "https://github.com/huggingface/datasets/pull/922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/922.patch",
"merged_at": 1606878741000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/920/comments | https://api.github.com/repos/huggingface/datasets/issues/920/events | https://github.com/huggingface/datasets/pull/920 | 753,445,747 | MDExOlB1bGxSZXF1ZXN0NTI5NTIzMTgz | 920 | add dream dataset | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can't find the information just leave `[More Information Needed]`",
"@lhoestq since datset cards are optional for this sprint I'll add those later. Good for merge.",
"Indeed we only require the tags to be added now (the yaml part at the top of the dataset card).\r\nCould you add them please ?\r\nYou can find more infos here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card",
"@lhoestq added tags, I'll fill rest of the info after current sprint :)",
"The tests are failing tests for other datasets, not this one.",
"@lhoestq could you tell me why these tests are failing, they don't seem related to this PR. "
] | 1,606,740,014,000 | 1,607,013,912,000 | 1,606,923,552,000 | MEMBER | null | Adding DREAM: a Dataset and Models for Dialogue-Based Reading Comprehension
More details:
https://dataset.org/dream/
https://github.com/nlpdata/dream | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/920/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/920",
"html_url": "https://github.com/huggingface/datasets/pull/920",
"diff_url": "https://github.com/huggingface/datasets/pull/920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/920.patch",
"merged_at": 1606923552000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/919/comments | https://api.github.com/repos/huggingface/datasets/issues/919/events | https://github.com/huggingface/datasets/issues/919 | 753,434,472 | MDU6SXNzdWU3NTM0MzQ0NzI= | 919 | wrong length with datasets | {
"login": "rabeehk",
"id": 6278280,
"node_id": "MDQ6VXNlcjYyNzgyODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rabeehk",
"html_url": "https://github.com/rabeehk",
"followers_url": "https://api.github.com/users/rabeehk/followers",
"following_url": "https://api.github.com/users/rabeehk/following{/other_user}",
"gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions",
"organizations_url": "https://api.github.com/users/rabeehk/orgs",
"repos_url": "https://api.github.com/users/rabeehk/repos",
"events_url": "https://api.github.com/users/rabeehk/events{/privacy}",
"received_events_url": "https://api.github.com/users/rabeehk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ",
"sorry I misunderstood length of dataset with dataloader, closed. thanks "
] | 1,606,739,019,000 | 1,606,739,847,000 | 1,606,739,846,000 | CONTRIBUTOR | null | Hi
I have an MRPC dataset which I convert to seq2seq format; it is then of this format:
`Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10)`
I feed it to a dataloader:
```python
from torch.utils.data import DataLoader

dataloader = DataLoader(
    train_dataset,
    batch_size=self.args.train_batch_size,
    sampler=train_sampler,
    collate_fn=self.data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.dataloader_num_workers,
)
```
Now if I type `len(dataloader)` I get 1, which is wrong; this needs to be 10. Could you assist me please? Thanks
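For reference, the length of a PyTorch `DataLoader` is the number of batches, not the number of examples. A minimal standalone sketch illustrating this (the toy dataset below is a hypothetical stand-in for the 10-row dataset above):

```python
import math

from torch.utils.data import DataLoader

# Hypothetical 10-row stand-in for the seq2seq dataset above.
train_dataset = [{"src_texts": f"source {i}", "tgt_texts": f"target {i}"} for i in range(10)]

# With batch_size=10, one batch holds all ten rows: the loader has length 1.
dataloader = DataLoader(train_dataset, batch_size=10)
assert len(train_dataset) == 10
assert len(dataloader) == 1

# In general, with drop_last=False: len(loader) == ceil(len(dataset) / batch_size).
loader = DataLoader(train_dataset, batch_size=4, drop_last=False)
assert len(loader) == math.ceil(len(train_dataset) / 4) == 3
```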
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/919/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/918/comments | https://api.github.com/repos/huggingface/datasets/issues/918/events | https://github.com/huggingface/datasets/pull/918 | 753,397,440 | MDExOlB1bGxSZXF1ZXN0NTI5NDgzOTk4 | 918 | Add conll2002 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,735,775,000 | 1,606,761,270,000 | 1,606,761,269,000 | MEMBER | null | Adding the Conll2002 dataset for NER.
More info here : https://www.clips.uantwerpen.be/conll2002/ner/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/918/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/918",
"html_url": "https://github.com/huggingface/datasets/pull/918",
"diff_url": "https://github.com/huggingface/datasets/pull/918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/918.patch",
"merged_at": 1606761269000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/917/comments | https://api.github.com/repos/huggingface/datasets/issues/917/events | https://github.com/huggingface/datasets/pull/917 | 753,391,591 | MDExOlB1bGxSZXF1ZXN0NTI5NDc5MTIy | 917 | Addition of Concode Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Testing command doesn't work\r\n###trace\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests/test_dataset_common.py - absl.testing.parameterized.NoTestsError: parameterized test decorators did not generate any tests. Ma...\r\n====================================================== 2 warnings, 1 error in 54.23s ======================================================= \r\nERROR: not found: G:\\Work Related\\hf\\datasets\\tests\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode\r\n(no name 'G:\\\\Work Related\\\\hf\\\\datasets\\\\tests\\\\test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_concode' in any of [<Module test_dataset_common.py>])\r\n",
"Hello @lhoestq Test checks are passing in my local, but the commit fails in ci. Any idea onto why? \r\n#### Dummy Dataset Test \r\n====================================================== 1 passed, 6 warnings in 7.14s ======================================================= \r\n#### Real Dataset Test \r\n====================================================== 1 passed, 6 warnings in 25.54s ====================================================== ",
"Hello @lhoestq, Have a look, I've changed the file according to the reviews. Thanks!",
"@reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"> @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n\r\nHello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks",
 > @reshinthadithyan">
 "> > @reshinthadithyan that's a great start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)\r\n> \r\n> Hello @yjernite I'm facing issues in using the datasets-tagger Refer #1 in datasets-tagger. Thanks\r\n\r\nHi @reshinthadithyan ! Did you try with the latest version of the tagger? What issues are you facing?\r\n\r\nWe've also relaxed the dataset card requirement for now; you'll only need to add the tags :) ",
"Could you work on another branch when adding different datasets ?\r\nThe idea is to have one PR per dataset",
"Thanks ! The github diff looks all clean now :) \r\nTo fix the CI you just need to rebase from master\r\n\r\nDon't forget to add the tags of the dataset card. It's the yaml part at the top of the dataset card\r\nMore infor here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nThe issue you had with the tagger should be fixed now by https://github.com/huggingface/datasets-tagging/pull/5\r\n"
] | 1,606,735,259,000 | 1,609,210,536,000 | 1,609,210,536,000 | CONTRIBUTOR | null | ## Overview
The CONCODE dataset contains pairs of NL queries and the corresponding code (contextual code generation).
Reference Links
Paper Link = https://arxiv.org/pdf/1904.09086.pdf
GitHub Link = https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/917/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/917",
"html_url": "https://github.com/huggingface/datasets/pull/917",
"diff_url": "https://github.com/huggingface/datasets/pull/917.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/917.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/916/comments | https://api.github.com/repos/huggingface/datasets/issues/916/events | https://github.com/huggingface/datasets/pull/916 | 753,376,643 | MDExOlB1bGxSZXF1ZXN0NTI5NDY3MTkx | 916 | Add Swedish NER Corpus | {
"login": "abhishekkrthakur",
"id": 1183441,
"node_id": "MDQ6VXNlcjExODM0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhishekkrthakur",
"html_url": "https://github.com/abhishekkrthakur",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes the use of configs is optional",
"@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[More Information Needed]` text"
] | 1,606,733,991,000 | 1,606,878,650,000 | 1,606,878,649,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/916/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/916",
"html_url": "https://github.com/huggingface/datasets/pull/916",
"diff_url": "https://github.com/huggingface/datasets/pull/916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/916.patch",
"merged_at": 1606878649000
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/915/comments | https://api.github.com/repos/huggingface/datasets/issues/915/events | https://github.com/huggingface/datasets/issues/915 | 753,118,481 | MDU6SXNzdWU3NTMxMTg0ODE= | 915 | Shall we change the hashing to encoding to reduce potential replicated cache files? | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?",
"@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now.\r\n- or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing.\r\nIf we find one, we can adjust the list in `self._fingerprint` to it.\r\n\r\nAs for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders.\r\n\r\nAnd for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`.\r\n\r\nBecause we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset."
] | 1,606,708,246,000 | 1,608,786,709,000 | null | NONE | null | Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk whenever a new hash value appears. However, some transformations are idempotent or commute with each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example with `base64.urlsafe_b64encode`. In this way, before saving a new copy, we can decode the transformation chain and normalize it, so that potential reuse is not missed. As the main targets of this project are really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some of these writes.
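A minimal sketch of the idea — the helper names are hypothetical and the chain representation is an assumption, not the library's actual fingerprinting code:
```python
import base64
import json

def encode_transform_chain(transforms):
    # Serialize the chain deterministically, then encode it reversibly,
    # in contrast to a one-way hash such as xxhash.
    payload = json.dumps(transforms, sort_keys=True).encode("utf-8")
    return base64.urlsafe_b64encode(payload).decode("ascii")

def decode_transform_chain(fingerprint):
    return json.loads(base64.urlsafe_b64decode(fingerprint.encode("ascii")))

chain = [{"op": "sort", "column": "id"}, {"op": "shuffle", "seed": 42}]
fingerprint = encode_transform_chain(chain)
assert decode_transform_chain(fingerprint) == chain  # reversible, so normalizable
```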
If you have interest in this, I'd love to help :). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/915/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/914/comments | https://api.github.com/repos/huggingface/datasets/issues/914/events | https://github.com/huggingface/datasets/pull/914 | 752,956,106 | MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3 | 914 | Add list_github_datasets api for retrieving dataset name list in github repo | {
"login": "zhuzilin",
"id": 10428324,
"node_id": "MDQ6VXNlcjEwNDI4MzI0",
"avatar_url": "https://avatars.githubusercontent.com/u/10428324?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhuzilin",
"html_url": "https://github.com/zhuzilin",
"followers_url": "https://api.github.com/users/zhuzilin/followers",
"following_url": "https://api.github.com/users/zhuzilin/following{/other_user}",
"gists_url": "https://api.github.com/users/zhuzilin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhuzilin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhuzilin/subscriptions",
"organizations_url": "https://api.github.com/users/zhuzilin/orgs",
"repos_url": "https://api.github.com/users/zhuzilin/repos",
"events_url": "https://api.github.com/users/zhuzilin/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhuzilin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?",
"> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip`",
"`GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?",
"> `GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?\r\n\r\nYes, much faster! Thank you!"
] | 1,606,668,135,000 | 1,606,893,676,000 | 1,606,893,676,000 | NONE | null | Thank you for your great effort on unifying data processing for NLP!
This PR adds a new API, `list_github_datasets`, to the `inspect` module. The reason for it is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to fetch a large JSON payload. However, this connection can be really slow (I was visiting from China), and from my own experience, `requests.get` often fails to download the whole JSON after a long wait and then raises an error in `r.json()`.
I also noticed that the current implementation first tries to download from GitHub, which is why I can still smoothly run `load_dataset('squad')` from the example.
Therefore, I think it would be better to have an API that lists the datasets available on GitHub; until there is a faster source for huggingface.co, it would also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example).
As for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure:
```json
[
  {
    "id": "aeslc",
    "folder": "datasets/aeslc",
    "dataset_infos": "datasets/aeslc/dataset_infos.json"
  },
  ...
  {
    "id": "json",
    "folder": "datasets/json"
  },
  ...
]
```
The script I used to get this file is:
```python
import json
import os

DATASETS_BASE_DIR = "/root/datasets"
DATASET_INFOS_JSON = "dataset_infos.json"

# Collect the name of every dataset folder under datasets/.
datasets = []
for item in os.listdir(os.path.join(DATASETS_BASE_DIR, "datasets")):
    if os.path.isdir(os.path.join(DATASETS_BASE_DIR, "datasets", item)):
        datasets.append(item)
datasets.sort()

# Record each dataset's folder, plus its dataset_infos.json path when it exists.
total_ds_info = []
for ds in datasets:
    ds_dir = os.path.join("datasets", ds)
    ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON)
    if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)):
        total_ds_info.append({"id": ds, "folder": ds_dir, "dataset_infos": ds_info_dir})
    else:
        total_ds_info.append({"id": ds, "folder": ds_dir})

with open(DATASET_INFOS_JSON, "w", encoding="utf-8") as f:
    json.dump(total_ds_info, f)
The new `dataset_infos.json` was saved as formatted JSON so that it is easy to add new datasets.
When calling `list_github_datasets`, the user gets the list of dataset names in this GitHub repo, and if `with_details` is set to `True`, they also get the URL of each dataset's info file, as in the sketch below.
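A brief usage sketch of the proposed API — the re-export at the package root and the exact return shapes are assumptions based on the description above:
```python
from datasets import list_github_datasets  # proposed in this PR

names = list_github_datasets()
# e.g. ["aeslc", ..., "json", ...]

details = list_github_datasets(with_details=True)
# each entry would also carry the path of its dataset_infos.json, when available
```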
Thank you for your time reviewing this PR :). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/914/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/914",
"html_url": "https://github.com/huggingface/datasets/pull/914",
"diff_url": "https://github.com/huggingface/datasets/pull/914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/914.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/913/comments | https://api.github.com/repos/huggingface/datasets/issues/913/events | https://github.com/huggingface/datasets/pull/913 | 752,892,020 | MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3 | 913 | My new dataset PEC | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
"followers_url": "https://api.github.com/users/zhongpeixiang/followers",
"following_url": "https://api.github.com/users/zhongpeixiang/following{/other_user}",
"gists_url": "https://api.github.com/users/zhongpeixiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zhongpeixiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhongpeixiang/subscriptions",
"organizations_url": "https://api.github.com/users/zhongpeixiang/orgs",
"repos_url": "https://api.github.com/users/zhongpeixiang/repos",
"events_url": "https://api.github.com/users/zhongpeixiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zhongpeixiang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"How to resolve these failed checks?",
"Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor example : `encoding=\"utf-8\"`\r\nTo fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\n",
"Could you also add a dataset card ? you can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThat would be awesome",
"> Thanks for adding this one :)\r\n> \r\n> To fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\n> To fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\n> For example : `encoding=\"utf-8\"`\r\n> To fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nThank you for the detailed suggestion.\r\n\r\nI have added dummy_data but it still failed the DistributedDatasetTest check. My dataset has a central file (containing a python dict) that needs to be accessed by each data example. Is it because the central file cannot be distributed (which would lead to a partial dictionary)?\r\n\r\nSpecifically, the central file contains a dictionary of speakers with their attributes. Each data example is also associated with a speaker. As of now, I keep the central file and data files separately. If I remove the central file by appending the speaker attributes to each data example, then there would be lots of redundancy because there are lots of duplicate speakers in the data files.",
"The `DistributedDatasetTest` fail and the changes of this PR are not related, there was just a bug in the CI. You can ignore it",
"> Really cool thanks !\r\n> \r\n> Could you make the dummy files smaller ? For example by reducing the size of persona.txt ?\r\n> I also left a comment about the files concatenation. It would be cool to replace that with simple iterations through the different files.\r\n> \r\n> Then once this is done, you can add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If some fields can't be filled, just leave `[N/A]`\r\n\r\nSmall change: if you don't have the information for a field, please leave `[More Information Needed]` rather than `[N/A]`\r\n\r\nThe full information can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)"
] | 1,606,648,237,000 | 1,606,819,313,000 | 1,606,819,313,000 | CONTRIBUTOR | null | This PR adds PEC, a new dataset published at EMNLP 2020. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/913/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/913",
"html_url": "https://github.com/huggingface/datasets/pull/913",
"diff_url": "https://github.com/huggingface/datasets/pull/913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/913.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/911/comments | https://api.github.com/repos/huggingface/datasets/issues/911/events | https://github.com/huggingface/datasets/issues/911 | 752,806,215 | MDU6SXNzdWU3NTI4MDYyMTU= | 911 | datasets module not found | {
"login": "sbassam",
"id": 15836274,
"node_id": "MDQ6VXNlcjE1ODM2Mjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/15836274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sbassam",
"html_url": "https://github.com/sbassam",
"followers_url": "https://api.github.com/users/sbassam/followers",
"following_url": "https://api.github.com/users/sbassam/following{/other_user}",
"gists_url": "https://api.github.com/users/sbassam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sbassam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sbassam/subscriptions",
"organizations_url": "https://api.github.com/users/sbassam/orgs",
"repos_url": "https://api.github.com/users/sbassam/repos",
"events_url": "https://api.github.com/users/sbassam/events{/privacy}",
"received_events_url": "https://api.github.com/users/sbassam/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"nvm, I'd made an assumption that the library gets installed with transformers. "
] | 1,606,613,055,000 | 1,606,660,389,000 | 1,606,660,389,000 | NONE | null | Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/911/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/911/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/910/comments | https://api.github.com/repos/huggingface/datasets/issues/910/events | https://github.com/huggingface/datasets/issues/910 | 752,772,723 | MDU6SXNzdWU3NTI3NzI3MjM= | 910 | Grindr meeting app web.Grindr | {
"login": "jackin34",
"id": 75184749,
"node_id": "MDQ6VXNlcjc1MTg0NzQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/75184749?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackin34",
"html_url": "https://github.com/jackin34",
"followers_url": "https://api.github.com/users/jackin34/followers",
"following_url": "https://api.github.com/users/jackin34/following{/other_user}",
"gists_url": "https://api.github.com/users/jackin34/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jackin34/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jackin34/subscriptions",
"organizations_url": "https://api.github.com/users/jackin34/orgs",
"repos_url": "https://api.github.com/users/jackin34/repos",
"events_url": "https://api.github.com/users/jackin34/events{/privacy}",
"received_events_url": "https://api.github.com/users/jackin34/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,599,383,000 | 1,606,644,711,000 | 1,606,644,711,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/910/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/909/comments | https://api.github.com/repos/huggingface/datasets/issues/909/events | https://github.com/huggingface/datasets/pull/909 | 752,508,299 | MDExOlB1bGxSZXF1ZXN0NTI4ODE1NDYz | 909 | Add FiNER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card\r\n",
"Thanks your suggestions! I've fixed them, and currently working on the dataset card!",
"@yjernite and @lhoestq I will add the dataset card a bit later in a separate PR if that's ok for you!",
"Yes I want to re-emphasize if it was not clear that dataset cards are optional for the sprint. \r\n\r\nOnly the tags are required for merging a datasets.\r\n\r\nPlease try to enforce this rule as well @lhoestq and @yjernite ",
"Yes @stefan-it if you could just add the tags (the yaml part at the top of the dataset card) that'd be perfect :) ",
"Oh, sorry, will add them now!\r\n",
"Initial README file is now added :) ",
"the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | 1,606,521,260,000 | 1,607,360,183,000 | 1,607,360,183,000 | CONTRIBUTOR | null | Hi,
this PR adds "A Finnish News Corpus for Named Entity Recognition" as the new `finer` dataset.
The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data).
Note: the authors provide two test sets. The additional test set, taken from Wikipedia, is exposed as the "test_wikipedia" split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/909/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/909",
"html_url": "https://github.com/huggingface/datasets/pull/909",
"diff_url": "https://github.com/huggingface/datasets/pull/909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/909.patch",
"merged_at": 1607360183000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/908/comments | https://api.github.com/repos/huggingface/datasets/issues/908/events | https://github.com/huggingface/datasets/pull/908 | 752,428,652 | MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz | 908 | Add dependency on black for tests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."
] | 1,606,504,368,000 | 1,606,513,613,000 | 1,606,513,612,000 | MEMBER | null | Add package 'black' as an installation requirement for tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/908/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/908",
"html_url": "https://github.com/huggingface/datasets/pull/908",
"diff_url": "https://github.com/huggingface/datasets/pull/908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/908.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/907/comments | https://api.github.com/repos/huggingface/datasets/issues/907/events | https://github.com/huggingface/datasets/pull/907 | 752,422,351 | MDExOlB1bGxSZXF1ZXN0NTI4NzQ4ODMx | 907 | Remove os.path.join from all URLs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,503,330,000 | 1,606,690,100,000 | 1,606,690,099,000 | MEMBER | null | Remove `os.path.join` from all URLs in dataset scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/907/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/907/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/907",
"html_url": "https://github.com/huggingface/datasets/pull/907",
"diff_url": "https://github.com/huggingface/datasets/pull/907.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/907.patch",
"merged_at": 1606690099000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/906/comments | https://api.github.com/repos/huggingface/datasets/issues/906/events | https://github.com/huggingface/datasets/pull/906 | 752,403,395 | MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0 | 906 | Fix url with backslash in windows for blimp and pg19 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,499,951,000 | 1,606,501,196,000 | 1,606,501,196,000 | MEMBER | null | Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls
cc @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/906/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/906",
"html_url": "https://github.com/huggingface/datasets/pull/906",
"diff_url": "https://github.com/huggingface/datasets/pull/906.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/906.patch",
"merged_at": 1606501195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/905/comments | https://api.github.com/repos/huggingface/datasets/issues/905/events | https://github.com/huggingface/datasets/pull/905 | 752,395,456 | MDExOlB1bGxSZXF1ZXN0NTI4NzI3OTEy | 905 | Disallow backslash in urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like the test doesn't detect all the problems fixed by #907 , I'll fix that",
"Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `os.path.join` on windows when the first path ends with a slash : \r\n\r\n```python\r\nimport os\r\nos.path.join(\"https://test.com/foo\", \"bar.txt\")\r\n# 'https://test.com/foo\\\\bar.txt'\r\nos.path.join(\"https://test.com/foo/\", \"bar.txt\")\r\n# 'https://test.com/foo/bar.txt'\r\n```\r\n\r\nHowever even though the urls are correct, this is definitely bad practice and we should never use `os.path.join` for urls"
] | 1,606,498,708,000 | 1,606,690,117,000 | 1,606,690,116,000 | MEMBER | null | Following #903, @albertvillanova noticed that there is sometimes bad usage of `os.path.join` in dataset scripts to create URLs. This should be avoided since it doesn't work on Windows.
I'm suggesting a test to make sure that none of the URLs in the dataset scripts contain backslashes; a sketch of the check follows.
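A minimal sketch of the kind of validation the callback performs — the function name is illustrative, not the actual test code:
```python
def check_url_has_no_backslash(url: str) -> None:
    # A backslash in a URL is the telltale sign of os.path.join on Windows.
    assert "\\" not in url, f"invalid URL (contains a backslash): {url}"

check_url_has_no_backslash("https://example.com/data/file.txt")  # passes
```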
The test works by adding a callback feature to the MockDownloadManager used to test the dataset scripts. In a download callback I just make sure that the URL is valid. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/905/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/905",
"html_url": "https://github.com/huggingface/datasets/pull/905",
"diff_url": "https://github.com/huggingface/datasets/pull/905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/905.patch",
"merged_at": 1606690116000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/904/comments | https://api.github.com/repos/huggingface/datasets/issues/904/events | https://github.com/huggingface/datasets/pull/904 | 752,372,743 | MDExOlB1bGxSZXF1ZXN0NTI4NzA5NTUx | 904 | Very detailed step-by-step on how to add a dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome! Thanks @lhoestq "
] | 1,606,495,521,000 | 1,606,730,187,000 | 1,606,730,186,000 | MEMBER | null | Add very detailed step-by-step instructions to add a new dataset to the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/904/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/904",
"html_url": "https://github.com/huggingface/datasets/pull/904",
"diff_url": "https://github.com/huggingface/datasets/pull/904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/904.patch",
"merged_at": 1606730186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/903/comments | https://api.github.com/repos/huggingface/datasets/issues/903/events | https://github.com/huggingface/datasets/pull/903 | 752,360,614 | MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3 | 903 | Fix URL with backslash in Windows | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I was indeed working on that... to make another commit on this feature branch...",
"But as you prefer... nevermind! :)",
"Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion",
"Indeed I was thinking of something similar: monckeypatching the HTTP request...",
"Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...",
"If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (/src/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.",
"Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354",
"I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now"
] | 1,606,494,384,000 | 1,606,500,286,000 | 1,606,500,286,000 | MEMBER | null | In Windows, `os.path.join` generates URLs containing backslashes when the first "path" does not end with a slash.
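A minimal reproduction of the behavior, using `ntpath` (the Windows flavor of `os.path`, importable on any platform):
```python
import ntpath  # behaves like os.path does on Windows

print(ntpath.join("https://example.com/data", "file.txt"))
# https://example.com/data\file.txt  <- backslash breaks the URL

print("/".join(["https://example.com/data", "file.txt"]))
# https://example.com/data/file.txt  <- portable alternative
```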
In general, `os.path.join` should be avoided when generating URLs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/903/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/903",
"html_url": "https://github.com/huggingface/datasets/pull/903",
"diff_url": "https://github.com/huggingface/datasets/pull/903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/903.patch",
"merged_at": 1606500286000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/902/comments | https://api.github.com/repos/huggingface/datasets/issues/902/events | https://github.com/huggingface/datasets/pull/902 | 752,345,739 | MDExOlB1bGxSZXF1ZXN0NTI4Njg3NTYw | 902 | Follow cache_dir parameter to gcs downloader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,492,926,000 | 1,606,690,134,000 | 1,606,690,133,000 | MEMBER | null | As noticed in #900, the `cache_dir` parameter was not passed through to the downloader in the case of an already-processed dataset hosted on our Google storage (one of them is Natural Questions).
Fix #900 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/902/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"merged_at": 1606690133000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/901/comments | https://api.github.com/repos/huggingface/datasets/issues/901/events | https://github.com/huggingface/datasets/pull/901 | 752,233,851 | MDExOlB1bGxSZXF1ZXN0NTI4NTk3NDU5 | 901 | Addition of Nl2Bash Dataset | {
"login": "reshinthadithyan",
"id": 36307201,
"node_id": "MDQ6VXNlcjM2MzA3MjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reshinthadithyan",
"html_url": "https://github.com/reshinthadithyan",
"followers_url": "https://api.github.com/users/reshinthadithyan/followers",
"following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}",
"gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions",
"organizations_url": "https://api.github.com/users/reshinthadithyan/orgs",
"repos_url": "https://api.github.com/users/reshinthadithyan/repos",
"events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/reshinthadithyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do? Thanks. ",
"@reshinthadithyan we should hold off on this for a couple of weeks till NeurIPS concludes. The [NLC2CMD](http://nlc2cmd.us-east.mybluemix.net/) data will be out then; which includes a cleaner version of this NL2Bash data. The older data is sort of obsolete now. ",
"Ah nvm you already commented 😆 "
] | 1,606,481,635,000 | 1,606,673,365,000 | 1,606,673,331,000 | CONTRIBUTOR | null | ## Overview
The NL2Bash data contains over 10,000 instances of Linux shell commands and their corresponding natural-language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities.
## Footnotes
This dataset marks the first machine-learning-on-source-code dataset in the datasets module. It'll be really useful, as a lot of research in this direction involves Transformer-based models.
Thanks.
### Reference Links
> Paper Link = https://arxiv.org/pdf/1802.08979.pdf
> Github Link = https://github.com/TellinaTool/nl2bash
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/901/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/901",
"html_url": "https://github.com/huggingface/datasets/pull/901",
"diff_url": "https://github.com/huggingface/datasets/pull/901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/901.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/900/comments | https://api.github.com/repos/huggingface/datasets/issues/900/events | https://github.com/huggingface/datasets/issues/900 | 752,214,066 | MDU6SXNzdWU3NTIyMTQwNjY= | 900 | datasets.load_dataset() custom chaching directory bug | {
"login": "SapirWeissbuch",
"id": 44585792,
"node_id": "MDQ6VXNlcjQ0NTg1Nzky",
"avatar_url": "https://avatars.githubusercontent.com/u/44585792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SapirWeissbuch",
"html_url": "https://github.com/SapirWeissbuch",
"followers_url": "https://api.github.com/users/SapirWeissbuch/followers",
"following_url": "https://api.github.com/users/SapirWeissbuch/following{/other_user}",
"gists_url": "https://api.github.com/users/SapirWeissbuch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SapirWeissbuch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SapirWeissbuch/subscriptions",
"organizations_url": "https://api.github.com/users/SapirWeissbuch/orgs",
"repos_url": "https://api.github.com/users/SapirWeissbuch/repos",
"events_url": "https://api.github.com/users/SapirWeissbuch/events{/privacy}",
"received_events_url": "https://api.github.com/users/SapirWeissbuch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting ! I'm looking into it."
] | 1,606,479,533,000 | 1,606,690,133,000 | 1,606,690,133,000 | NONE | null | Hello,
I'm having an issue with loading a dataset into a custom `cache_dir`: despite specifying the output dir, the data is still downloaded to `~/.cache`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
from pathlib import Path
validation_dataset = datasets.load_dataset("natural_questions", split="validation[:5%]", cache_dir=Path("./data"))
```
## The output:
* The dataset is downloaded to my home directory's `.cache`
* A new empty directory named `natural_questions` is created in the specified directory `./data`
* `tree data` in the shell outputs:
```
data
└── natural_questions
    └── default
        └── 0.0.2

3 directories, 0 files
```
The console output:
```
Downloading: 8.61kB [00:00, 5.11MB/s]
Downloading: 13.6kB [00:00, 7.89MB/s]
Using custom data configuration default
Downloading and preparing dataset natural_questions/default (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size, total: 134.92 GiB) to ./data/natural_questions/default/0.0.2/867dbbaf9137c1b8
3ecb19f5eb80559e1002ea26e702c6b919cfa81a17a8c531...
Downloading: 100%|██████████████████████████████████████████████████| 13.6k/13.6k [00:00<00:00, 1.51MB/s]
Downloading: 7%|███▎ | 6.70G/97.4G [03:46<1:37:05, 15.6MB/s]
```
## Expected behaviour:
The dataset "Natural Questions" should be downloaded to the directory "./data"
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/900/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/899/comments | https://api.github.com/repos/huggingface/datasets/issues/899/events | https://github.com/huggingface/datasets/pull/899 | 752,191,227 | MDExOlB1bGxSZXF1ZXN0NTI4NTYzNzYz | 899 | Allow arrow based builder in auto dummy data generation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,477,178,000 | 1,606,483,809,000 | 1,606,483,808,000 | MEMBER | null | Following #898 I added support for arrow based builder for the auto dummy data generator | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/899/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/899",
"html_url": "https://github.com/huggingface/datasets/pull/899",
"diff_url": "https://github.com/huggingface/datasets/pull/899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/899.patch",
"merged_at": 1606483808000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/898/comments | https://api.github.com/repos/huggingface/datasets/issues/898/events | https://github.com/huggingface/datasets/pull/898 | 752,148,284 | MDExOlB1bGxSZXF1ZXN0NTI4NTI4MDY1 | 898 | Adding SQA dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This dataset seems to have around 1000 configs. Therefore when creating the dummy data we end up with hundreds of MB of dummy data which we don't want to add in the repo.\r\nLet's make this PR on hold for now and find a solution after the sprint of next week",
"Closing in favor of #1566 "
] | 1,606,472,958,000 | 1,608,036,880,000 | 1,608,036,859,000 | MEMBER | null | As discussed in #880
It seems automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`; do you think you could take a look, @lhoestq?
"url": "https://api.github.com/repos/huggingface/datasets/issues/898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/898/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/898",
"html_url": "https://github.com/huggingface/datasets/pull/898",
"diff_url": "https://github.com/huggingface/datasets/pull/898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/898.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/897/comments | https://api.github.com/repos/huggingface/datasets/issues/897/events | https://github.com/huggingface/datasets/issues/897 | 752,100,256 | MDU6SXNzdWU3NTIxMDAyNTY= | 897 | Dataset viewer issues | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time",
"9",
"⠀⠀⠀ ⠀ ",
"⠀⠀⠀ ⠀ "
] | 1,606,468,474,000 | 1,635,671,521,000 | 1,635,671,521,000 | CONTRIBUTOR | null | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view: the dataset examples are shown as raw monospace text, without decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then a syntax highlighter is used, and the special characters are encoded correctly.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/897/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/896 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/896/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/896/comments | https://api.github.com/repos/huggingface/datasets/issues/896/events | https://github.com/huggingface/datasets/pull/896 | 751,834,265 | MDExOlB1bGxSZXF1ZXN0NTI4MjcyMjc0 | 896 | Add template and documentation for dataset card | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,426,225,000 | 1,606,525,815,000 | 1,606,525,815,000 | MEMBER | null | This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora
New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will use to index the datasets and to act as a data statement.
The template is designed to be pretty extensive. The idea is that the person who uploads the dataset should fill in all the basic information (at least the Dataset Description section) and whatever else they feel comfortable adding, leaving the `[More Information Needed]` annotation everywhere else as a placeholder.
We will then work with @mcmillanmajora to involve the data authors more directly in filling out the remaining information.
Direct links to:
- [Documentation](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README_guide.md)
- [Empty template](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/templates/README.md)
- [ELI5 example](https://github.com/yjernite/datasets/blob/add_dataset_card_doc/datasets/eli5/README.md) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/896/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/896",
"html_url": "https://github.com/huggingface/datasets/pull/896",
"diff_url": "https://github.com/huggingface/datasets/pull/896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/896.patch",
"merged_at": 1606525814000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/895/comments | https://api.github.com/repos/huggingface/datasets/issues/895/events | https://github.com/huggingface/datasets/pull/895 | 751,782,295 | MDExOlB1bGxSZXF1ZXN0NTI4MjMyMjU3 | 895 | Better messages regarding split naming | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,416,946,000 | 1,606,483,860,000 | 1,606,483,859,000 | MEMBER | null | I made explicit the error message when a bad split name is used.
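For illustration, here is a minimal sketch of the kind of check and message this implies — the helper name and exact pattern are hypothetical, not the library's actual implementation:
```python
import re

# Hypothetical sketch: only word characters and dots in split names, with an
# explicit error message. '-' stays reserved for the arrow file naming scheme
# '{dataset_name}-{dataset_split}.arrow' mentioned below.
_split_name_re = re.compile(r"^\w+(\.\w+)*$")

def check_split_name(name):
    if not _split_name_re.match(name):
        raise ValueError(f"Split name should match '{_split_name_re.pattern}' but got '{name}'")

check_split_name("train")  # passes silently
# check_split_name("my-split")  # would raise a ValueError with an explicit message
```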
Also, I wanted to allow the `-` symbol for split names, but this symbol is already used to name the arrow files `{dataset_name}-{dataset_split}.arrow`, so we should probably keep it this way, i.e. not allow the `-` symbol in split names. Moreover, in the future we might want to use `{dataset_name}-{dataset_split}-{shard_id}_of_{n_shards}.arrow` and reuse the `-` symbol. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/895/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/895",
"html_url": "https://github.com/huggingface/datasets/pull/895",
"diff_url": "https://github.com/huggingface/datasets/pull/895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/895.patch",
"merged_at": 1606483859000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/894/comments | https://api.github.com/repos/huggingface/datasets/issues/894/events | https://github.com/huggingface/datasets/pull/894 | 751,734,905 | MDExOlB1bGxSZXF1ZXN0NTI4MTkzNzQy | 894 | Allow several tags sets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)"
] | 1,606,410,253,000 | 1,620,239,057,000 | 1,606,508,149,000 | MEMBER | null | Hi !
Currently we have three dataset cards: snli, cnn_dailymail and allocine.
For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses` etc.
For certain datasets, like `glue` for example, several configurations exist: `sst2`, `mnli` etc. Therefore we should define one set of tags per configuration. However, the current format used for tags only supports one set of tags per dataset.
In this PR I propose a simple change in the yaml format used for tags to allow for several sets of tags.
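To make the idea concrete, here is a sketch (shown via `yaml.safe_load` so it is checkable) of moving from one flat tag set to one named set per configuration — the keys and nesting are illustrative, not necessarily the exact format this PR settles on:
```python
import yaml

# Current format: a single flat tag set, fine for single-config datasets.
single_config = yaml.safe_load("""
languages:
- en
task_ids:
- natural-language-inference
""")

# Proposed idea: one named tag set per configuration (e.g. glue's sst2, mnli).
multi_config = yaml.safe_load("""
sst2:
  languages:
  - en
  task_ids:
  - sentiment-classification
mnli:
  languages:
  - en
  task_ids:
  - natural-language-inference
""")

assert multi_config["mnli"]["task_ids"] == ["natural-language-inference"]
```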
Let me know what you think. In particular @julien-c, let me know if it's good for you, since it's going to be parsed by moon-landing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/894/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/894/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/894",
"html_url": "https://github.com/huggingface/datasets/pull/894",
"diff_url": "https://github.com/huggingface/datasets/pull/894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/894.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/893/comments | https://api.github.com/repos/huggingface/datasets/issues/893/events | https://github.com/huggingface/datasets/pull/893 | 751,703,696 | MDExOlB1bGxSZXF1ZXN0NTI4MTY4NDgx | 893 | add metrec: arabic poetry dataset | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq removed prints and added the dataset card. ",
"@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ",
"Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n- The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`",
"> Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n> \r\n> Couple of last comments:\r\n> \r\n> * this PR seems to modify some files that are unrelated to your dataset. Could you rebase from master? It should take care of that.\r\n> * The dataset card is a good start! Can you describe the task in a few words and add more information in the Data Structure part, including listing and describing the fields? Also, if you don't know how to fill out a paragraph, or if you have some information but think more would be beneficial, please leave `[More Information Needed]` instead of `[N/A]`\r\n\r\nI have no idea how some other files changed. I tried to rebase and push but this created some errors. I had to run the command \r\n`git push -u --force origin add-metrec-dataset` which might cause some problems. ",
"Feel free to create another branch/another PR without all the other changes",
"@yjernite can you explain which other files are changed because of the PR ? https://github.com/huggingface/datasets/pull/893/files only shows files related to the dataset. ",
"Right ! github is nice with us today :)",
"Looks like this one is ready to merge, thanks @zaidalyafeai !",
"@lhoestq thanks for the merge. I am not a GitHub geek. I already have another dataset to add. I'm not sure how to add another given my forked repo. Do I follow the same steps with a different checkout name ?",
"If you've followed the instructions in here : https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment\r\n\r\n(especially point 2. and the command `git remote add upstream ....`)\r\n\r\nThen you can try\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my-new-dataset-name>\r\n```"
] | 1,606,407,016,000 | 1,606,839,895,000 | 1,606,835,707,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/893/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/893",
"html_url": "https://github.com/huggingface/datasets/pull/893",
"diff_url": "https://github.com/huggingface/datasets/pull/893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/893.patch",
"merged_at": 1606835707000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/892/comments | https://api.github.com/repos/huggingface/datasets/issues/892/events | https://github.com/huggingface/datasets/pull/892 | 751,658,262 | MDExOlB1bGxSZXF1ZXN0NTI4MTMxNTE1 | 892 | Add a few datasets of reference in the documentation | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?",
"snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv",
"merging this one.\r\nIf you think of other datasets of reference to add we can still add them later"
] | 1,606,402,959,000 | 1,606,500,525,000 | 1,606,500,524,000 | MEMBER | null | I started making a small list of various datasets of reference in the documentation.
Since many datasets share a lot in common, I think it's good to have a list of dataset scripts to get some inspiration from.
Let me know what you think, and if you have ideas of other datasets that we could add to this list. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/892/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/892",
"html_url": "https://github.com/huggingface/datasets/pull/892",
"diff_url": "https://github.com/huggingface/datasets/pull/892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/892.patch",
"merged_at": 1606500524000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/891/comments | https://api.github.com/repos/huggingface/datasets/issues/891/events | https://github.com/huggingface/datasets/pull/891 | 751,576,869 | MDExOlB1bGxSZXF1ZXN0NTI4MDY1MTQ3 | 891 | gitignore .python-version | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,606,395,958,000 | 1,606,397,307,000 | 1,606,397,306,000 | MEMBER | null | ignore `.python-version` added by `pyenv` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/891/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/891",
"html_url": "https://github.com/huggingface/datasets/pull/891",
"diff_url": "https://github.com/huggingface/datasets/pull/891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/891.patch",
"merged_at": 1606397306000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/890/comments | https://api.github.com/repos/huggingface/datasets/issues/890/events | https://github.com/huggingface/datasets/pull/890 | 751,534,050 | MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3 | 890 | Add LER | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 257 files would be left unchanged.\r\nmake: *** [quality] Error 1\r\n",
"Awesome thanks :)\r\nTo automatically format the python files you can run `make style`",
"I did that now. But still getting the following error:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\nmake: *** [quality] Error 1\r\n\r\nHowever: When I look at the file I don't see any trailing whitespace",
"maybe a bug with flake8 ? could you try to update it ? which version do you have ?",
"This is my flake8 version: 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.8.5 on Darwin\r\n",
"Now I updated to: 3.8.4 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 3.8.5 on Darwin\r\n\r\nAnd now I even get additional errors:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/polyglot_ner/polyglot_ner.py:123:64: F541 f-string is missing placeholders\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\ndatasets/math_dataset/math_dataset.py:233:25: E741 ambiguous variable name 'l'\r\nmetrics/coval/coval.py:236:31: F541 f-string is missing placeholders\r\nmake: *** [quality] Error 1\r\n\r\nI do this on macOS Catalina 10.15.7 in case this matters",
"Code quality test now passes, thanks :) \r\n\r\nTo fix the other tests failing I think you can just rebase from master.\r\nAlso make sure that the dummy data test passes with\r\n```python\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_ler\r\n```",
"I will close this PR because abishek did the same better (https://github.com/huggingface/datasets/pull/944)",
"Sorry you had to close your PR ! It looks like this week's sprint doesn't always make it easy to see what's being added/what's already added. \r\nThank you for contributing to the library. You did a great job on adding LER so feel free to add other ones that you would like to see in the library, it will be a pleasure to review"
] | 1,606,391,903,000 | 1,606,829,615,000 | 1,606,829,176,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/890/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/890",
"html_url": "https://github.com/huggingface/datasets/pull/890",
"diff_url": "https://github.com/huggingface/datasets/pull/890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/890.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/889/comments | https://api.github.com/repos/huggingface/datasets/issues/889/events | https://github.com/huggingface/datasets/pull/889 | 751,115,691 | MDExOlB1bGxSZXF1ZXN0NTI3NjkwODE2 | 889 | Optional per-dataset default config name | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https://api.github.com/users/joeddav/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joeddav/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joeddav/subscriptions",
"organizations_url": "https://api.github.com/users/joeddav/orgs",
"repos_url": "https://api.github.com/users/joeddav/repos",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"received_events_url": "https://api.github.com/users/joeddav/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the default config.",
"Maybe let's add a test in the test_builder.py test script ?",
"@lhoestq Okay great, I added a test as well as two new inspect functions: `get_dataset_config_names` and `get_dataset_infos` (the latter is something I've been wanting anyway). As a quick hack, you can also just pass a random config name (e.g. an empty string) to `load_dataset` to get the config names in the error msg as before. Also added a couple paragraphs to the adding new datasets doc.\r\n\r\nI'll send a separate PR incorporating this in existing datasets so we can get this merged before our sprint on Monday.\r\n\r\nAny ideas on the failing tests? I'm having trouble making sense of it. **Edit**: nvm, it was master."
] | 1,606,338,150,000 | 1,606,757,253,000 | 1,606,757,247,000 | CONTRIBUTOR | null | This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to specify a default config name to use when it has more than one config but the user does not pass one. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following:
```python
ds = load_dataset("polyglot_ner")
```
which is equivalent to,
```python
ds = load_dataset("polyglot_ner", "combined")
```
In effect (for this particular dataset configuration), this means that if the user doesn't specify a language, they are given the combined dataset including all languages.
Since it doesn't always make sense to have a default config, this feature is opt-in. If `DEFAULT_CONFIG_NAME` is not defined and a user does not pass a config for a dataset with multiple configs available, a ValueError is raised like usual.
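For reference, opting in from a dataset script could look roughly like this — a sketch only: the config list is illustrative and the required builder methods are omitted:
```python
import datasets

class PolyglotNer(datasets.GeneratorBasedBuilder):
    # Illustrative configs; _info, _split_generators and _generate_examples
    # are omitted for brevity.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name="combined"),
        datasets.BuilderConfig(name="en"),
        datasets.BuilderConfig(name="fr"),
    ]
    # Opt-in: used when load_dataset("polyglot_ner") is called with no config.
    DEFAULT_CONFIG_NAME = "combined"
```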
Let me know what you think about this approach @lhoestq @thomwolf and I'll add some documentation and define a default for some of our existing datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/889/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/889",
"html_url": "https://github.com/huggingface/datasets/pull/889",
"diff_url": "https://github.com/huggingface/datasets/pull/889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/889.patch",
"merged_at": 1606757247000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/888/comments | https://api.github.com/repos/huggingface/datasets/issues/888/events | https://github.com/huggingface/datasets/issues/888 | 750,944,422 | MDU6SXNzdWU3NTA5NDQ0MjI= | 888 | Nested lists are zipped unexpectedly | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details",
"Thanks.\r\nThis is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :) \r\n"
] | 1,606,320,466,000 | 1,606,325,439,000 | 1,606,325,439,000 | CONTRIBUTOR | null | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
    "top": [{
        "middle": [
            {"bottom": 1},
            {"bottom": 2}
        ]
    }]
}
```
I then load my dataset:
```python
from datasets import load_dataset
data = load_dataset("my dataset")["train"]
```
and expect to be able to access `data[0]["top"][0]["middle"][0]`.
That is not the case. Here is `data[0]` as JSON:
```json
{"top": {"middle": [{"bottom": [1, 2]}]}}
```
Clearly different than the thing I inputted.
```json
{"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]}
```
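As the first comment explains, this follows the Tensorflow Datasets convention: a `Sequence` of a dict is stored as a dict of lists, which is why the nested values come back zipped. A small sketch of the resulting access pattern, using the JSON above as a plain dict:
```python
# What data[0] actually returns: Sequence-of-dict is stored as a dict of
# lists, so the two "bottom" values end up zipped into one list.
example = {"top": {"middle": [{"bottom": [1, 2]}]}}
assert example["top"]["middle"][0]["bottom"] == [1, 2]
```
| {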
"url": "https://api.github.com/repos/huggingface/datasets/issues/888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/888/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/887/comments | https://api.github.com/repos/huggingface/datasets/issues/887/events | https://github.com/huggingface/datasets/issues/887 | 750,868,831 | MDU6SXNzdWU3NTA4Njg4MzE= | 887 | pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/followers",
"following_url": "https://api.github.com/users/AmitMY/following{/other_user}",
"gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions",
"organizations_url": "https://api.github.com/users/AmitMY/orgs",
"repos_url": "https://api.github.com/users/AmitMY/repos",
"events_url": "https://api.github.com/users/AmitMY/events{/privacy}",
"received_events_url": "https://api.github.com/users/AmitMY/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes.\r\n\r\nFor now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset.\r\nWhat do you think ?",
"> Yes right now ArrayXD can only be used as a column feature type, not a subtype. \r\n\r\nMeaning it can't be nested under `Sequence`?\r\nIf so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.",
"Yea unfortunately..\r\nThat's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects.\r\nWe already have an ExtensionArray that allows us to use them as column types but not for subtypes.\r\nMaybe we can extend it, I haven't experimented with that yet",
"Cool\r\nSo please consider this issue as a feature request for:\r\n```\r\nArray3D(shape=(None, 137, 2), dtype=\"float32\")\r\n```\r\n\r\nits a way to represent videos, poses, and other cool sequences",
"@lhoestq well, so sequence of sequences doesn't work either...\r\n\r\n```\r\npyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648\r\n```\r\n\r\n\r\n",
"Working with Arrow can be quite fun sometimes.\r\nYou can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741).\r\n\r\nLet me know if it works.\r\nI haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.",
"The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus)\r\nLoading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough\r\n",
"Sorry it doesn't work. Will let you know once I fixed it",
"Hi @lhoestq , any update on dynamic sized arrays?\r\n(`Array3D(shape=(None, 137, 2), dtype=\"float32\")`)",
"Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.",
"Hi @lhoestq,\r\nAny chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays?\r\n\r\ne.g.:\r\n`datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype=\"float32\"))`\r\n`Array3D(shape=(None, 137, 2), dtype=\"float32\")`",
"Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.",
"@lhoestq, thanks for the update.\r\n\r\nI actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here?\r\nI think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version)\r\nBelow are my modifications of this class.\r\n\r\n```\r\nclass ArrayExtensionArray(pa.ExtensionArray):\r\n def __array__(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n return self.to_numpy(zero_copy_only=zero_copy_only)\r\n\r\n def __getitem__(self, i):\r\n return self.storage[i]\r\n\r\n def to_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n size = 1\r\n for i in range(self.type.ndims):\r\n size *= self.type.shape[i]\r\n storage = storage.flatten()\r\n numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only)\r\n numpy_arr = numpy_arr.reshape(len(self), *self.type.shape)\r\n return numpy_arr\r\n\r\n def to_list_of_numpy(self, zero_copy_only=True):\r\n storage: pa.ListArray = self.storage\r\n shape = self.type.shape\r\n arrays = []\r\n for dim in range(1, self.type.ndims):\r\n assert shape[dim] is not None, f\"Support only dynamic size on first dimension. Got: {shape}\"\r\n\r\n first_dim_offsets = np.array([off.as_py() for off in storage.offsets])\r\n for i in range(len(storage)):\r\n storage_el = storage[i:i+1]\r\n first_dim = first_dim_offsets[i+1] - first_dim_offsets[i]\r\n # flatten storage\r\n for dim in range(self.type.ndims):\r\n storage_el = storage_el.flatten()\r\n\r\n numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only)\r\n arrays.append(numpy_arr.reshape(first_dim, *shape[1:]))\r\n\r\n return arrays\r\n\r\n def to_pylist(self):\r\n zero_copy_only = _is_zero_copy_only(self.storage.type)\r\n if self.type.shape[0] is None:\r\n return self.to_list_of_numpy(zero_copy_only=zero_copy_only)\r\n else:\r\n return self.to_numpy(zero_copy_only=zero_copy_only).tolist()\r\n```\r\n\r\nI ran few tests and it works as expected. Let me know what you think.",
"Thanks for diving into this !\r\n\r\nIndeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint).\r\nYour code looks great :) I think it can even be extended to support several dynamic dimensions if we want to.\r\n\r\nFeel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases.\r\nIn particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet):\r\n\r\n```python\r\nfrom datasets import Dataset, Features, Array3D\r\n\r\n# this works\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(1, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix]]})\r\nprint(d.to_pandas())\r\n\r\n# this should work as well\r\nmatrix = [[1, 0], [0, 1]]\r\nfeatures = Features({\"a\": Array3D(dtype=\"int32\", shape=(None, 2, 2))})\r\nd = Dataset.from_dict({\"a\": [[matrix], [matrix] * 2]})\r\nprint(d.to_pandas())\r\n```\r\n\r\nI'll be happy to help you on this :)"
] | 1,606,314,741,000 | 1,631,207,020,000 | null | CONTRIBUTOR | null | I set up a new dataset with a sequence of arrays (really, I want an array of shape (None, 137, 2), where the first dimension is dynamic):
```python
def _info(self):
    return datasets.DatasetInfo(
        description=_DESCRIPTION,
        # This defines the different columns of the dataset and their types
        features=datasets.Features(
            {
                "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))
            }
        ),
        homepage=_HOMEPAGE,
        citation=_CITATION,
    )

def _generate_examples(self):
    """ Yields examples. """
    yield 1, {
        "pose": [np.zeros(shape=(137, 2), dtype=np.float32)]
    }
```
But this doesn't work:
> pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
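As suggested in the comments, a possible workaround until `ArrayXD` is supported as a subtype (or gains a dynamic first dimension) is to fall back on nested `Sequence` types. This is only a sketch — it trades the fixed-shape guarantee of `Array2D` for nested lists, and the thread notes that very large datasets may still hit Arrow's list capacity limits this way:
```python
import datasets

# Workaround sketch: represent the (None, 137, 2) array as nested sequences,
# accepting a dynamic first dimension at the cost of the fixed-shape type.
features = datasets.Features(
    {
        "pose": datasets.features.Sequence(
            datasets.features.Sequence(
                datasets.features.Sequence(datasets.Value("float32"), length=2),
                length=137,
            )
        )
    }
)
```
| {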
"url": "https://api.github.com/repos/huggingface/datasets/issues/887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/887/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/886/comments | https://api.github.com/repos/huggingface/datasets/issues/886/events | https://github.com/huggingface/datasets/pull/886 | 750,829,314 | MDExOlB1bGxSZXF1ZXN0NTI3NDU1MDU5 | 886 | Fix wikipedia custom config | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)"
] | 1,606,311,852,000 | 1,624,598,656,000 | 1,606,318,933,000 | MEMBER | null | It should be possible to use the wikipedia dataset with any `language` and `date`.
However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wikipedia", language="zh", date="20201120", beam_runner='DirectRunner')
```
cc @stvhuang @SamuelCahyawijaya
Fix #784 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/886/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/886/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/886",
"html_url": "https://github.com/huggingface/datasets/pull/886",
"diff_url": "https://github.com/huggingface/datasets/pull/886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/886.patch",
"merged_at": 1606318933000
} | true |