url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | {
"login": "danth",
"id": 28959268,
"node_id": "MDQ6VXNlcjI4OTU5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danth",
"html_url": "https://github.com/danth",
"followers_url": "https://api.github.com/users/danth/followers",
"following_url": "https://api.github.com/users/danth/following{/other_user}",
"gists_url": "https://api.github.com/users/danth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danth/subscriptions",
"organizations_url": "https://api.github.com/users/danth/orgs",
"repos_url": "https://api.github.com/users/danth/repos",
"events_url": "https://api.github.com/users/danth/events{/privacy}",
"received_events_url": "https://api.github.com/users/danth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for now?"
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/148/comments | https://api.github.com/repos/huggingface/datasets/issues/148/events | https://github.com/huggingface/datasets/issues/148 | 619,590,555 | MDU6SXNzdWU2MTk1OTA1NTU= | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Same error for dataset 'wiki40b'",
"Should be fixed on master :)"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | null | null | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
import nlp
dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-52471d2a0088> in <module>()
----> 1 dataset = nlp.load_dataset('wikipedia')
1 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
TypeError: _download_and_prepare() got an unexpected keyword argument 'verify_infos'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/148/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/148/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/147/comments | https://api.github.com/repos/huggingface/datasets/issues/147/events | https://github.com/huggingface/datasets/issues/147 | 619,581,907 | MDU6SXNzdWU2MTk1ODE5MDc= | 147 | Error with sklearn train_test_split | {
"login": "ClonedOne",
"id": 6853743,
"node_id": "MDQ6VXNlcjY4NTM3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClonedOne",
"html_url": "https://github.com/ClonedOne",
"followers_url": "https://api.github.com/users/ClonedOne/followers",
"following_url": "https://api.github.com/users/ClonedOne/following{/other_user}",
"gists_url": "https://api.github.com/users/ClonedOne/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ClonedOne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ClonedOne/subscriptions",
"organizations_url": "https://api.github.com/users/ClonedOne/orgs",
"repos_url": "https://api.github.com/users/ClonedOne/repos",
"events_url": "https://api.github.com/users/ClonedOne/events{/privacy}",
"received_events_url": "https://api.github.com/users/ClonedOne/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Indeed. Probably we will want to have a similar method directly in the library",
"Related: #166 "
] | 1,589 | 1,592 | 1,592 | NONE | null | null | null | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)
```
throws:
```
ValueError: Can only get row(s) (int or slice) or columns (string).
```
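For reference, one way around this — a minimal workaround sketch, assuming integer indexing on the dataset returns example dicts — is to split a list of indices with sklearn and then select the rows manually:
```python
import nlp
from sklearn.model_selection import train_test_split

# Workaround sketch: split over a list of indices instead of the dataset object.
data = nlp.load_dataset('imdb')
train_set = data['train']
indices = list(range(len(train_set)))
f_idx, s_idx = train_test_split(indices, test_size=0.5, random_state=42)
f_half = [train_set[i] for i in f_idx]  # each item is a dict of features
s_half = [train_set[i] for i in s_idx]
```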
It's not a big deal, since there are other ways to split the data, but it would be a cool thing to have. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/147/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/146/comments | https://api.github.com/repos/huggingface/datasets/issues/146/events | https://github.com/huggingface/datasets/pull/146 | 619,564,653 | MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx | 146 | Add BERTScore to metrics | {
"login": "felixgwu",
"id": 7753366,
"node_id": "MDQ6VXNlcjc3NTMzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7753366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixgwu",
"html_url": "https://github.com/felixgwu",
"followers_url": "https://api.github.com/users/felixgwu/followers",
"following_url": "https://api.github.com/users/felixgwu/following{/other_user}",
"gists_url": "https://api.github.com/users/felixgwu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixgwu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixgwu/subscriptions",
"organizations_url": "https://api.github.com/users/felixgwu/orgs",
"repos_url": "https://api.github.com/users/felixgwu/repos",
"events_url": "https://api.github.com/users/felixgwu/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixgwu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/146",
"html_url": "https://github.com/huggingface/datasets/pull/146",
"diff_url": "https://github.com/huggingface/datasets/pull/146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/146.patch",
"merged_at": 1589754129000
} | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```python
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [['this is an example.', 'this is one example.'], ['apple']]
results = bertscore.compute(predictions, references, lang='en')
print(results)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/146/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/146/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/145/comments | https://api.github.com/repos/huggingface/datasets/issues/145/events | https://github.com/huggingface/datasets/pull/145 | 619,480,549 | MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0 | 145 | [AWS Tests] Follow-up PR from #144 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/145",
"html_url": "https://github.com/huggingface/datasets/pull/145",
"diff_url": "https://github.com/huggingface/datasets/pull/145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/145.patch",
"merged_at": 1589637262000
} | I forgot to add this line in PR #145 . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/145/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/144/comments | https://api.github.com/repos/huggingface/datasets/issues/144/events | https://github.com/huggingface/datasets/pull/144 | 619,477,367 | MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1 | 144 | [AWS tests] AWS test should not run for canonical datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/144",
"html_url": "https://github.com/huggingface/datasets/pull/144",
"diff_url": "https://github.com/huggingface/datasets/pull/144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/144.patch",
"merged_at": 1589636673000
} | AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.
This PR changes the logic to the following:
1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical dataset, the PR includes his dataset in the tests.
2) All datasets that are only present on AWS, such as `webis/tl_dr` atm are tested only on AWS.
I think the testing structure might need a bigger refactoring and better documentation very soon.
Merging for now to unblock new PRs @thomwolf @mariamabarham . | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/144/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/143/comments | https://api.github.com/repos/huggingface/datasets/issues/143/events | https://github.com/huggingface/datasets/issues/143 | 619,457,641 | MDU6SXNzdWU2MTk0NTc2NDE= | 143 | ArrowTypeError in squad metrics | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/users/patil-suraj/followers",
"following_url": "https://api.github.com/users/patil-suraj/following{/other_user}",
"gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions",
"organizations_url": "https://api.github.com/users/patil-suraj/orgs",
"repos_url": "https://api.github.com/users/patil-suraj/repos",
"events_url": "https://api.github.com/users/patil-suraj/events{/privacy}",
"received_events_url": "https://api.github.com/users/patil-suraj/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | null | [] | null | [
"There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take first possible answer\r\n for v in squad_dset[\"validation\"]\r\n]\r\nsquad_metric.compute(predictions, squad_dset[\"validation\"])\r\n```\r\n\r\nand the expected format is \r\n```\r\nArgs:\r\n predictions: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair as given in the references (see below)\r\n - 'prediction_text': the text of the answer\r\n references: List of question-answers dictionaries with the following key-values:\r\n - 'id': id of the question-answer pair (see above),\r\n - 'answers': a Dict {'text': list of possible texts for the answer, as a list of strings}\r\n```"
] | 1,589 | 1,590 | 1,590 | MEMBER | null | null | null | `squad_metric.compute` is giving the following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is what my predictions and references look like:
```
predictions[0]
# {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
```
```
references[0]
# {'answers': [{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'},
{'text': 'Denver Broncos'}],
'id': '56be4db0acb8001400a502ec'}
```
These are structured as per the `squad_metric.compute` help string. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/143/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/143/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/142/comments | https://api.github.com/repos/huggingface/datasets/issues/142/events | https://github.com/huggingface/datasets/pull/142 | 619,450,068 | MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1 | 142 | [WMT] Add all wmt | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/142",
"html_url": "https://github.com/huggingface/datasets/pull/142",
"diff_url": "https://github.com/huggingface/datasets/pull/142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/142.patch",
"merged_at": 1589717900000
} | This PR adds all wmt datasets scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng.
The datasets are fully functional though for the "big" language pairs "de-en" and "fr-en".
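For illustration only, loading one of the working pairs might look like the sketch below; the `wmt14` script path and the `"de-en"` config name are assumptions based on the description above, not a confirmed API:
```python
import nlp

# Hypothetical usage sketch: adjust the script path and config name to the
# actual WMT script / language pair being tested.
wmt = nlp.load_dataset("./datasets/wmt14/wmt14.py", "de-en")
print(wmt["train"][0])
```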
Overall I think the scripts are very messy and might need a big refactoring at some point.
For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/142/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/141/comments | https://api.github.com/repos/huggingface/datasets/issues/141/events | https://github.com/huggingface/datasets/pull/141 | 619,447,090 | MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw | 141 | [Clean up] remove bogus folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Same for the dataset_infos.json at the project root no ?",
"Sorry guys, I haven't noticed. Thank you for mentioning it."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/141",
"html_url": "https://github.com/huggingface/datasets/pull/141",
"diff_url": "https://github.com/huggingface/datasets/pull/141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/141.patch",
"merged_at": 1589635465000
} | @mariamabarham - I think you accidentally placed it there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/141/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/140/comments | https://api.github.com/repos/huggingface/datasets/issues/140/events | https://github.com/huggingface/datasets/pull/140 | 619,443,613 | MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4 | 140 | [Tests] run local tests as default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You are right and I think those are usual best practice :) I'm 100% fine with this^^",
"Merging this for now to unblock other PRs."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/140",
"html_url": "https://github.com/huggingface/datasets/pull/140",
"diff_url": "https://github.com/huggingface/datasets/pull/140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/140.patch",
"merged_at": 1589635303000
} | This PR also enables local tests by default
I think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS on therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are not correct. This PR aims at fixing this.
## Suggestion on how to commit to the repo from now on:
Now since the repo is "online", I think we should adopt a couple of best practices:
1) - No direct committing to the repo anymore. Every change should be opened in a PR and be well documented so that we can find it later
2) - Every PR has to be reviewed by at least x people (I guess @thomwolf you should decide here) because we now have to be much more careful when doing changes to the API for backward compatibility, etc...
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/140/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/140/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/139/comments | https://api.github.com/repos/huggingface/datasets/issues/139/events | https://github.com/huggingface/datasets/pull/139 | 619,327,409 | MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy | 139 | Add GermEval 2014 NER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions",
"organizations_url": "https://api.github.com/users/stefan-it/orgs",
"repos_url": "https://api.github.com/users/stefan-it/repos",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"received_events_url": "https://api.github.com/users/stefan-it/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Had really fun playing around with this new library :heart: ",
"That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ",
"@patrickvonplaten Rebased it 😅\r\n\r\nHow can it test 🤔 I used:\r\n\r\n```bash\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_local_germeval_14\r\n# and\r\nRUN_SLOW=1 RUN_LOCAL=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_dataset_all_configs_local_germeval_14\r\n```\r\n\r\nand the tests still pass :)",
"Perfect, if these tests pass that's great - I'll merge the PR then :-) Was it very difficult to create the dummy data structure? "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/139",
"html_url": "https://github.com/huggingface/datasets/pull/139",
"diff_url": "https://github.com/huggingface/datasets/pull/139.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/139.patch",
"merged_at": 1589637382000
} | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000 sentences corresponding to over 590,000 tokens.
> - The NER annotation uses the NoSta-D guidelines, which extend the Tübingen Treebank guidelines, using four main NER categories with sub-structure, and annotating embeddings among NEs such as [ORG FC Kickers [LOC Darmstadt]].
Dataset will be downloaded from the [official GermEval 2014 website](https://sites.google.com/site/germeval2014ner/data).
## Dataset format
Here's an example of the dataset format from the original dataset:
```tsv
# http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]
1 Aufgrund O O
2 seiner O O
3 Initiative O O
4 fand O O
5 2001/2002 O O
6 in O O
7 Stuttgart B-LOC O
8 , O O
9 Braunschweig B-LOC O
10 und O O
11 Bonn B-LOC O
12 eine O O
13 große O O
14 und O O
15 publizistisch O O
16 vielbeachtete O O
17 Troia-Ausstellung B-LOCpart O
18 statt O O
19 , O O
20 „ O O
21 Troia B-OTH B-LOC
22 - I-OTH O
23 Traum I-OTH O
24 und I-OTH O
25 Wirklichkeit I-OTH O
26 “ O O
27 . O O
```
The sentence is encoded as one token per line (tab-separated columns).
The first column contains either a `#`, which signals the source the sentence is cited from and the date it was retrieved, or the token number within the sentence.
The second column contains the token.
Columns three and four contain the named entity (in IOB2 scheme).
Outer spans are encoded in the third column, embedded/nested spans in the fourth column.
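To make the format concrete, here is a small reading sketch (not the dataset script itself); it assumes sentences are separated by blank lines, as in other CoNLL-style files, and it skips the source/date comment lines:
```python
# Sketch of reading the GermEval 2014 TSV format described above.
def read_germeval(path):
    sentences, tokens, labels, nested = [], [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                 # blank line: end of a sentence (assumed)
                if tokens:
                    sentences.append({"tokens": tokens, "labels": labels, "nested-labels": nested})
                    tokens, labels, nested = [], [], []
            elif line.startswith("#"):   # source URL + retrieval date
                continue
            else:
                _, token, label, nested_label = line.split("\t")[:4]
                tokens.append(token)
                labels.append(label)
                nested.append(nested_label)
    if tokens:                           # flush the last sentence if there is no trailing blank line
        sentences.append({"tokens": tokens, "labels": labels, "nested-labels": nested})
    return sentences
```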
## Features
I decided to keep most information from the dataset. That means the so called "source" information (where the sentences come from + date information) is also returned for each sentence in the feature vector.
For each sentence in the dataset, one feature vector (`nlp.Features` definition) will be returned:
| Feature | Example | Description
| ---- | ---- | -----------------
| `id` | `0` | Number (id) of current sentence
| `source` | `http://de.wikipedia.org/wiki/Manfred_Korfmann [2009-10-17]` | URL and retrieval date as string
| `tokens` | `["Schwartau", "sagte", ":"]` | List of tokens (strings) for a sentence
| `labels` | `["B-PER", "O", "O"]` | List of labels (outer span)
| `nested-labels` | `["O", "O", "O"]` | List of labels for nested span
## Example
The following command downloads the dataset from the official GermEval 2014 page and pre-processed it:
```bash
python nlp-cli test datasets/germeval_14 --all_configs
```
It then outputs the number of sentences for the training, development and test sets. The training set consists of 24,000 sentences, the development set of 2,200 and the test set of 5,100 sentences.
Now it can be imported and used with `nlp`:
```python
import nlp
germeval = nlp.load_dataset("./datasets/germeval_14/germeval_14.py")
assert len(germeval["train"]) == 24000
# Show first sentence of training set:
germeval["train"][0]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/139/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/139/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/138/comments | https://api.github.com/repos/huggingface/datasets/issues/138/events | https://github.com/huggingface/datasets/issues/138 | 619,225,191 | MDU6SXNzdWU2MTkyMjUxOTE= | 138 | Consider renaming to nld | {
"login": "honnibal",
"id": 8059750,
"node_id": "MDQ6VXNlcjgwNTk3NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/honnibal",
"html_url": "https://github.com/honnibal",
"followers_url": "https://api.github.com/users/honnibal/followers",
"following_url": "https://api.github.com/users/honnibal/following{/other_user}",
"gists_url": "https://api.github.com/users/honnibal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/honnibal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/honnibal/subscriptions",
"organizations_url": "https://api.github.com/users/honnibal/orgs",
"repos_url": "https://api.github.com/users/honnibal/repos",
"events_url": "https://api.github.com/users/honnibal/events{/privacy}",
"received_events_url": "https://api.github.com/users/honnibal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n",
"Chiming in to second everything @honnibal said, and to add that I think the current name is going to impact the discoverability of this library. People who are looking for \"NLP Datasets\" through a search engine are going to see a library called `nlp` and think it's too broad. People who are looking to do NLP in python are going to search \"Python NLP\" and end up here, confused that this is a collection of datasets.\r\n\r\nThe names of the other huggingface libraries work because they're the only game in town: there are not very many robust, distinct libraries for `tokenizers` or `transformers` in python, for example. But there are several options for NLP in python, and adding this as a possible search result for \"python nlp\" when datasets are likely not what someone is searching for adds noise and frustrates potential users.",
"I'm also not sure whether the naming of `nlp` is the problem itself, as long as it comes with the appropriate identifier, so maybe something like `huggingface_nlp`? This is analogous to what @honnibal and spacy are doing for `spacy-transformers`. Of course, this is a \"step back\" from the recent changes/renaming of transformers, but may be some middle ground between a complete rebranding, and keeping it identifiable.",
"Interesting, thanks for sharing your thoughts.\r\n\r\nAs we’ll move toward a first non-beta release, we will pool the community of contributors/users of the library for their opinions on a good final name (like when we renamed the beautifully (?) named `pytorch-pretrained-bert`)\r\n\r\nIn the meantime, using `from nlp import load_dataset, load_metric` should work 😉",
"I feel like we are conflating two distinct subjects here:\r\n\r\n1. @honnibal's point is that using `nlp` as a package name might break existing code and bring developer usability issues in the future\r\n2. @pmbaumgartner's point is that the `nlp` package name is too broad and shouldn't be used by a package that exposes only datasets and metrics\r\n\r\n(let me know if I mischaracterize your point)\r\n\r\nI'll chime in to say that the first point is a bit silly IMO. As Python developers due to the limitations of the import system we already have to share:\r\n- a single flat namespace for packages\r\n- which also conflicts with local modules i.e. local files\r\n\r\nIf we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI also think all Python software developers/ML engineers/scientists are capable of at least a subset of:\r\n- importing only the methods that they need like @thomwolf suggested\r\n- aliasing their import\r\n- renaming a local variable",
"By the way, `nlp` will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nI see it as a laboratory for testing several long-term ideas about how we could do NLP in terms of research as well as open-source and community sharing, most of these ideas being too experimental/big to fit in `transformers`.\r\n\r\nSome of the directions we would like to explore are about sharing, traceability and more experimental models, as well as seeing a model as the community-based process of creating a composite entity from data, optimization, and code.\r\n\r\nWe'll see how these ideas end up being implemented and we'll better know how we should define the library when we start to dive into these topics. I'll try to get the `nlp` team to draft a roadmap on these topics at some point.",
"> If we add the constraint that this flat namespace also be shared with variable names this gets untractable pretty fast :)\r\n\r\nI'm sort of confused by your point here. The namespace *is* shared by variable names. You should not use local variables that are named the same as modules, because then you cannot use the module within the scope of your function.\r\n\r\nFor instance,\r\n\r\n```python\r\n\r\nimport nlp\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n```\r\n\r\nThis is a bug: you've just overwritten the module, so now you can't use it. Or instead:\r\n\r\n```python\r\n\r\nimport transformers\r\n\r\nnlp = transformers.pipeline(\"sentiment-analysis\")\r\n# (Later, e.g. in a notebook)\r\nimport nlp\r\n```\r\n\r\nThis is also a bug: you've overwritten your variable with an import.\r\n\r\nIf you have a module named `nlp`, you should avoid using `nlp` as a variable, or you'll have bugs in some contexts and inconsistencies in other contexts. You'll have situations where you need to import differently in one module vs another, or name variables differently in one context vs another, which is bad.\r\n\r\n> importing only the methods that they need like @thomwolf suggested\r\n\r\nOkay but the same logic applies to naming the module *literally anything else*. There's absolutely no point in having a module name that's 3 letters if you always plan to do `import from`! It would be entirely better to name it `nlp_datasets` if you don't want people to do `import nlp`.\r\n\r\nAnd finally:\r\n\r\n> By the way, nlp will very likely not be only about datasets, and not even just about datasets and metrics.\r\n\r\nSo...it isn't a datasets library? https://twitter.com/Thom_Wolf/status/1261282491622731781\r\n\r\nI'm confused 😕 ",
"Dropping by as I noticed that the library has been renamed `datasets` so I wonder if the conversation above is settled (`nlp` not used anymore) :) ",
"I guess indeed",
"I'd argue that `datasets` is worse than `nlp`. Datasets should be a user specific decision and not encapsulate all of python (`pip install datasets`). If this package contained every dataset in the world (NLP / vision / etc) then it would make sense =/",
"I can't speak for the HF team @jramapuram, but as member of the community it looks to me that HF wanted to avoid the past path of changing names as scope broadened over time:\r\n\r\nRemember\r\nhttps://github.com/huggingface/pytorch-openai-transformer-lm\r\nhttps://github.com/huggingface/pytorch-pretrained-BERT\r\nhttps://github.com/huggingface/pytorch-transformers\r\nand now\r\nhttps://github.com/huggingface/transformers\r\n\r\n;) \r\n\r\nJokes aside, seems that the library is growing in a multi-modal direction (https://github.com/huggingface/datasets/pull/363) so the current name is not that implausible. Possibly HF ambition is really to grow its community and bring here a large chunk of datasets of the world (including tabular / vision / audio?).",
"Yea I see your point. However, wouldn't scoping solve the entire problem? \r\n\r\n```python\r\nimport huggingface.datasets as D\r\nimport huggingface.transformers as T\r\n```\r\n\r\nCalling something `datasets` is akin to saying I'm going to name my package `python` --> `import python` ",
"Sorry to reply to an old thread, but the name issue really makes troubles recently in my project.\r\n\r\nI'd never known in advance there's a package called \"datasets\". My first thought is that such a general term may be safe to arbitrarily use. Avoiding such a common name because of its ambiguity is quite weird.\r\n\r\nAs we know in python it's not easy to differentiate system-wide and project-wide import like in C and C++.\r\n\r\nOn the contrary I fully understand the challenge to rename a popular library. So it seems to provide a \"huggingface\" wrapper library as suggested above by @jramapuram may be a happy medium for both developers and users.\r\n\r\nBest Regards."
] | 1,589 | 1,663 | 1,601 | NONE | null | null | null | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This means the package makes `nlp` a bad variable name everywhere in the codebase. I've always used `nlp` as the canonical variable name of spaCy's `Language` objects, and this is a convention that a lot of other code has followed (Stanza, flair, etc). And actually, your `transformers` library uses `nlp` as the name for its `Pipeline` instance in your readme.
If you stick with the `nlp` name for this package, if anyone uses it then they should rewrite all of that code. If `nlp` is a bad choice of variable anywhere, it's a bad choice of variable everywhere --- because you shouldn't have to notice whether some other function uses a module when you're naming variables within a function. You want to have one convention that you can stick to everywhere.
If people use your `nlp` package and continue to use the `nlp` variable name, they'll find themselves with confusing bugs. There will be many many bits of code cut-and-paste from tutorials that give confusing results when combined with the data loading from the `nlp` library. The problem will be especially bad for shadowed modules (people might reasonably have a module named `nlp.py` within their codebase) and notebooks, as people might run notebook cells for data loading out-of-order.
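To make the failure mode concrete, here is a small sketch of the shadowing bug described above (the `nlp` variable follows the convention used in the `transformers` README):
```python
import nlp            # this library
import transformers

nlp = transformers.pipeline("sentiment-analysis")  # rebinds the name to a pipeline object

# Any later use of the module through the same name now breaks, e.g.:
dataset = nlp.load_dataset("imdb")  # AttributeError: the pipeline has no `load_dataset`
```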
I don't think it's an exaggeration to say that if your library becomes popular, we'll all be answering issues around this about once a week for the next few years. That seems pretty unideal, so I do hope you'll reconsider.
I suggest `nld` as a better name. It more accurately represents what the package actually does. It's pretty unideal to have a package named `nlp` that doesn't do any processing, and contains data about natural language generation or other non-NLP tasks. The name is equally short, and is sort of a visual pun on `nlp`, since a d is a rotated p. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/138/reactions",
"total_count": 33,
"+1": 33,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/138/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/136/comments | https://api.github.com/repos/huggingface/datasets/issues/136/events | https://github.com/huggingface/datasets/pull/136 | 619,211,018 | MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4 | 136 | Update README.md | {
"login": "renaud",
"id": 75369,
"node_id": "MDQ6VXNlcjc1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/75369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renaud",
"html_url": "https://github.com/renaud",
"followers_url": "https://api.github.com/users/renaud/followers",
"following_url": "https://api.github.com/users/renaud/following{/other_user}",
"gists_url": "https://api.github.com/users/renaud/gists{/gist_id}",
"starred_url": "https://api.github.com/users/renaud/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/renaud/subscriptions",
"organizations_url": "https://api.github.com/users/renaud/orgs",
"repos_url": "https://api.github.com/users/renaud/repos",
"events_url": "https://api.github.com/users/renaud/events{/privacy}",
"received_events_url": "https://api.github.com/users/renaud/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks, this was fixed with #135 :)"
] | 1,589 | 1,589 | 1,589 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/136",
"html_url": "https://github.com/huggingface/datasets/pull/136",
"diff_url": "https://github.com/huggingface/datasets/pull/136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/136.patch",
"merged_at": null
} | small typo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/136/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/136/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/135/comments | https://api.github.com/repos/huggingface/datasets/issues/135/events | https://github.com/huggingface/datasets/pull/135 | 619,206,708 | MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw | 135 | Fix print statement in READ.md | {
"login": "codehunk628",
"id": 51091425,
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codehunk628",
"html_url": "https://github.com/codehunk628",
"followers_url": "https://api.github.com/users/codehunk628/followers",
"following_url": "https://api.github.com/users/codehunk628/following{/other_user}",
"gists_url": "https://api.github.com/users/codehunk628/gists{/gist_id}",
"starred_url": "https://api.github.com/users/codehunk628/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codehunk628/subscriptions",
"organizations_url": "https://api.github.com/users/codehunk628/orgs",
"repos_url": "https://api.github.com/users/codehunk628/repos",
"events_url": "https://api.github.com/users/codehunk628/events{/privacy}",
"received_events_url": "https://api.github.com/users/codehunk628/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Indeed, thanks!"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/135",
"html_url": "https://github.com/huggingface/datasets/pull/135",
"diff_url": "https://github.com/huggingface/datasets/pull/135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/135.patch",
"merged_at": 1589717645000
} | print statement was throwing generator object instead of printing names of available datasets/metrics | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/135/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/135/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/134/comments | https://api.github.com/repos/huggingface/datasets/issues/134/events | https://github.com/huggingface/datasets/pull/134 | 619,112,641 | MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz | 134 | Update README.md | {
"login": "pranv",
"id": 8753078,
"node_id": "MDQ6VXNlcjg3NTMwNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8753078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranv",
"html_url": "https://github.com/pranv",
"followers_url": "https://api.github.com/users/pranv/followers",
"following_url": "https://api.github.com/users/pranv/following{/other_user}",
"gists_url": "https://api.github.com/users/pranv/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pranv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pranv/subscriptions",
"organizations_url": "https://api.github.com/users/pranv/orgs",
"repos_url": "https://api.github.com/users/pranv/repos",
"events_url": "https://api.github.com/users/pranv/events{/privacy}",
"received_events_url": "https://api.github.com/users/pranv/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"the readme got removed, closing this one"
] | 1,589 | 1,590 | 1,590 | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/134",
"html_url": "https://github.com/huggingface/datasets/pull/134",
"diff_url": "https://github.com/huggingface/datasets/pull/134.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/134.patch",
"merged_at": null
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/134/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/134/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/133/comments | https://api.github.com/repos/huggingface/datasets/issues/133/events | https://github.com/huggingface/datasets/issues/133 | 619,094,954 | MDU6SXNzdWU2MTkwOTQ5NTQ= | 133 | [Question] Using/adding a local dataset | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\r\nDoes it make sense?",
"Could you give a more concrete example, please? \r\n\r\nI looked up wikitext dataset script from the repo. Should I just overwrite the `data_file` on line 98 to point to the local dataset directory? Would it work for different configurations of wikitext (wikitext2, wikitext103 etc.)?\r\n\r\nOr maybe we can use DownloadManager to specify local dataset location? In that case, where do we use DownloadManager instance?\r\n\r\nThanks",
"Hi @MaveriQ , although what I am doing is to commit a new dataset, but I think looking at imdb script might help.\r\nYou may want to use `dl_manager.download_custom`, give it a url(arbitrary string), a custom_download(arbitrary function) and return a path, and finally use _get sample to fetch a sample.",
"The download manager supports local directories. You can specify a local directory instead of a url and it should work.",
"Closing this one.\r\nFeel free to re-open if you have other questions :)"
] | 1,589 | 1,595 | 1,595 | NONE | null | null | null | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
A notebook/example script demonstrating this would be very helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/133/reactions",
"total_count": 6,
"+1": 6,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/133/timeline | null | completed | false |
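A minimal sketch of the local-dataset workflow described in the comments above; the script path is a placeholder, and the download-manager call is illustrative only.
```python
import nlp

# Load a dataset from a local dataset script instead of a hub identifier,
# as suggested in the comments (the path below is a placeholder).
dataset = nlp.load_dataset("path/to/my_local_dataset_script.py")

# Inside a dataset script, the download manager also accepts local paths,
# so split generators can point at files that already live on disk, e.g.:
#   data_dir = dl_manager.download_and_extract("/path/to/local/data")
```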
https://api.github.com/repos/huggingface/datasets/issues/132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/132/comments | https://api.github.com/repos/huggingface/datasets/issues/132/events | https://github.com/huggingface/datasets/issues/132 | 619,077,851 | MDU6SXNzdWU2MTkwNzc4NTE= | 132 | [Feature Request] Add the OpenWebText dataset | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8",
"Closing since it's been added in #660 "
] | 1,589 | 1,602 | 1,602 | MEMBER | null | null | null | The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra).
More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/132/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/132/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/131/comments | https://api.github.com/repos/huggingface/datasets/issues/131/events | https://github.com/huggingface/datasets/issues/131 | 619,073,731 | MDU6SXNzdWU2MTkwNzM3MzE= | 131 | [Feature request] Add Toronto BookCorpus dataset | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com/users/jarednielsen/followers",
"following_url": "https://api.github.com/users/jarednielsen/following{/other_user}",
"gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions",
"organizations_url": "https://api.github.com/users/jarednielsen/orgs",
"repos_url": "https://api.github.com/users/jarednielsen/repos",
"events_url": "https://api.github.com/users/jarednielsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/jarednielsen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it involves copyright problem...",
"Hi, @lhoestq, just a reminder that this is solved by #248 .😉 "
] | 1,589 | 1,593 | 1,593 | CONTRIBUTOR | null | null | null | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/131/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/131/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/130/comments | https://api.github.com/repos/huggingface/datasets/issues/130/events | https://github.com/huggingface/datasets/issues/130 | 619,035,440 | MDU6SXNzdWU2MTkwMzU0NDA= | 130 | Loading GLUE dataset loads CoLA by default | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/followers",
"following_url": "https://api.github.com/users/zphang/following{/other_user}",
"gists_url": "https://api.github.com/users/zphang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zphang/subscriptions",
"organizations_url": "https://api.github.com/users/zphang/orgs",
"repos_url": "https://api.github.com/users/zphang/repos",
"events_url": "https://api.github.com/users/zphang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zphang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info under `Glue.BUILDER_CONFIGS`",
"Yes so the first config is loaded by default when no `name` is supplied but for GLUE this should probably throw an error indeed.\r\n\r\nWe can probably just add an `__init__` at the top of the `class Glue(nlp.GeneratorBasedBuilder)` in the `glue.py` script which does this check:\r\n```\r\nclass Glue(nlp.GeneratorBasedBuilder):\r\n def __init__(self, *args, **kwargs):\r\n assert 'name' in kwargs and kwargs[name] is not None, \"Glue has to be called with a configuration name\"\r\n super(Glue, self).__init__(*args, **kwargs)\r\n```",
"An error is raised if the sub-dataset is not specified :)\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']\r\nExample of usage:\r\n\t`load_dataset('glue', 'cola')`\r\n```"
] | 1,589 | 1,590 | 1,590 | NONE | null | null | null | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA be default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the same apply for loading datasets? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/130/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/130/timeline | null | completed | false |
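A short sketch of the behaviour settled on in the comments above: GLUE requires an explicit configuration name, and the available sub-tasks can be inspected via the builder's `BUILDER_CONFIGS`.
```python
import nlp

# nlp.load_dataset("glue") raises a ValueError listing the available configs,
# so the sub-task has to be named explicitly:
cola = nlp.load_dataset("glue", "cola")
mrpc = nlp.load_dataset("glue", "mrpc")

# The full list of sub-tasks lives on the builder class in the glue.py script
# (Glue.BUILDER_CONFIGS), as noted in the follow-up comment.
```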
https://api.github.com/repos/huggingface/datasets/issues/129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/129/comments | https://api.github.com/repos/huggingface/datasets/issues/129/events | https://github.com/huggingface/datasets/issues/129 | 618,997,725 | MDU6SXNzdWU2MTg5OTc3MjU= | 129 | [Feature request] Add Google Natural Question dataset | {
"login": "elyase",
"id": 1175888,
"node_id": "MDQ6VXNlcjExNzU4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elyase",
"html_url": "https://github.com/elyase",
"followers_url": "https://api.github.com/users/elyase/followers",
"following_url": "https://api.github.com/users/elyase/following{/other_user}",
"gists_url": "https://api.github.com/users/elyase/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elyase/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elyase/subscriptions",
"organizations_url": "https://api.github.com/users/elyase/orgs",
"repos_url": "https://api.github.com/users/elyase/repos",
"events_url": "https://api.github.com/users/elyase/events{/privacy}",
"received_events_url": "https://api.github.com/users/elyase/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Indeed, I think this one is almost ready cc @lhoestq ",
"I'm doing the latest adjustments to make the processing of the dataset run on Dataflow",
"Is there an update to this? It will be very beneficial for the QA community!",
"Still work in progress :)\r\nThe idea is to have the dataset already processed somewhere so that the user only have to download the processed files. I'm also doing it for wikipedia.",
"Super appreciate your hard work !!\r\nI'll cross my fingers and hope easily loadable wikipedia dataset will come soon. ",
"Quick update on NQ: due to some limitations I met using apache beam + parquet I was not able to use the dataset in a nested parquet structure in python to convert it to our Apache Arrow format yet.\r\nHowever we had planned to change this conversion step anyways so we'll make just sure that it enables to process and convert the NQ dataset to arrow.",
"NQ was added in #427 🎉"
] | 1,589 | 1,595 | 1,595 | NONE | null | null | null | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/129/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/129/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/128/comments | https://api.github.com/repos/huggingface/datasets/issues/128/events | https://github.com/huggingface/datasets/issues/128 | 618,951,117 | MDU6SXNzdWU2MTg5NTExMTc= | 128 | Some error inside nlp.load_dataset() | {
"login": "polkaYK",
"id": 18486287,
"node_id": "MDQ6VXNlcjE4NDg2Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18486287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polkaYK",
"html_url": "https://github.com/polkaYK",
"followers_url": "https://api.github.com/users/polkaYK/followers",
"following_url": "https://api.github.com/users/polkaYK/following{/other_user}",
"gists_url": "https://api.github.com/users/polkaYK/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polkaYK/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polkaYK/subscriptions",
"organizations_url": "https://api.github.com/users/polkaYK/orgs",
"repos_url": "https://api.github.com/users/polkaYK/repos",
"events_url": "https://api.github.com/users/polkaYK/events{/privacy}",
"received_events_url": "https://api.github.com/users/polkaYK/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.",
"Thanks for reply, worked fine!\r\n"
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-8-d848d3a99b8c> in <module>()
1 # Downloading and loading a dataset
2
----> 3 dataset = nlp.load_dataset('squad', split='validation[:10%]')
8 frames
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
414 try:
415 # Prepare split will record examples associated to the split
--> 416 self._prepare_split(split_generator, **prepare_split_kwargs)
417 except OSError:
418 raise OSError("Cannot find data file. " + (self.MANUAL_DOWNLOAD_INSTRUCTIONS or ""))
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _prepare_split(self, split_generator)
585 fname = "{}-{}.arrow".format(self.name, split_generator.name)
586 fpath = os.path.join(self._cache_dir, fname)
--> 587 examples_type = self.info.features.type
588 writer = ArrowWriter(data_type=examples_type, path=fpath, writer_batch_size=self._writer_batch_size)
589
/usr/local/lib/python3.6/dist-packages/nlp/features.py in type(self)
460 @property
461 def type(self):
--> 462 return get_nested_type(self)
463
464 @classmethod
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <dictcomp>(.0)
370 # Nested structures: we allow dict, list/tuples, sequences
371 if isinstance(schema, dict):
--> 372 return pa.struct({key: get_nested_type(value) for key, value in schema.items()})
373 elif isinstance(schema, (list, tuple)):
374 assert len(schema) == 1, "We defining list feature, you should just provide one example of the inner type"
/usr/local/lib/python3.6/dist-packages/nlp/features.py in get_nested_type(schema)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
/usr/local/lib/python3.6/dist-packages/nlp/features.py in <genexpr>(.0)
379 # We allow to reverse list of dict => dict of list for compatiblity with tfds
380 if isinstance(inner_type, pa.StructType):
--> 381 return pa.struct(dict((f.name, pa.list_(f.type, schema.length)) for f in inner_type))
382 return pa.list_(inner_type, schema.length)
383
TypeError: list_() takes exactly one argument (2 given)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/128/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/128/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/127/comments | https://api.github.com/repos/huggingface/datasets/issues/127/events | https://github.com/huggingface/datasets/pull/127 | 618,909,042 | MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy | 127 | Update Overview.ipynb | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/127",
"html_url": "https://github.com/huggingface/datasets/pull/127",
"diff_url": "https://github.com/huggingface/datasets/pull/127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/127.patch",
"merged_at": 1589543245000
} | update notebook | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/127/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/127/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/126/comments | https://api.github.com/repos/huggingface/datasets/issues/126/events | https://github.com/huggingface/datasets/pull/126 | 618,897,499 | MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5 | 126 | remove webis | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/126",
"html_url": "https://github.com/huggingface/datasets/pull/126",
"diff_url": "https://github.com/huggingface/datasets/pull/126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/126.patch",
"merged_at": 1589542226000
} | Remove webis from dataset folder.
Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/126/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/126/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/125/comments | https://api.github.com/repos/huggingface/datasets/issues/125/events | https://github.com/huggingface/datasets/pull/125 | 618,869,048 | MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0 | 125 | [Newsroom] add newsroom | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/125",
"html_url": "https://github.com/huggingface/datasets/pull/125",
"diff_url": "https://github.com/huggingface/datasets/pull/125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/125.patch",
"merged_at": 1589539022000
} | I checked it with the data link of the mail you forwarded @thomwolf => works well! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/125/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/125/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/124/comments | https://api.github.com/repos/huggingface/datasets/issues/124/events | https://github.com/huggingface/datasets/pull/124 | 618,864,284 | MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx | 124 | Xsum, require manual download of some files | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/124",
"html_url": "https://github.com/huggingface/datasets/pull/124",
"diff_url": "https://github.com/huggingface/datasets/pull/124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/124.patch",
"merged_at": 1589540686000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/124/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/124/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/123/comments | https://api.github.com/repos/huggingface/datasets/issues/123/events | https://github.com/huggingface/datasets/pull/123 | 618,820,140 | MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5 | 123 | [Tests] Local => aws | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n\r\nNote: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.",
"> For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> \r\n> Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n\r\nDoes it have to download the whole data to check if the checksums are correct? I guess so no? ",
"> > For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are correct.\r\n> > Note: the `test` command is supposed to test the script, that's why it runs the script even if the cached files already exist. Let me know if it's good to you.\r\n> \r\n> Does it have to download the whole data to check if the checksums are correct? I guess so no?\r\n\r\nYes it has to download them all (unless they were already downloaded in which case it just uses the cached downloaded files)."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/123",
"html_url": "https://github.com/huggingface/datasets/pull/123",
"diff_url": "https://github.com/huggingface/datasets/pull/123.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/123.patch",
"merged_at": 1589537006000
} | ## Change default Test from local => aws
As a default we set `aws=True`, `Local=False`, `slow=False`
### 1. RUN_AWS=1 (default)
This runs 4 tests per dataset script.
a) Does the dataset script have a valid etag / Can it be reached on AWS?
b) Can we load its `builder_class`?
c) Can we load **all** dataset configs?
d) _Most importantly_: Can we load the dataset?
Important - we currently only test the first config of each dataset to reduce test time. Total test time is around 1min20s.
### 2. RUN_LOCAL=1 RUN_AWS=0
***This should be done when debugging dataset scripts of the ./datasets folder***
This only runs 1 test per dataset test, which is equivalent to aws d) - Can we load the dataset from the local `datasets` directory?
### 3. RUN_SLOW=1
We should set up to run these tests maybe 1 time per week ? @thomwolf
The `slow` tests include two more important tests.
e) Can we load the dataset with all possible configs? This test will probably fail at the moment because a lot of dummy data is missing. We should add the dummy data step by step to be sure that all configs work.
f) Test that the actual dataset can be loaded. This will take quite some time to run, but is important to make sure that the "real" data can be loaded. It will also test whether the dataset script has the correct checksums file which is currently not tested with `aws=True`. @lhoestq - is there an easy way to check cheaply whether the `dataset_info.json` is correct for each dataset script? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/123/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/123/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/122/comments | https://api.github.com/repos/huggingface/datasets/issues/122/events | https://github.com/huggingface/datasets/pull/122 | 618,813,182 | MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3 | 122 | Final cleanup of readme and metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,630 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/122",
"html_url": "https://github.com/huggingface/datasets/pull/122",
"diff_url": "https://github.com/huggingface/datasets/pull/122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/122.patch",
"merged_at": 1589533342000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/122/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/122/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/121/comments | https://api.github.com/repos/huggingface/datasets/issues/121/events | https://github.com/huggingface/datasets/pull/121 | 618,790,040 | MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx | 121 | make style | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/121",
"html_url": "https://github.com/huggingface/datasets/pull/121",
"diff_url": "https://github.com/huggingface/datasets/pull/121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/121.patch",
"merged_at": 1589531138000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/121/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/121/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/120/comments | https://api.github.com/repos/huggingface/datasets/issues/120/events | https://github.com/huggingface/datasets/issues/120 | 618,737,783 | MDU6SXNzdWU2MTg3Mzc3ODM= | 120 | 🐛 `map` not working | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I didn't assign the output 🤦♂️\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```"
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | I'm trying to run a basic example (mapping function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
sample['title'] = "test prefix @@@ " + sample["title"]
return sample
print(dataset[0]['title'])
dataset.map(test)
print(dataset[0]['title'])
```
Output :
> Super_Bowl_50
Super_Bowl_50
Expected output :
> Super_Bowl_50
test prefix @@@ Super_Bowl_50 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/120/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/120/timeline | null | completed | false |
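A corrected version of the snippet from the issue body, following the author's own follow-up comment: `map` returns a new dataset rather than modifying it in place.
```python
import nlp

dataset = nlp.load_dataset("squad", split="validation[:10%]")

def test(sample):
    sample["title"] = "test prefix @@@ " + sample["title"]
    return sample

print(dataset[0]["title"])   # Super_Bowl_50

# map() is not in-place: the returned dataset has to be reassigned.
dataset = dataset.map(test)

print(dataset[0]["title"])   # test prefix @@@ Super_Bowl_50
```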
https://api.github.com/repos/huggingface/datasets/issues/119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/119/comments | https://api.github.com/repos/huggingface/datasets/issues/119/events | https://github.com/huggingface/datasets/issues/119 | 618,652,145 | MDU6SXNzdWU2MTg2NTIxNDU= | 119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: None\r\nAuthor-email: None\r\nLicense: Apache License, Version 2.0\r\nLocation: /usr/local/lib/python3.6/dist-packages\r\nRequires: numpy\r\nRequired-by: nlp, feather-format\r\n> \r\n> version = 0.14.1",
"Ok I just had to restart the runtime after installing `nlp`. After restarting, the version of `pyarrow` is fine."
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I meet this error :
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/119/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/119/timeline | null | completed | false |
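A quick check based on the comments above, assuming the same Colab setup: after `pip install nlp` the runtime has to be restarted so that the imported `pyarrow` matches the freshly installed one.
```python
# Run after `pip install nlp` and a runtime restart on Colab.
import pyarrow

# Colab's built-in version (0.14.x) is too old; after restarting, this should
# report the newly installed version (0.17.0 in the comment above).
print(pyarrow.__version__)
```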
https://api.github.com/repos/huggingface/datasets/issues/118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/118/comments | https://api.github.com/repos/huggingface/datasets/issues/118/events | https://github.com/huggingface/datasets/issues/118 | 618,643,088 | MDU6SXNzdWU2MTg2NDMwODg= | 118 | ❓ How to apply a map to all subsets ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That's the way!"
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | I'm working with CNN/DM dataset, where I have 3 subsets : `train`, `test`, `validation`.
Should I apply my map function on the subsets one by one ?
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
for corpus in ['train', 'test', 'validation']:
cnn_dm[corpus] = cnn_dm[corpus].map(my_func)
```
Or is there a better way to do this ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/118/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/118/timeline | null | completed | false |
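The per-split loop in the issue body was confirmed as the intended pattern; the equivalent comprehension below assumes the splits are returned as a plain dict of `Dataset` objects, as the question implies.
```python
import nlp

def my_func(example):
    # placeholder for the per-example transformation from the question
    return example

cnn_dm = nlp.load_dataset("cnn_dailymail")

# Equivalent to applying map() to each subset one by one:
cnn_dm = {split: ds.map(my_func) for split, ds in cnn_dm.items()}
```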
https://api.github.com/repos/huggingface/datasets/issues/117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/117/comments | https://api.github.com/repos/huggingface/datasets/issues/117/events | https://github.com/huggingface/datasets/issues/117 | 618,632,573 | MDU6SXNzdWU2MTg2MzI1NzM= | 117 | ❓ How to remove specific rows of a dataset ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, you can't do that at the moment.",
"Can you do it by now? Coz it would be awfully helpful!",
"you can convert dataset object to pandas and remove a feature and convert back to dataset .",
"That's what I ended up doing too. but it feels like a workaround to a feature that should be added to the datasets class."
] | 1,589 | 1,657 | 1,589 | NONE | null | null | null | I saw on the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column :
```python
dataset.drop('id')
```
But I didn't find how to remove a specific row.
**For example, how can I remove all samples with `id` < 10?** | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/117/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/117/timeline | null | completed | false |
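A hedged sketch of the pandas round-trip suggested in the later comments; the slicing behaviour and the `from_pandas` helper are assumptions about the installed version, and the `id < 10` condition simply follows the question's example.
```python
import pandas as pd
import nlp

# `dataset` stands for any loaded nlp.Dataset whose examples carry an integer
# `id` field, matching the question's example.
df = pd.DataFrame(dataset[:])            # slicing is assumed to yield a dict of columns
df = df[df["id"] >= 10]                  # keep everything except samples with id < 10
filtered = nlp.Dataset.from_pandas(df)   # from_pandas is assumed to exist in this version
```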
https://api.github.com/repos/huggingface/datasets/issues/116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/116/comments | https://api.github.com/repos/huggingface/datasets/issues/116/events | https://github.com/huggingface/datasets/issues/116 | 618,628,264 | MDU6SXNzdWU2MTg2MjgyNjQ= | 116 | 🐛 Trying to use ROUGE metric : pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | null | [] | null | [
"Can you share your data files or a minimally reproducible example?",
"Sure, [here is a Colab notebook](https://colab.research.google.com/drive/1uiS89fnHMG7HV_cYxp3r-_LqJQvNNKs9?usp=sharing) reproducing the error.\r\n\r\n> ArrowInvalid: Column 1 named references expected length 36 but got length 56",
"This is because `add` takes as input a batch of elements and you provided only one. I think we should have `add` for one prediction/reference and `add_batch` for a batch of predictions/references. This would make it more coherent with the way we use Arrow.\r\n\r\nLet me do this change",
"Thanks for noticing though. I was mainly used to do `.compute` directly ^^",
"Thanks @lhoestq it works :)"
] | 1,589 | 1,590 | 1,590 | NONE | null | null | null | I'm trying to use rouge metric.
I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
for lp, lg in zip(p, g):
rouge.add(lp, lg)
```
But I get the following error:
> pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
---
Full stack-trace :
```
Traceback (most recent call last):
File "<stdin>", line 3, in <module>
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/metric.py", line 224, in add
self.writer.write_batch(batch)
File "/home/me/.venv/transformers/lib/python3.6/site-packages/nlp/arrow_writer.py", line 148, in write_batch
pa_table: pa.Table = pa.Table.from_pydict(batch_examples, schema=self._schema)
File "pyarrow/table.pxi", line 1550, in pyarrow.lib.Table.from_pydict
File "pyarrow/table.pxi", line 1503, in pyarrow.lib.Table.from_arrays
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323
```
(`nlp` installed from source) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/116/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/116/timeline | null | completed | false |
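A sketch of the usage after the fix described in the maintainer's comments: `add` takes a single prediction/reference pair, `add_batch` takes lists, and `compute` aggregates at the end; argument names beyond those shown in the thread are assumptions.
```python
import nlp

rouge = nlp.load_metric("rouge")

with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
    for lp, lg in zip(p, g):
        rouge.add(lp, lg)        # one prediction/reference pair at a time
        # for whole batches the comments point to add_batch instead:
        # rouge.add_batch(list_of_predictions, list_of_references)

score = rouge.compute()          # aggregate once all pairs have been added
print(score)
```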
https://api.github.com/repos/huggingface/datasets/issues/115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/115/comments | https://api.github.com/repos/huggingface/datasets/issues/115/events | https://github.com/huggingface/datasets/issues/115 | 618,615,855 | MDU6SXNzdWU2MTg2MTU4NTU= | 115 | AttributeError: 'dict' object has no attribute 'info' | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I could access the info by first accessing the different splits :\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\nprint(cnn_dm['train'].info)\r\n```\r\n\r\nInformation seems to be duplicated between the subsets :\r\n\r\n```python\r\nprint(cnn_dm[\"train\"].info == cnn_dm[\"test\"].info == cnn_dm[\"validation\"].info)\r\n# True\r\n```\r\n\r\nIs it expected ?",
"Good point @Colanim ! What happens under the hood when running:\r\n\r\n```python\r\nimport nlp\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail')\r\n```\r\n\r\nis that for every split in `cnn_dailymail`, a different dataset object (which all holds the same info) is created. This has the advantages that the datasets are easily separable in a training setup. \r\nAlso note that you can load e.g. only the `train` split of the dataset via:\r\n\r\n```python\r\ncnn_dm_train = nlp.load_dataset('cnn_dailymail', split=\"train\")\r\nprint(cnn_dm_train.info)\r\n```\r\n\r\nI think we should make the `info` object slightly different when creating the dataset for each split - at the moment it contains for example the variable `splits` which should maybe be renamed to `split` and contain only one `SplitInfo` object ...\r\n"
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | I'm trying to access the information of CNN/DM dataset :
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/115/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/115/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/114/comments | https://api.github.com/repos/huggingface/datasets/issues/114/events | https://github.com/huggingface/datasets/issues/114 | 618,611,310 | MDU6SXNzdWU2MTg2MTEzMTA= | 114 | Couldn't reach CNN/DM dataset | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"repos_url": "https://api.github.com/users/astariul/repos",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Installing from source (instead of Pypi package) solved the problem."
] | 1,589 | 1,589 | 1,589 | NONE | null | null | null | I can't get CNN / DailyMail dataset.
```python
import nlp
assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()]
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
[Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing)
gives following error :
```
ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/cnn_dailymail/cnn_dailymail.py
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/114/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/114/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/113/comments | https://api.github.com/repos/huggingface/datasets/issues/113/events | https://github.com/huggingface/datasets/pull/113 | 618,590,562 | MDExOlB1bGxSZXF1ZXN0NDE4MjkxNjIx | 113 | Adding docstrings and some doc | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/113",
"html_url": "https://github.com/huggingface/datasets/pull/113",
"diff_url": "https://github.com/huggingface/datasets/pull/113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/113.patch",
"merged_at": 1589498564000
} | Some doc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/113/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/113/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/112 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/112/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/112/comments | https://api.github.com/repos/huggingface/datasets/issues/112/events | https://github.com/huggingface/datasets/pull/112 | 618,569,195 | MDExOlB1bGxSZXF1ZXN0NDE4Mjc0MTU4 | 112 | Qa4mre - add dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/112",
"html_url": "https://github.com/huggingface/datasets/pull/112",
"diff_url": "https://github.com/huggingface/datasets/pull/112.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/112.patch",
"merged_at": 1589534202000
} | Added dummy data test only for the first config. Will do the rest later.
I had to add some minor hacks to an important function to make it work.
There might be a cleaner way to handle it - can you take a look @thomwolf ? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/112/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/112/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/111/comments | https://api.github.com/repos/huggingface/datasets/issues/111/events | https://github.com/huggingface/datasets/pull/111 | 618,528,060 | MDExOlB1bGxSZXF1ZXN0NDE4MjQwMjMy | 111 | [Clean-up] remove under construction datastes | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/111",
"html_url": "https://github.com/huggingface/datasets/pull/111",
"diff_url": "https://github.com/huggingface/datasets/pull/111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/111.patch",
"merged_at": 1589489542000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/111/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/111/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/110/comments | https://api.github.com/repos/huggingface/datasets/issues/110/events | https://github.com/huggingface/datasets/pull/110 | 618,520,325 | MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy | 110 | fix reddit tifu dummy data | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/110",
"html_url": "https://github.com/huggingface/datasets/pull/110",
"diff_url": "https://github.com/huggingface/datasets/pull/110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/110.patch",
"merged_at": 1589488813000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/110/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/110/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/109/comments | https://api.github.com/repos/huggingface/datasets/issues/109/events | https://github.com/huggingface/datasets/pull/109 | 618,508,359 | MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw | 109 | [Reclor] fix reclor | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/109",
"html_url": "https://github.com/huggingface/datasets/pull/109",
"diff_url": "https://github.com/huggingface/datasets/pull/109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/109.patch",
"merged_at": 1589487548000
} | - That's probably on me. Could have made the manual data test more flexible. @mariamabarham | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/109/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/109/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/108/comments | https://api.github.com/repos/huggingface/datasets/issues/108/events | https://github.com/huggingface/datasets/pull/108 | 618,386,394 | MDExOlB1bGxSZXF1ZXN0NDE4MTIzMzc3 | 108 | convert can use manual dir as second argument | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/108",
"html_url": "https://github.com/huggingface/datasets/pull/108",
"diff_url": "https://github.com/huggingface/datasets/pull/108.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/108.patch",
"merged_at": 1589475162000
} | @mariamabarham | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/108/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/108/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/107/comments | https://api.github.com/repos/huggingface/datasets/issues/107/events | https://github.com/huggingface/datasets/pull/107 | 618,373,045 | MDExOlB1bGxSZXF1ZXN0NDE4MTEyNzcx | 107 | add writer_batch_size to GeneratorBasedBuilder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome that's great!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/107",
"html_url": "https://github.com/huggingface/datasets/pull/107",
"diff_url": "https://github.com/huggingface/datasets/pull/107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/107.patch",
"merged_at": 1589475029000
} | You can now specify `writer_batch_size` in the builder arguments or directly in `load_dataset` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/107/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/107/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/106/comments | https://api.github.com/repos/huggingface/datasets/issues/106/events | https://github.com/huggingface/datasets/pull/106 | 618,361,418 | MDExOlB1bGxSZXF1ZXN0NDE4MTAzMjM3 | 106 | Add data dir test command | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice - I think we can merge this. I will update the checksums for `wikihow` then as well"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/106",
"html_url": "https://github.com/huggingface/datasets/pull/106",
"diff_url": "https://github.com/huggingface/datasets/pull/106.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/106.patch",
"merged_at": 1589474950000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/106/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/106/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/105 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/105/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/105/comments | https://api.github.com/repos/huggingface/datasets/issues/105/events | https://github.com/huggingface/datasets/pull/105 | 618,345,191 | MDExOlB1bGxSZXF1ZXN0NDE4MDg5Njgz | 105 | [New structure on AWS] Adapt paths | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/105",
"html_url": "https://github.com/huggingface/datasets/pull/105",
"diff_url": "https://github.com/huggingface/datasets/pull/105.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/105.patch",
"merged_at": 1589471787000
} | Some small changes so that we have the correct paths. @julien-c | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/105/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/105/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/104/comments | https://api.github.com/repos/huggingface/datasets/issues/104/events | https://github.com/huggingface/datasets/pull/104 | 618,277,081 | MDExOlB1bGxSZXF1ZXN0NDE4MDMzOTY0 | 104 | Add trivia_q | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,594 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/104",
"html_url": "https://github.com/huggingface/datasets/pull/104",
"diff_url": "https://github.com/huggingface/datasets/pull/104.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/104.patch",
"merged_at": 1589487812000
} | Currently tested only for one config to pass tests. Needs to add more dummy data later. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/104/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/104/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/103/comments | https://api.github.com/repos/huggingface/datasets/issues/103/events | https://github.com/huggingface/datasets/pull/103 | 618,233,637 | MDExOlB1bGxSZXF1ZXN0NDE3OTk5MDIy | 103 | [Manual downloads] add logic proposal for manual downloads and add wikihow | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> \r\n> The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> \r\n> The dataset can then be loaded via:\r\n> \r\n> ```python\r\n> import nlp\r\n> nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> ```\r\n> \r\n> I added/changed so that there are explicit error messages when using manually downloaded files.\r\n\r\nwouldn't be nicer if we can have `manual_dir/wikihow`? ",
"> > Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.\r\n> > The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.\r\n> > The dataset can then be loaded via:\r\n> > ```python\r\n> > import nlp\r\n> > nlp.load_dataset(\"wikihow\", data_dir=\"~/wikihow/manual_dir\")\r\n> > ```\r\n> > \r\n> > \r\n> > I added/changed so that there are explicit error messages when using manually downloaded files.\r\n> \r\n> wouldn't be nicer if we can have `manual_dir/wikihow`?\r\n\r\nSure, I mean the user can decide whatever he likes best :-) The path one puts in `data_dir` will be used as the path to the manual dir. `nlp.load_dataset(\"wikihow\", data_dir=\"~/manual_dir/wikihow\")` would work as well as any other path ;-) ",
"Perfect! You can merge!"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/103",
"html_url": "https://github.com/huggingface/datasets/pull/103",
"diff_url": "https://github.com/huggingface/datasets/pull/103.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/103.patch",
"merged_at": 1589466460000
} | Wikihow is an example that needs to manually download two files as stated in: https://github.com/mahnazkoupaee/WikiHow-Dataset.
The user can then store these files under a hard-coded name: `wikihowAll.csv` and `wikihowSep.csv` in this case in a directory of his choice, e.g. `~/wikihow/manual_dir`.
The dataset can then be loaded via:
```python
import nlp
nlp.load_dataset("wikihow", data_dir="~/wikihow/manual_dir")
```
I added/changed so that there are explicit error messages when using manually downloaded files.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/103/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/103/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/102/comments | https://api.github.com/repos/huggingface/datasets/issues/102/events | https://github.com/huggingface/datasets/pull/102 | 618,231,216 | MDExOlB1bGxSZXF1ZXN0NDE3OTk3MDQz | 102 | Run save infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Haha that cornell dialogue dataset - that ran for 3h on my computer as well. The `generate_examples` method in this script is one of the most inefficient code samples I've ever seen :D ",
"Indeed it's been 3 hours already\r\n```73111 examples [3:07:48, 2.40 examples/s]```"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/102",
"html_url": "https://github.com/huggingface/datasets/pull/102",
"diff_url": "https://github.com/huggingface/datasets/pull/102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/102.patch",
"merged_at": 1589470983000
} | I replaced the old checksum file with the new `dataset_infos.json` by running the script on almost all the datasets we have. The only one that is still running on my side is the cornell dialog | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/102/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/102/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/101/comments | https://api.github.com/repos/huggingface/datasets/issues/101/events | https://github.com/huggingface/datasets/pull/101 | 618,111,651 | MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2 | 101 | [Reddit] add reddit | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/101",
"html_url": "https://github.com/huggingface/datasets/pull/101",
"diff_url": "https://github.com/huggingface/datasets/pull/101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/101.patch",
"merged_at": 1589452044000
} | - Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/101/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/101/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/100/comments | https://api.github.com/repos/huggingface/datasets/issues/100/events | https://github.com/huggingface/datasets/pull/100 | 618,081,602 | MDExOlB1bGxSZXF1ZXN0NDE3ODc1MjE2 | 100 | Add per type scores in seqeval metric | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LGTM :-) Some small suggestions to shorten the code a bit :-) ",
"Can you put the kwargs as normal kwargs instead of a dict? (And add them to the kwargs description As well)",
"@thom Is-it what you meant?",
"Yes and there is a dynamically generated doc string in the metric script KWARGS DESCRIPTION"
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/100",
"html_url": "https://github.com/huggingface/datasets/pull/100",
"diff_url": "https://github.com/huggingface/datasets/pull/100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/100.patch",
"merged_at": 1589498494000
} | This PR adds a bit more detail to the seqeval metric. Now the usage and output are:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
predictions = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
met.compute(predictions, references)
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
```
It is also possible to compute scores for non-IOB notations; POS tagging, for example, doesn't use this kind of notation. Add the `suffix` parameter:
```python
import nlp
met = nlp.load_metric('metrics/seqeval')
references = [['O', 'O', 'O', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
predictions = [['O', 'O', 'MISC', 'MISC', 'MISC', 'MISC', 'O'], ['PER', 'PER', 'O']]
met.compute(predictions, references, metrics_kwargs={"suffix": True})
#Output: {'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.9}
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/100/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/100/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/99 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/99/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/99/comments | https://api.github.com/repos/huggingface/datasets/issues/99/events | https://github.com/huggingface/datasets/pull/99 | 618,026,700 | MDExOlB1bGxSZXF1ZXN0NDE3ODMxNjky | 99 | [Cmrc 2018] fix cmrc2018 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/99",
"html_url": "https://github.com/huggingface/datasets/pull/99",
"diff_url": "https://github.com/huggingface/datasets/pull/99.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/99.patch",
"merged_at": 1589446181000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/99/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/99/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/98 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/98/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/98/comments | https://api.github.com/repos/huggingface/datasets/issues/98/events | https://github.com/huggingface/datasets/pull/98 | 617,957,739 | MDExOlB1bGxSZXF1ZXN0NDE3Nzc3NDcy | 98 | Webis tl-dr | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?",
"> Should that rather be in an organization scope, @thomwolf @patrickvonplaten ?\r\n\r\nI'm a bit indifferent - both would be fine for me!",
"@jplu - if creating the dummy_data is too tedious, I can do it as well :-) ",
"There is dummy_data here, no ?",
"Yeah I think naming it webis/tl_dr would be best @jplu if that works for you",
"No problem at all!! On it^^",
"> There is dummy_data here, no ?\r\n\r\nSome paths were wrong - the structure is really confusing and the error messages don't really help either - I have to think about how to make this easier to understand!\r\n\r\nHope it was ok that I fiddled with your PR !",
"> Some paths were wrong - the structure is really confusing and the error message don't really help either - I have to think about how to make this easier to understand!\r\n\r\nOh ok! I haven't noticed that sorry :(\r\n\r\n> Hope it was ok that I fiddled with your PR !\r\n\r\nOf course it was ok :)",
"@julien-c Looks like what you have in mind?\r\n\r\n```python\r\nimport nlp\r\nnlp.load_dataset(\"datasets/webis\", \"tl_dr\")\r\n\r\n#Output: Downloading and preparing dataset webis/tl_dr (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/jplu/.cache/huggingface/datasets/webis/tl_dr/1.0.0...\r\n```",
"Merging this for now. Maybe we can see whether to rename it in a different PR @julien-c ? \r\n",
"Hi, \r\nAuthor here of the webis-tldr corpus. Any plans on integrating this dataset into the hub? I remember we could access it in the previous versions of the library. If there is a particular issue that I can help with, do let me know.\r\n\r\nThanks!",
"Hi @shahbazsyed, this dataset _is_ inside the hub but it's namespaced by the organization name `webis`.\r\n\r\nYou can load it following the steps described in https://huggingface.co/datasets/webis/tl_dr\r\n\r\nHere's a Colab showcasing that it works: https://colab.research.google.com/drive/11IrzRVpnMLJZ8_UFFHLR8FhiajjAHRUU?usp=sharing\r\n\r\nThe reason the code is in S3 and not in this repo is that the dataset is namespaced under the `webis` organization. We don't have a lot of namespaced datasets yet but this should become the main way we add more datasets in the future.\r\nLet us know if that's an issue for you. Thank you!"
] | 1,589 | 1,599 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/98",
"html_url": "https://github.com/huggingface/datasets/pull/98",
"diff_url": "https://github.com/huggingface/datasets/pull/98.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/98.patch",
"merged_at": 1589489655000
} | Add the Webis TL;DR dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/98/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/98/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/97 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/97/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/97/comments | https://api.github.com/repos/huggingface/datasets/issues/97/events | https://github.com/huggingface/datasets/pull/97 | 617,809,431 | MDExOlB1bGxSZXF1ZXN0NDE3NjU4MDcy | 97 | [Csv] add tests for csv dataset script | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@thomwolf - can you check and merge if ok? "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/97",
"html_url": "https://github.com/huggingface/datasets/pull/97",
"diff_url": "https://github.com/huggingface/datasets/pull/97.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/97.patch",
"merged_at": 1589412195000
} | Adds dummy data tests for csv. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/97/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/97/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/96 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/96/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/96/comments | https://api.github.com/repos/huggingface/datasets/issues/96/events | https://github.com/huggingface/datasets/pull/96 | 617,739,521 | MDExOlB1bGxSZXF1ZXN0NDE3NjAwMjY4 | 96 | lm1b | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I might have a different version of `isort` than others. It seems like I'm always reordering the imports of others. But isn't really a problem..."
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/96",
"html_url": "https://github.com/huggingface/datasets/pull/96",
"diff_url": "https://github.com/huggingface/datasets/pull/96.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/96.patch",
"merged_at": 1589465609000
} | Add lm1b dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/96/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/96/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/95 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/95/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/95/comments | https://api.github.com/repos/huggingface/datasets/issues/95/events | https://github.com/huggingface/datasets/pull/95 | 617,703,037 | MDExOlB1bGxSZXF1ZXN0NDE3NTY5NzA4 | 95 | Replace checksums files by Dataset infos json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Great! LGTM :-) ",
"> Ok, really clean!\r\n> I like the logic (not a huge fan of using `_asdict_inner` but it makes sense).\r\n> I think it's a nice improvement!\r\n> \r\n> How should we update the files in the repo? Run a big job on a server or on somebody's computer who has most of the datasets already downloaded?\r\n\r\nMaybe we can split the updates among us...IMO most datasets run very quickly. \r\nI think I've downloaded 50 datasets and 80% are loaded in <5min, 15% in <1h and then `wmt` which is still downloading (since 12h). \r\nI deleted my cache because the `wmt` downloads require quite a lot of space, so I only have parts of the `wmt` datasets on my computer. \r\n\r\n@mariamabarham I guess you have downloaded most of the datasets no? "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/95",
"html_url": "https://github.com/huggingface/datasets/pull/95",
"diff_url": "https://github.com/huggingface/datasets/pull/95.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/95.patch",
"merged_at": 1589446722000
} | ### Better verifications when loading a dataset
I replaced the `urls_checksums` directory that used to contain `checksums.txt` and `cached_sizes.txt`, by a single file `dataset_infos.json`. It's just a dict `config_name` -> `DatasetInfo`.
It simplifies and improves how verifications of checksums and split sizes are done, as they're all stored in `DatasetInfo` (one per config). Also, already having access to `DatasetInfo` makes it possible to check disk space before running `download_and_prepare` for a given config.
The dataset infos json file is user readable, you can take a look at the squad one that I generated in this PR.
### Renaming
According to these changes, I did some renaming:
`save_checksums` -> `save_infos`
`ignore_checksums` -> `ignore_verifications`
for example, when you are creating a dataset you have to run
```nlp-cli test path/to/my/dataset --save_infos --all_configs```
instead of
```nlp-cli test path/to/my/dataset --save_checksums --all_configs```
### And now, the fun part
We'll have to rerun the `nlp-cli test ... --save_infos --all_configs` for all the datasets
-----------------
feedback appreciated ! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/95/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/95/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/94 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/94/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/94/comments | https://api.github.com/repos/huggingface/datasets/issues/94/events | https://github.com/huggingface/datasets/pull/94 | 617,571,340 | MDExOlB1bGxSZXF1ZXN0NDE3NDYyMTIw | 94 | Librispeech | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@jplu - I changed this weird archieve - iter method to something simpler. It's only one file to download anyways so I don't see the point of using weird iter methods...It's a huge file though :D 30 million lines of text. Took me quite some time to download :D "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/94",
"html_url": "https://github.com/huggingface/datasets/pull/94",
"diff_url": "https://github.com/huggingface/datasets/pull/94.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/94.patch",
"merged_at": 1589405342000
} | Add librispeech dataset and remove some useless content. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/94/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/94/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/93 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/93/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/93/comments | https://api.github.com/repos/huggingface/datasets/issues/93/events | https://github.com/huggingface/datasets/pull/93 | 617,522,029 | MDExOlB1bGxSZXF1ZXN0NDE3NDIxODUy | 93 | Cleanup notebooks and various fixes | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/93",
"html_url": "https://github.com/huggingface/datasets/pull/93",
"diff_url": "https://github.com/huggingface/datasets/pull/93.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/93.patch",
"merged_at": 1589382107000
} | Fixes on dataset (more flexible) metrics (fix) and general clean ups | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/93/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/93/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/92 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/92/comments | https://api.github.com/repos/huggingface/datasets/issues/92/events | https://github.com/huggingface/datasets/pull/92 | 617,341,505 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky | 92 | [WIP] add wmt14 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/92",
"html_url": "https://github.com/huggingface/datasets/pull/92",
"diff_url": "https://github.com/huggingface/datasets/pull/92.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/92.patch",
"merged_at": 1589627857000
} | WMT14 takes forever to download :-/
- WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/92/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/92/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/91 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/91/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/91/comments | https://api.github.com/repos/huggingface/datasets/issues/91/events | https://github.com/huggingface/datasets/pull/91 | 617,339,484 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc0MjA0 | 91 | [Paracrawl] add paracrawl | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/91",
"html_url": "https://github.com/huggingface/datasets/pull/91",
"diff_url": "https://github.com/huggingface/datasets/pull/91.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/91.patch",
"merged_at": 1589366414000
} | - Huge dataset - took ~1h to download
- Also this PR reformats all dataset scripts and adds `datasets` to `make style` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/91/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/91/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/90 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/90/comments | https://api.github.com/repos/huggingface/datasets/issues/90/events | https://github.com/huggingface/datasets/pull/90 | 617,311,877 | MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0 | 90 | Add download gg drive | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"awesome - so no manual downloaded needed here? ",
"Yes exactly. It works like a standard download"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/90",
"html_url": "https://github.com/huggingface/datasets/pull/90",
"diff_url": "https://github.com/huggingface/datasets/pull/90.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/90.patch",
"merged_at": 1589364331000
} | We can now add datasets that download from google drive | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/90/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/89 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/89/comments | https://api.github.com/repos/huggingface/datasets/issues/89/events | https://github.com/huggingface/datasets/pull/89 | 617,295,069 | MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4 | 89 | Add list and inspect methods - cleanup hf_api | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/89",
"html_url": "https://github.com/huggingface/datasets/pull/89",
"diff_url": "https://github.com/huggingface/datasets/pull/89.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/89.patch",
"merged_at": 1589362390000
  } | Add a bunch of methods to easily list and inspect the processing scripts uploaded to S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_metric(path, local_path)
```
Also clean up the `HfAPI` to use `dataclasses` for better user-experience | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/89/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/88 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/88/comments | https://api.github.com/repos/huggingface/datasets/issues/88/events | https://github.com/huggingface/datasets/pull/88 | 617,284,664 | MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw | 88 | Add wiki40b | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"merged_at": 1589373114000
} | This one is a beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/88/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/87 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/87/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/87/comments | https://api.github.com/repos/huggingface/datasets/issues/87/events | https://github.com/huggingface/datasets/pull/87 | 617,267,118 | MDExOlB1bGxSZXF1ZXN0NDE3MjE1NzA0 | 87 | Add Flores | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/87",
"html_url": "https://github.com/huggingface/datasets/pull/87",
"diff_url": "https://github.com/huggingface/datasets/pull/87.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/87.patch",
"merged_at": 1589361813000
} | Beautiful language for sure! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/87/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/87/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/86 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/86/comments | https://api.github.com/repos/huggingface/datasets/issues/86/events | https://github.com/huggingface/datasets/pull/86 | 617,260,972 | MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2 | 86 | [Load => load_dataset] change naming | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/86",
"html_url": "https://github.com/huggingface/datasets/pull/86",
"diff_url": "https://github.com/huggingface/datasets/pull/86.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/86.patch",
"merged_at": 1589359857000
} | Rename leftovers @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/86/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/86/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/85 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/85/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/85/comments | https://api.github.com/repos/huggingface/datasets/issues/85/events | https://github.com/huggingface/datasets/pull/85 | 617,253,428 | MDExOlB1bGxSZXF1ZXN0NDE3MjA0ODA4 | 85 | Add boolq | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome :-) Thanks for adding the function to the Mock DL Manager"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/85",
"html_url": "https://github.com/huggingface/datasets/pull/85",
"diff_url": "https://github.com/huggingface/datasets/pull/85.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/85.patch",
"merged_at": 1589360978000
} | I just added the dummy data for this dataset.
This one uses `tf.io.gfile.copy` to download the data, but I added support for custom downloads in the mock_download_manager. I also had to add a `tensorflow` dependency for tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/85/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/85/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/84 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/84/comments | https://api.github.com/repos/huggingface/datasets/issues/84/events | https://github.com/huggingface/datasets/pull/84 | 617,249,815 | MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz | 84 | [TedHrLr] add left dummy data | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/84",
"html_url": "https://github.com/huggingface/datasets/pull/84",
"diff_url": "https://github.com/huggingface/datasets/pull/84.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/84.patch",
"merged_at": 1589358561000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/84/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/84/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/83 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/83/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/83/comments | https://api.github.com/repos/huggingface/datasets/issues/83/events | https://github.com/huggingface/datasets/pull/83 | 616,863,601 | MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz | 83 | New datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/83",
"html_url": "https://github.com/huggingface/datasets/pull/83",
"diff_url": "https://github.com/huggingface/datasets/pull/83.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/83.patch",
"merged_at": 1589307765000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/83/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/83/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/82 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/82/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/82/comments | https://api.github.com/repos/huggingface/datasets/issues/82/events | https://github.com/huggingface/datasets/pull/82 | 616,805,194 | MDExOlB1bGxSZXF1ZXN0NDE2ODQ1Njc5 | 82 | [Datasets] add ted_hrlr | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/82",
"html_url": "https://github.com/huggingface/datasets/pull/82",
"diff_url": "https://github.com/huggingface/datasets/pull/82.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/82.patch",
"merged_at": 1589356372000
} | @thomwolf - After looking at `xnli` I think it's better to leave the translation features and add a `translation` key to make them work in our framework.
The result looks like this:
![Screenshot from 2020-05-12 18-34-43](https://user-images.githubusercontent.com/23423619/81721933-ee1faf00-9480-11ea-9e95-d6557cbd0ce0.png)
you can see that each split has a `translation` key whose value is the nlp.features.Translation object.
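For illustration, a dataset script might declare this feature roughly as follows (a hedged sketch: only `nlp.features.Translation` is named above; the `Features` wrapper and the `languages` argument are assumptions):
```python
import nlp

# Hypothetical feature declaration for an az->en config (language codes are illustrative)
features = nlp.Features(
    {"translation": nlp.features.Translation(languages=["az", "en"])}
)
```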
That's a simple change. If it's ok for you, I will add dummy data for the other configs and treat the other translation scripts in the same way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/82/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/82/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/81 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/81/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/81/comments | https://api.github.com/repos/huggingface/datasets/issues/81/events | https://github.com/huggingface/datasets/pull/81 | 616,793,010 | MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1 | 81 | add tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/81",
"html_url": "https://github.com/huggingface/datasets/pull/81",
"diff_url": "https://github.com/huggingface/datasets/pull/81.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/81.patch",
"merged_at": 1589355836000
} | Tests for py_utils functions and for the BaseReader used to read from arrow and parquet.
I also removed unused utils functions. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/81/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/81/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/80 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/80/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/80/comments | https://api.github.com/repos/huggingface/datasets/issues/80/events | https://github.com/huggingface/datasets/pull/80 | 616,786,803 | MDExOlB1bGxSZXF1ZXN0NDE2ODMwNjk3 | 80 | Add nbytes + nexamples check | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks good to me! Should we hard code those numbers in the config classes and make sure that when loading a dataset that the numbers match? "
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/80",
"html_url": "https://github.com/huggingface/datasets/pull/80",
"diff_url": "https://github.com/huggingface/datasets/pull/80.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/80.patch",
"merged_at": 1589356353000
} | ### Save size and number of examples
Now when you do `save_checksums`, it also creates `cached_sizes.txt` right next to the checksum file.
This new file stores the bytes sizes and the number of examples of each split that has been prepared and stored in the cache. Example:
```
# Cached sizes: <full_config_name> <num_bytes> <num_examples>
hansards/house/1.0.0/test 22906629 122290
hansards/house/1.0.0/train 191459584 947969
hansards/senate/1.0.0/test 5711686 25553
hansards/senate/1.0.0/train 40324278 182135
```
### Check processing output
If there is a `cached_sizes.txt`, then each time we run `download_and_prepare` it will make sure that the sizes match. You can set `ignore_checksums=True` if you don't want that to happen.
### Fill Dataset Info
All the split infos and the checksums are now stored correctly in DatasetInfo after `download_and_prepare`
### Check space on disk before running `download_and_prepare`
Check if the free space is lower than the sum of the sizes of the files listed in `checksums.txt` and `cached_sizes.txt`. This is not ideal though, as it considers the files for all configs.
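As a rough illustration of this check (a hedged sketch: the file path and the error handling are assumptions; only the `cached_sizes.txt` format comes from the example above):
```python
import shutil

def required_bytes(cached_sizes_path):
    """Sum the <num_bytes> column of a cached_sizes.txt file."""
    total = 0
    with open(cached_sizes_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # format: <full_config_name> <num_bytes> <num_examples>
            _, num_bytes, _ = line.split()
            total += int(num_bytes)
    return total

needed = required_bytes("urls_checksums/cached_sizes.txt")  # hypothetical location
free = shutil.disk_usage(".").free
if free < needed:
    print(f"Not enough disk space: need {needed} bytes, only {free} free")
```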
TODO:
A better way to do it would be to save the `DatasetInfo` instead of the `checksums.txt` and `cached_sizes.txt`, in order to have one file per dataset config (and therefore consider only the sizes of the files for one config and not all of them). It would also be an opportunity to factorize all the `download_and_prepare` verifications. Maybe in the next PR? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/80/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/80/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/79 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/79/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/79/comments | https://api.github.com/repos/huggingface/datasets/issues/79/events | https://github.com/huggingface/datasets/pull/79 | 616,785,613 | MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy | 79 | [Convert] add new pattern | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/79",
"html_url": "https://github.com/huggingface/datasets/pull/79",
"diff_url": "https://github.com/huggingface/datasets/pull/79.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/79.patch",
"merged_at": 1589300229000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/79/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/79/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/78 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/78/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/78/comments | https://api.github.com/repos/huggingface/datasets/issues/78/events | https://github.com/huggingface/datasets/pull/78 | 616,774,275 | MDExOlB1bGxSZXF1ZXN0NDE2ODIwNzU5 | 78 | [Tests] skip beam dataset tests for now | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq - I moved the wkipedia file to the \"correct\" folder. ",
"Nice thanks !"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/78",
"html_url": "https://github.com/huggingface/datasets/pull/78",
"diff_url": "https://github.com/huggingface/datasets/pull/78.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/78.patch",
"merged_at": 1589300182000
} | For now we will skip tests for Beam Datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/78/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/78/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/77 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/77/comments | https://api.github.com/repos/huggingface/datasets/issues/77/events | https://github.com/huggingface/datasets/pull/77 | 616,674,601 | MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz | 77 | New datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/77",
"html_url": "https://github.com/huggingface/datasets/pull/77",
"diff_url": "https://github.com/huggingface/datasets/pull/77.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/77.patch",
"merged_at": 1589292135000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/77/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/77/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/76 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/76/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/76/comments | https://api.github.com/repos/huggingface/datasets/issues/76/events | https://github.com/huggingface/datasets/pull/76 | 616,579,228 | MDExOlB1bGxSZXF1ZXN0NDE2NjYyMTk2 | 76 | pin flake 8 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/76",
"html_url": "https://github.com/huggingface/datasets/pull/76",
"diff_url": "https://github.com/huggingface/datasets/pull/76.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/76.patch",
"merged_at": 1589282854000
} | Flake 8's new version does not like our format. Pinning the version for now. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/76/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/76/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/75 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/75/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/75/comments | https://api.github.com/repos/huggingface/datasets/issues/75/events | https://github.com/huggingface/datasets/pull/75 | 616,520,163 | MDExOlB1bGxSZXF1ZXN0NDE2NjE0MzU1 | 75 | WIP adding metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It's all about my metric stuff so I'll probably merge it unless you want to have a look.\r\n\r\nTook the occasion to remove the old doc and requirements.txt"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/75",
"html_url": "https://github.com/huggingface/datasets/pull/75",
"diff_url": "https://github.com/huggingface/datasets/pull/75.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/75.patch",
"merged_at": 1589355850000
} | Adding the following metrics as identified by @mariamabarham:
1. BLEU: BiLingual Evaluation Understudy: https://github.com/tensorflow/nmt/blob/master/nmt/scripts/bleu.py, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/bleu.py (multilingual)
2. GLEU: Google-BLEU: https://github.com/cnap/gec-ranking/blob/master/scripts/compute_gleu
3. Sacrebleu: https://pypi.org/project/sacrebleu/1.4.8/ (pypi package), https://github.com/mjpost/sacrebleu (github implementation)
4. ROUGE: Recall-Oriented Understudy for Gisting Evaluation: https://github.com/google-research/google-research/tree/master/rouge, https://github.com/chakki-works/sumeval/blob/master/sumeval/metrics/rouge.py (multilingual)
5. Seqeval: https://github.com/chakki-works/seqeval (github implementation), https://pypi.org/project/seqeval/0.0.12/ (pypi package)
6. Coval: coreference evaluation package for the CoNLL and ARRAU datasets https://github.com/ns-moosavi/coval
7. SQuAD v1 evaluation script
8. SQuAD V2 evaluation script: https://worksheets.codalab.org/rest/bundles/0x6b567e1cf2e041ec80d7098f031c5c9e/contents/blob/
9. GLUE
10. XNLI
Not now:
1. Perplexity: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/perplexity.py
2. Spearman: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/spearman_correlation.py
3. F1_measure: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/f1_measure.py
4. Pearson_corelation: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/pearson_correlation.py
5. AUC: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/auc.py
6. Entropy: https://github.com/allenai/allennlp/blob/master/allennlp/training/metrics/entropy.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/75/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/75/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/74 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/74/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/74/comments | https://api.github.com/repos/huggingface/datasets/issues/74/events | https://github.com/huggingface/datasets/pull/74 | 616,511,101 | MDExOlB1bGxSZXF1ZXN0NDE2NjA3MDcy | 74 | fix overflow check | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/74",
"html_url": "https://github.com/huggingface/datasets/pull/74",
"diff_url": "https://github.com/huggingface/datasets/pull/74.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/74.patch",
"merged_at": 1589277877000
} | I did some tests and unfortunately the test
```
pa_array.nbytes > MAX_BATCH_BYTES
```
doesn't work. Indeed, for a StructArray, `nbytes` can be less than 2GB even if there is an overflow (it loops...).
I don't think we can do a proper overflow test for the limit of 2GB...
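For reference, a tiny example of what `nbytes` reports on a small StructArray (purely illustrative values; the issue described here only shows up near the 2GB limit):
```python
import pyarrow as pa

MAX_BATCH_BYTES = 2 << 30  # ~2GiB, mirroring the constant in the snippet above

# pa.array on a list of dicts builds a StructArray; nbytes sums its underlying buffers
pa_array = pa.array([{"idx": i, "text": "some example text"} for i in range(3)])
print(type(pa_array).__name__, pa_array.nbytes, pa_array.nbytes > MAX_BATCH_BYTES)
```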
For now I replaced it with a sanity check on the first element. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/74/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/74/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/73 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/73/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/73/comments | https://api.github.com/repos/huggingface/datasets/issues/73/events | https://github.com/huggingface/datasets/pull/73 | 616,417,845 | MDExOlB1bGxSZXF1ZXN0NDE2NTMyMTg1 | 73 | JSON script | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.github.com/users/jplu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jplu/subscriptions",
"organizations_url": "https://api.github.com/users/jplu/orgs",
"repos_url": "https://api.github.com/users/jplu/repos",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"received_events_url": "https://api.github.com/users/jplu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The tests for the Wikipedia dataset do not pass anymore with the error:\r\n```\r\nTo be able to use this dataset, you need to install the following dependencies ['mwparserfromhell'] using 'pip install mwparserfromhell' for instance'\r\n```",
"This was an issue on master. You can just rebase from master.",
"Perfect! Indeed, it worked^^ Thanks @lhoestq ",
"Currently the dummy_data tests are always green because in a PR the dataset is not yet synchronized with aws. This PR fixes this: https://github.com/huggingface/nlp/pull/140 . \r\n\r\nCould you test `json` locally or wait until the PR: https://github.com/huggingface/nlp/pull/140 is merged ? :-) ",
"Ok, I will wait #140 to be merged and then rebase :) "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/73",
"html_url": "https://github.com/huggingface/datasets/pull/73",
"diff_url": "https://github.com/huggingface/datasets/pull/73.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/73.patch",
"merged_at": 1589784636000
} | Add a JSONS script to read JSON datasets from files. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/73/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/73/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/72 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/72/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/72/comments | https://api.github.com/repos/huggingface/datasets/issues/72/events | https://github.com/huggingface/datasets/pull/72 | 616,225,010 | MDExOlB1bGxSZXF1ZXN0NDE2Mzc4Mjg4 | 72 | [README dummy data tests] README to better understand how the dummy data structure works | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/72",
"html_url": "https://github.com/huggingface/datasets/pull/72",
"diff_url": "https://github.com/huggingface/datasets/pull/72.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/72.patch",
"merged_at": 1589235961000
In this PR, a README.md is added to the tests to shed more light on how the dummy data structure works. I try to explain the different possible cases. IMO the best way to understand the logic is to check out the dummy data structure of the different datasets I mention in the README.md, since those are the "edge cases".
@mariamabarham @thomwolf @lhoestq @jplu - I'd be happy if you could check out the dummy data structure and give some feedback on possible improvements. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/72/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/72/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/71 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/71/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/71/comments | https://api.github.com/repos/huggingface/datasets/issues/71/events | https://github.com/huggingface/datasets/pull/71 | 615,942,180 | MDExOlB1bGxSZXF1ZXN0NDE2MTUxODM4 | 71 | Fix arrow writer for big datasets using writer_batch_size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"After a quick chat with Yacine : the 2Go test may not be sufficient actually, as I'm looking at the size of the array and not the size of the current_rows. If the test doesn't do the job I think I'll remove it and lower the batch size a bit to be sure that it never exceeds 2Go. I'll do more tests later"
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/71",
"html_url": "https://github.com/huggingface/datasets/pull/71",
"diff_url": "https://github.com/huggingface/datasets/pull/71.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/71.patch",
"merged_at": 1589227238000
} | This PR fixes Yacine's bug.
According to [this](https://github.com/apache/arrow/blob/master/docs/source/cpp/arrays.rst#size-limitations-and-recommendations), it is not recommended to have pyarrow arrays bigger than 2GB.
Therefore I set a default batch size of 100,000 examples per batch. In general it shouldn't exceed 2GB. If it does, I reduce the batch_size on the fly and notify the user with a warning. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/71/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/71/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/70 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/70/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/70/comments | https://api.github.com/repos/huggingface/datasets/issues/70/events | https://github.com/huggingface/datasets/pull/70 | 615,679,102 | MDExOlB1bGxSZXF1ZXN0NDE1OTM3NDgw | 70 | adding RACE, QASC, Super_glue and Tiny_shakespear datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.github.com/users/mariamabarham/followers",
"following_url": "https://api.github.com/users/mariamabarham/following{/other_user}",
"gists_url": "https://api.github.com/users/mariamabarham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariamabarham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariamabarham/subscriptions",
"organizations_url": "https://api.github.com/users/mariamabarham/orgs",
"repos_url": "https://api.github.com/users/mariamabarham/repos",
"events_url": "https://api.github.com/users/mariamabarham/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariamabarham/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think rebasing to master will solve the quality test and the datasets that don't have a testing structure yet because of the manual download - maybe you can put them in `datasets under construction`? Then would also make it easier for me to see how to add tests for them :-) "
] | 1,589 | 1,589 | 1,589 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/70",
"html_url": "https://github.com/huggingface/datasets/pull/70",
"diff_url": "https://github.com/huggingface/datasets/pull/70.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/70.patch",
"merged_at": 1589289711000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/70/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/70/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/69 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/69/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/69/comments | https://api.github.com/repos/huggingface/datasets/issues/69/events | https://github.com/huggingface/datasets/pull/69 | 615,450,534 | MDExOlB1bGxSZXF1ZXN0NDE1NzYyNTQ4 | 69 | fix cache dir in builder tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice, is that the reason one cannot rerun the tests without deleting the cache? \r\n",
"Yes exactly. It was not using the temporary dir for tests."
] | 1,589 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/69",
"html_url": "https://github.com/huggingface/datasets/pull/69",
"diff_url": "https://github.com/huggingface/datasets/pull/69.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/69.patch",
"merged_at": 1589181568000
} | minor fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/69/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/69/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/68 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/68/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/68/comments | https://api.github.com/repos/huggingface/datasets/issues/68/events | https://github.com/huggingface/datasets/pull/68 | 614,882,655 | MDExOlB1bGxSZXF1ZXN0NDE1MzQ3NTgw | 68 | [CSV] re-add csv | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/68",
"html_url": "https://github.com/huggingface/datasets/pull/68",
"diff_url": "https://github.com/huggingface/datasets/pull/68.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/68.patch",
"merged_at": 1588959646000
} | Re-adding csv under the datasets under construction to keep circle ci happy - will have to see how to include it in the tests.
@lhoestq noticed that I accidentally deleted it in https://github.com/huggingface/nlp/pull/63#discussion_r422263729. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/68/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/68/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/67 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/67/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/67/comments | https://api.github.com/repos/huggingface/datasets/issues/67/events | https://github.com/huggingface/datasets/pull/67 | 614,798,483 | MDExOlB1bGxSZXF1ZXN0NDE1Mjc5NjI0 | 67 | [Tests] Test files locally | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Super nice, good job @patrickvonplaten!"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/67",
"html_url": "https://github.com/huggingface/datasets/pull/67",
"diff_url": "https://github.com/huggingface/datasets/pull/67.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/67.patch",
"merged_at": 1588951020000
} | This PR adds an `aws` and a `local` decorator to the tests so that tests now run on the local datasets.
By default, `aws` is deactivated, `local` is activated and `slow` is deactivated, so that only 1 test per dataset runs on Circle CI.
**When local is activated all folders in `./datasets` are tested.**
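For illustration, here is a minimal sketch of how such switchable test decorators can be written. This is only a hedged sketch, not necessarily the code of this PR: the environment flag names `RUN_LOCAL` and `RUN_AWS` are assumptions.

```python
import os
import unittest


def _flag_enabled(name: str, default: bool) -> bool:
    # Read a boolean flag such as RUN_AWS=1 / RUN_AWS=0 from the environment.
    value = os.environ.get(name)
    if value is None:
        return default
    return value.lower() not in ("0", "false", "no")


def local(test_case):
    # Run the test only when local dataset tests are enabled (on by default here).
    if not _flag_enabled("RUN_LOCAL", default=True):
        test_case = unittest.skip("test is local")(test_case)
    return test_case


def aws(test_case):
    # Run the test only when remote (AWS) dataset tests are enabled (off by default here).
    if not _flag_enabled("RUN_AWS", default=False):
        test_case = unittest.skip("test requires AWS")(test_case)
    return test_case
```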
**Important** When adding a dataset, we should no longer upload it to AWS. The steps are:
1. Open a PR
2. Add a dataset as described in `datasets/README.md`
3. If all tests pass, push to master
Currently we have 49 functional datasets in our code base.
We have 6 datasets "under-construction" that don't pass the tests - so I put them in a folder "datasets_under_construction" - it would be nice to open a PR to fix them and put them in the `datasets` folder.
**Important** when running tests locally, the datasets are cached so to rerun them delete your local cache via:
`rm -r ~/.cache/huggingface/datasets/*`
@thomwolf @mariamabarham @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/67/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/67/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/66 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/66/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/66/comments | https://api.github.com/repos/huggingface/datasets/issues/66/events | https://github.com/huggingface/datasets/pull/66 | 614,748,552 | MDExOlB1bGxSZXF1ZXN0NDE1MjM5Njgy | 66 | [Datasets] ReadME | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/66",
"html_url": "https://github.com/huggingface/datasets/pull/66",
"diff_url": "https://github.com/huggingface/datasets/pull/66.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/66.patch",
"merged_at": 1588945162000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/66/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/66/timeline | null | null | true |
|
https://api.github.com/repos/huggingface/datasets/issues/65 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/65/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/65/comments | https://api.github.com/repos/huggingface/datasets/issues/65/events | https://github.com/huggingface/datasets/pull/65 | 614,746,516 | MDExOlB1bGxSZXF1ZXN0NDE1MjM4MDEw | 65 | fix math dataset and xcopa | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/65",
"html_url": "https://github.com/huggingface/datasets/pull/65",
"diff_url": "https://github.com/huggingface/datasets/pull/65.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/65.patch",
"merged_at": 1588944940000
} | - fixes math dataset and xcopa, uploaded both of them to S3 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/65/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/65/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/64 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/64/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/64/comments | https://api.github.com/repos/huggingface/datasets/issues/64/events | https://github.com/huggingface/datasets/pull/64 | 614,737,057 | MDExOlB1bGxSZXF1ZXN0NDE1MjMwMjYy | 64 | [Datasets] Make master ready for datasets adding | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/64",
"html_url": "https://github.com/huggingface/datasets/pull/64",
"diff_url": "https://github.com/huggingface/datasets/pull/64.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/64.patch",
"merged_at": 1588943850000
} | Add all relevant files so that datasets can now be added on master | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/64/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/64/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/63 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/63/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/63/comments | https://api.github.com/repos/huggingface/datasets/issues/63/events | https://github.com/huggingface/datasets/pull/63 | 614,666,365 | MDExOlB1bGxSZXF1ZXN0NDE1MTczODU5 | 63 | [Dataset scripts] add all datasets scripts | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/63",
"html_url": "https://github.com/huggingface/datasets/pull/63",
"diff_url": "https://github.com/huggingface/datasets/pull/63.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/63.patch",
"merged_at": 1588937640000
} | As mentioned, we can have the canonical datasets on master. For now I also want to include all the data as present on S3 to make the synchronization easier when uploading new datasets.
@mariamabarham @lhoestq @thomwolf - what do you think?
If this is ok for you, I can sync up the master with the `add_dataset` branch: https://github.com/huggingface/nlp/pull/37 so that master is up to date. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/63/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/63/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/62 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/62/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/62/comments | https://api.github.com/repos/huggingface/datasets/issues/62/events | https://github.com/huggingface/datasets/pull/62 | 614,630,830 | MDExOlB1bGxSZXF1ZXN0NDE1MTQ1NDAx | 62 | [Cached Path] Better error message | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/62",
"html_url": "https://github.com/huggingface/datasets/pull/62",
"diff_url": "https://github.com/huggingface/datasets/pull/62.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/62.patch",
"merged_at": null
} | IMO returning `None` in this function only leads to confusion and is never helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/62/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/62/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/61 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/61/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/61/comments | https://api.github.com/repos/huggingface/datasets/issues/61/events | https://github.com/huggingface/datasets/pull/61 | 614,607,474 | MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4 | 61 | [Load] rename setup_module to prepare_module | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/61",
"html_url": "https://github.com/huggingface/datasets/pull/61",
"diff_url": "https://github.com/huggingface/datasets/pull/61.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/61.patch",
"merged_at": 1588928176000
} | Rename setup_module to prepare_module due to issues with pytest's `setup_module` function.
See: PR #59. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/61/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/61/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/60 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/60/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/60/comments | https://api.github.com/repos/huggingface/datasets/issues/60/events | https://github.com/huggingface/datasets/pull/60 | 614,372,553 | MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy | 60 | Update to simplify some datasets conversion | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome! ",
"Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)",
"> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)\r\n\r\nWe should probably open a new PR about this",
"I think it might be a good idea to both change the supervised keys to a named tuple and also handle the translation features specifically.",
"Just noticed that `pyarrow` apparently does not have a `is_boolean` function. Or do I have the wrong `pyarrow` version? ",
"Ah, it was a typo `pa.types.is_boolean` is the correct name. Will fix in: https://github.com/huggingface/nlp/pull/59"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/60",
"html_url": "https://github.com/huggingface/datasets/pull/60",
"diff_url": "https://github.com/huggingface/datasets/pull/60.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/60.patch",
"merged_at": 1588933104000
} | This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626
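For illustration, a minimal sketch of what plain Python casting driven by the pyarrow type can look like. The function name `cast_to_python` and the exact set of type checks are assumptions for this sketch, not the actual `Value` encoding code of the PR:

```python
import pyarrow as pa


def cast_to_python(value, pa_type: pa.DataType):
    # Cast the incoming value with plain Python so dataset scripts don't have to.
    if pa.types.is_boolean(pa_type):
        return bool(value)
    if pa.types.is_integer(pa_type):
        return int(value)
    if pa.types.is_floating(pa_type):
        return float(value)
    if pa.types.is_string(pa_type):
        return str(value)
    return value


# e.g. cast_to_python("7", pa.int32()) returns 7
```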
We could also change (not included in this PR yet):
- `supervized_keys` to make them a NamedTuple instead of a dataclass, and
- handle specifically the `Translation` features.
as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236
@patrickvonplaten @mariamabarham tell me if you want these two last changes as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/60/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/60/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/59 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/59/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/59/comments | https://api.github.com/repos/huggingface/datasets/issues/59/events | https://github.com/huggingface/datasets/pull/59 | 614,366,045 | MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx | 59 | Fix tests | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I can fix the tests tomorrow :-) ",
"Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. I think this is the relevant code in pytest: https://github.com/pytest-dev/pytest/blob/9d2eabb397b059b75b746259daeb20ee5588f559/src/_pytest/python.py#L460.",
"Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n\r\nI think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham \r\n\r\n",
"> Also PR: #25 introduced some renaming: `DatasetBuilder.builder_config` -> `DatasetBuilder.config` so that we will have to change most of the dataset scripts (Just replace the \"builder_config\" with \"config\").\r\n> \r\n> I think the renaming is a good idea and I can do the fix with a bash regex, but will have to re-upload most of the datasets. @thomwolf @mariamabarham\r\n\r\nI think if it only needs a re-uploading, we can rename it, `DatasetBuilder.config` is easier and sounds better",
"Ok seems to be fine. Most tests work - merging."
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/59",
"html_url": "https://github.com/huggingface/datasets/pull/59",
"diff_url": "https://github.com/huggingface/datasets/pull/59.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/59.patch",
"merged_at": 1588934811000
} | @patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/59/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/59/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/58 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/58/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/58/comments | https://api.github.com/repos/huggingface/datasets/issues/58/events | https://github.com/huggingface/datasets/pull/58 | 614,362,308 | MDExOlB1bGxSZXF1ZXN0NDE0OTM0NTY4 | 58 | Aborted PR - Fix tests | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Wait I messed up my branch, let me clean this."
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/58",
"html_url": "https://github.com/huggingface/datasets/pull/58",
"diff_url": "https://github.com/huggingface/datasets/pull/58.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/58.patch",
"merged_at": null
} | @patrickvonplaten I've broken the tests a bit with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/58/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/58/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/57 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/57/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/57/comments | https://api.github.com/repos/huggingface/datasets/issues/57/events | https://github.com/huggingface/datasets/pull/57 | 614,261,638 | MDExOlB1bGxSZXF1ZXN0NDE0ODUzMDM5 | 57 | Better cached path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I should have read this PR before doing my own: https://github.com/huggingface/nlp/pull/62 :D \r\nwill close mine. Looks great :-) ",
"> Awesome, this is really nice!\r\n> \r\n> By the way, we should improve the `cached_path` method of the `transformers` repo similarly, don't you think (@patrickvonplaten in particular).\r\n\r\nYeah, we should do the same in `transformers` I think - will note it down."
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/57",
"html_url": "https://github.com/huggingface/datasets/pull/57",
"diff_url": "https://github.com/huggingface/datasets/pull/57.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/57.patch",
"merged_at": 1588944028000
} | ### Changes:
- The `cached_path` no longer returns None if the file is missing/the url doesn't work. Instead, it can raise `FileNotFoundError` (missing file), `ConnectionError` (no cache and unreachable url) or `ValueError` (parsing error)
- Fix requests to firebase API that doesn't handle HEAD requests...
- Allow custom download in datasets script: it allows to use `tf.io.gfile.copy` for example, to download from google storage. I added an example: the `boolq` script | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/57/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/57/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/56 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/56/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/56/comments | https://api.github.com/repos/huggingface/datasets/issues/56/events | https://github.com/huggingface/datasets/pull/56 | 614,236,869 | MDExOlB1bGxSZXF1ZXN0NDE0ODMyODY4 | 56 | [Dataset] Tester add mock function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/56",
"html_url": "https://github.com/huggingface/datasets/pull/56",
"diff_url": "https://github.com/huggingface/datasets/pull/56.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/56.patch",
"merged_at": 1588873970000
} | Need to add an empty `extract()` function to make the `hansard` dataset test work. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/56/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/56/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/55 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/55/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/55/comments | https://api.github.com/repos/huggingface/datasets/issues/55/events | https://github.com/huggingface/datasets/pull/55 | 613,968,072 | MDExOlB1bGxSZXF1ZXN0NDE0NjE0MjE1 | 55 | Beam datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Right now the changes are a bit hard to read as the one from #25 are also included. You can wait until #25 is merged before looking at the implementation details",
"Nice!! I tested it a bit and works quite well. I will do a my review once the #25 will be merged because there are several overlaps.\r\n\r\nAt least I can share my thoughts on your **Next** section:\r\n1) I don't think it is a good thing to rely on tfds preprocessed datasets uploaded in their online storage, because they might be updated or deleted at any moment by Google and then possibly break our own processing.\r\n2) Improves the pipeline is always a good direction, but in the meantime we might also share the preprocessed dataset in S3 storage. Which might be another way to see 1), instead of downloading Google preprocessed datasets, using our own ones.\r\n3) Apache Beam can be easily integrated in Spark, so I don't see the need to replace Beam by Spark.",
"Ok I've merged #25 so you can rebase or merge if you want.\r\n\r\nI fully agree with @jplu notes for the \"next section\".\r\n\r\nDon't hesitate to use some credit on Google Dataflow if you think it would be useful to give it a try.",
"Pr is ready for review !\r\n\r\nNew minor changes:\r\n- re-added the csv dataset builder (it was on my branch from #25 but disappeared from master)\r\n- move the csv script and the wikipedia script to \"under construction\" for now\r\n- some renaming in the `nlp-cli test` command"
] | 1,588 | 1,589 | 1,589 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/55",
"html_url": "https://github.com/huggingface/datasets/pull/55",
"diff_url": "https://github.com/huggingface/datasets/pull/55.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/55.patch",
"merged_at": 1589181600000
} | # Beam datasets
## Intro
Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections).
The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:
- the `DirectRunner` to run the pipeline locally (default; see the small sketch after this list). However I encountered memory issues for big datasets (like the French or English Wikipedia). Small datasets work fine
- Google Dataflow. I didn't play with it.
- Spark or Flink, two well known data processing frameworks. I tried to use the Spark/Flink local runners provided by apache beam for python and wasn't able to make them work properly though...
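As a small sketch of what such a pipeline looks like, here is a toy example run with the default `DirectRunner`. It is only an illustration of "lots of `.map` over PCollections", not the actual wikipedia preprocessing pipeline:

```python
import apache_beam as beam

# A toy pipeline: a PCollection of strings, mapped to their lengths and printed.
# No runner is specified, so the default DirectRunner is used.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create(["first article", "second article"])
        | "Lengths" >> beam.Map(len)
        | "Print" >> beam.Map(print)
    )
```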
## From tfds beam datasets to our own beam datasets
Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files.
To let users download beam datasets without having to preprocess them, they also allow downloading the already preprocessed datasets from their Google storage (the beam pipeline doesn't run in that case).
On our side, we replace TFRecords by something else. Arrow or Parquet do the job but I chose Parquet as: 1) there is a built-in apache beam parquet writer that is quite convenient, and 2) reading parquet from the pyarrow library is also simple and effective (there is an mmap option!)
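For illustration, writing one split to a single parquet file and reading it back memory-mapped with pyarrow could look like the following minimal sketch; the file name is made up and the actual BeamWriter/ParquetReader code differs:

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Write one split to a single parquet file.
table = pa.table({"text": ["first article", "second article"]})
pq.write_table(table, "wikipedia-train.parquet")

# Read it back memory-mapped, so the data is not fully loaded into RAM.
reloaded = pq.read_table("wikipedia-train.parquet", memory_map=True)
print(reloaded.num_rows)
```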
Moreover we don't shard datasets into many, many files like tfds (they were probably doing that mainly because of the limit of 2GB per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their Google storage (for now maybe? we'll have to discuss it).
## Main changes
- Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos
- Created a ParquetReader and refactored arrow_reader.py a bit
\> **With this, we can now try to add beam datasets from tfds**
I already added the wikipedia one, and I will also try to add the Wiki40b dataset
## Test the wikipedia script
You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this:
```
>>> import nlp
>>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr")
```
This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol)
## Next
Should we allow downloading preprocessed datasets from the tfds Google storage?
Should we try to optimize the beam pipelines to run locally without memory issues?
Should we try other data processing frameworks for big datasets, like Spark?
## About this PR
It should be merged after #25
-----------------
I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/55/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/55/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/54 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/54/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/54/comments | https://api.github.com/repos/huggingface/datasets/issues/54/events | https://github.com/huggingface/datasets/pull/54 | 613,513,348 | MDExOlB1bGxSZXF1ZXN0NDE0MjUyODkw | 54 | [Tests] Improved Error message for dummy folder structure | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/54",
"html_url": "https://github.com/huggingface/datasets/pull/54",
"diff_url": "https://github.com/huggingface/datasets/pull/54.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/54.patch",
"merged_at": 1588788779000
} | Improved Error message | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/54/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/54/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/53 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/53/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/53/comments | https://api.github.com/repos/huggingface/datasets/issues/53/events | https://github.com/huggingface/datasets/pull/53 | 613,436,158 | MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz | 53 | [Features] Typo in generate_from_dict | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/53",
"html_url": "https://github.com/huggingface/datasets/pull/53",
"diff_url": "https://github.com/huggingface/datasets/pull/53.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/53.patch",
"merged_at": 1588865325000
} | Change `isinstance` test in features when generating features from dict. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/53/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/53/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/52 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/52/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/52/comments | https://api.github.com/repos/huggingface/datasets/issues/52/events | https://github.com/huggingface/datasets/pull/52 | 613,339,071 | MDExOlB1bGxSZXF1ZXN0NDE0MTEyMDAy | 52 | allow dummy folder structure to handle dict of lists | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/52",
"html_url": "https://github.com/huggingface/datasets/pull/52",
"diff_url": "https://github.com/huggingface/datasets/pull/52.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/52.patch",
"merged_at": 1588773318000
} | `esnli.py` needs that extension of the dummy data testing. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/52/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/52/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/51 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/51/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/51/comments | https://api.github.com/repos/huggingface/datasets/issues/51/events | https://github.com/huggingface/datasets/pull/51 | 613,266,668 | MDExOlB1bGxSZXF1ZXN0NDE0MDUyOTYw | 51 | [Testing] Improved testing structure | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome!\r\nLet's have this in the doc at the end :-)"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/51",
"html_url": "https://github.com/huggingface/datasets/pull/51",
"diff_url": "https://github.com/huggingface/datasets/pull/51.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/51.patch",
"merged_at": 1588771217000
} | This PR refactors the test design a bit and puts the mock download manager in the `utils` files as it is just a test helper class.
As @mariamabarham pointed out, creating a dummy folder structure can be quite hard to grasp.
This PR tries to change that to some extent.
It follows the following logic for the `dummy` folder structure now:
1) The data builder has no config -> the `dummy` folder structure is:
`dummy/<version>/dummy_data.zip`
2) The data builder has >= 1 configs -> the `dummy` folder structure is:
`dummy/<config_name_1>/<version>/dummy_data.zip`
`dummy/<config_name_2>/<version>/dummy_data.zip`
Now, the difficult part is how to create the `dummy_data.zip` file. There are two cases:
A) The `data_urls` parameter passed to the `download_and_extract` fn is a **string**:
-> the `dummy_data.zip` file zips the folder:
`dummy_data/<relative_path_of_folder_structure_of_url>`
B) The `data_urls` parameter passed to the `download_and_extract` fn is a **dict**:
-> the `dummy_data.zip` file zips the folder:
`dummy_data/<relative_path_of_folder_structure_of_url_behind _key_1>`
`dummy_data/<relative_path_of_folder_structure_of_url_behind _key_2>`
By relative folder structure I mean `url_path.split('/')[-1]`. As an example, the dataset **xquad** by deepmind has the following url path behind the key `de`: `https://github.com/deepmind/xquad/blob/master/xquad.de.json`
-> This means that the relative url path should be `xquad.de.json`.
@mariamabarham B) is a change from how it was before and I think it makes more sense.
Whereas before the `dummy_data.zip` file for xquad with config `de` looked like:
`dummy_data/de`, it would now look like `dummy_data/xquad.de.json`. I think this is better and easier to understand.
Therefore, there are currently 6 tests whose dummy folder structure would have to be changed, which can easily be done (~30 min).
I also added a function `print_dummy_data_folder_structure` that prints out the expected structures when testing, which should be quite helpful. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/51/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/51/timeline | null | null | true |
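The PR description in the record above explains the relative-path rule with the xquad example. As a purely illustrative sketch (the helper name is an assumption and is not part of the PR or the library), the expected dummy-data path could be derived from a download URL roughly like this:

```python
# Hypothetical helper illustrating the convention described in the PR body above:
# the dummy file for a download URL is expected under
# dummy_data/<last path component of the URL>.
def expected_dummy_path(url: str) -> str:
    relative_path = url.split("/")[-1]  # e.g. "xquad.de.json"
    return f"dummy_data/{relative_path}"

print(expected_dummy_path(
    "https://github.com/deepmind/xquad/blob/master/xquad.de.json"
))  # -> dummy_data/xquad.de.json
```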
https://api.github.com/repos/huggingface/datasets/issues/50 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/50/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/50/comments | https://api.github.com/repos/huggingface/datasets/issues/50/events | https://github.com/huggingface/datasets/pull/50 | 612,583,126 | MDExOlB1bGxSZXF1ZXN0NDEzNTAwMjE0 | 50 | [Tests] test only for fast test as a default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Test failure is not related to change in test file.\r\n"
] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/50",
"html_url": "https://github.com/huggingface/datasets/pull/50",
"diff_url": "https://github.com/huggingface/datasets/pull/50.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/50.patch",
"merged_at": 1588683736000
} | Test only one config on CircleCI to speed up testing. Add the test of all configs as a slow test.
@mariamabarham @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/50/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/50/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/49 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/49/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/49/comments | https://api.github.com/repos/huggingface/datasets/issues/49/events | https://github.com/huggingface/datasets/pull/49 | 612,545,483 | MDExOlB1bGxSZXF1ZXN0NDEzNDY5ODg0 | 49 | fix flatten nested | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,588 | 1,588 | 1,588 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/49",
"html_url": "https://github.com/huggingface/datasets/pull/49",
"diff_url": "https://github.com/huggingface/datasets/pull/49.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/49.patch",
"merged_at": 1588687165000
} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/49/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/49/timeline | null | null | true |