url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2581/comments | https://api.github.com/repos/huggingface/datasets/issues/2581/events | https://github.com/huggingface/datasets/pull/2581 | 935,783,588 | MDExOlB1bGxSZXF1ZXN0NjgyNjQwMDY4 | 2,581 | Faster search_batch for ElasticsearchIndex due to threading | {
"login": "mwrzalik",
"id": 1376337,
"node_id": "MDQ6VXNlcjEzNzYzMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwrzalik",
"html_url": "https://github.com/mwrzalik",
"followers_url": "https://api.github.com/users/mwrzalik/followers",
"following_url": "https://api.github.com/users/mwrzalik/following{/other_user}",
"gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions",
"organizations_url": "https://api.github.com/users/mwrzalik/orgs",
"repos_url": "https://api.github.com/users/mwrzalik/repos",
"events_url": "https://api.github.com/users/mwrzalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/mwrzalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"id": 6836458,
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"title": "1.10",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 29,
"state": "closed",
"created_at": 1623178113000,
"updated_at": 1626881809000,
"due_on": 1628146800000,
"closed_at": 1626881809000
} | [] | 1,625,233,327,000 | 1,626,099,226,000 | 1,626,083,571,000 | CONTRIBUTOR | null | Hey,
I think it makes sense to run search_batch in threads, so that ES can perform the searches in parallel.
Cheers! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2581/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2581/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2581",
"html_url": "https://github.com/huggingface/datasets/pull/2581",
"diff_url": "https://github.com/huggingface/datasets/pull/2581.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2581.patch",
"merged_at": 1626083571000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2580/comments | https://api.github.com/repos/huggingface/datasets/issues/2580/events | https://github.com/huggingface/datasets/pull/2580 | 935,767,421 | MDExOlB1bGxSZXF1ZXN0NjgyNjI2MTkz | 2,580 | Fix Counter import | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,232,108,000 | 1,625,236,667,000 | 1,625,236,666,000 | MEMBER | null | Import from `collections` instead of `typing`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2580/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2580/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2580",
"html_url": "https://github.com/huggingface/datasets/pull/2580",
"diff_url": "https://github.com/huggingface/datasets/pull/2580.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2580.patch",
"merged_at": 1625236666000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2579/comments | https://api.github.com/repos/huggingface/datasets/issues/2579/events | https://github.com/huggingface/datasets/pull/2579 | 935,486,894 | MDExOlB1bGxSZXF1ZXN0NjgyMzkyNjYx | 2,579 | Fix BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,209,840,000 | 1,625,211,224,000 | 1,625,211,224,000 | MEMBER | null | Add missing contributor to BibTeX entry.
cc: @abhishekkrthakur @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2579/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2579/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2579",
"html_url": "https://github.com/huggingface/datasets/pull/2579",
"diff_url": "https://github.com/huggingface/datasets/pull/2579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2579.patch",
"merged_at": 1625211224000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2578/comments | https://api.github.com/repos/huggingface/datasets/issues/2578/events | https://github.com/huggingface/datasets/pull/2578 | 935,187,497 | MDExOlB1bGxSZXF1ZXN0NjgyMTQ0OTY2 | 2,578 | Support Zstandard compressed files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> What if people want to run some tests without having zstandard ?\r\n> Usually what we do is add a decorator @require_zstandard for example\r\n\r\n@lhoestq I think I'm missing something here...\r\n\r\nTests are a *development* tool (to ensure we deliver a good quality lib), not something we offer to the end users of the lib. Users of the lib just `pip install datasets` and no tests are delivered with the lib (`tests` directory is outside the `src` code dir). \r\n\r\nOn the contrary, developers (contributors) of the lib do need to be able to run tests (TDD). And because of that, they are required to install datasets differently: `pip install -e .[dev]`, so that all required developing (and testing) dependencies are properly installed (included `zstandard`).\r\n\r\nApart from `zsatandard`, there are many other dev/test required dependencies for running tests, and we do not have a `@require_toto` for each and every of these dependencies in our tests: \r\n- `pytest` and `absl-py` (they are not dependencies in install_requires, but only in TEST_REQUIRE extras_require), \r\n- `boto3` (in test_filesystem.py), \r\n- `seqeval` (in test_metric_common.py), \r\n- `bs4` (used by eli5 and tested in test_hf_gcp.py)\r\n- ...\r\n\r\nSo IMHO, to run tests you should previously install datasets with dev or tests dependencies: either `pip install -e .[dev]` or `pip install -e .[tests]` (the latter to be used in CI testing-only part of the development cycle). And the tests should be written accordingly, assuming all tests dependencies are installed.",
"Hi !\r\nI was saying that because the other dependencies you mentioned are only required for _some_ tests. While here zstd is required for _all_ tests since it's imported in the conftest.py\r\nFeel free to keep it as it is right now, or maybe move the fixture to test_file_utils.py to allow users without zstd to run tests for their builders, dataset card etc. without issues",
"Thank you ! I think we can merge now",
"@lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type",
"> @lhoestq does this mean that the pile could have streaming support in the future? Afaik streaming doesnt support zstandard compressed type\r\n\r\njust for reference, i tried to stream one of the `.zst` files from [the pile](https://the-eye.eu/public/AI/pile/) using\r\n\r\n```python\r\ndata_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\nstreamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n```\r\n\r\nand got the following error:\r\n\r\n```\r\nUsing custom data configuration default-4e71acadc389c254\r\n---------------------------------------------------------------------------\r\nNotImplementedError Traceback (most recent call last)\r\n/tmp/ipykernel_1187680/10848115.py in <module>\r\n 1 data_files = [\"https://the-eye.eu/public/AI/pile/train/00.jsonl.zst\"]\r\n 2 \r\n----> 3 streamed_dataset = load_dataset('json', split='train', data_files=data_files, streaming=True)\r\n 4 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)\r\n 835 # this extends the open and os.path.join functions for data streaming\r\n 836 extend_module_for_streaming(builder_instance.__module__, use_auth_token=use_auth_token)\r\n--> 837 return builder_instance.as_streaming_dataset(\r\n 838 split=split,\r\n 839 use_auth_token=use_auth_token,\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token)\r\n 922 data_dir=self.config.data_dir,\r\n 923 )\r\n--> 924 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 925 # By default, return all splits\r\n 926 if split is None:\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py in _split_generators(self, dl_manager)\r\n 50 if not self.config.data_files:\r\n 51 raise ValueError(f\"At least one data file must be specified, but got data_files={self.config.data_files}\")\r\n---> 52 data_files = dl_manager.download_and_extract(self.config.data_files)\r\n 53 if isinstance(data_files, (str, list, tuple)):\r\n 54 files = data_files\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls)\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n--> 142 return self.extract(self.download(url_or_urls))\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths)\r\n 115 \r\n 116 def extract(self, path_or_paths):\r\n--> 117 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n 118 return urlpaths\r\n 119 \r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)\r\n 202 num_proc = 1\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n--> 204 mapped = [\r\n 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)\r\n 203 if num_proc <= 1 or len(iterable) <= num_proc:\r\n 204 mapped = [\r\n--> 205 _single_map_nested((function, obj, types, None, True))\r\n 206 for 
obj in utils.tqdm(iterable, disable=disable_tqdm)\r\n 207 ]\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)\r\n 141 # Singleton first to spare some computation\r\n 142 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):\r\n--> 143 return function(data_struct)\r\n 144 \r\n 145 # Reduce logging to keep things readable in multiprocessing with tqdm\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath)\r\n 119 \r\n 120 def _extract(self, urlpath):\r\n--> 121 protocol = self._get_extraction_protocol(urlpath)\r\n 122 if protocol is None:\r\n 123 # no extraction\r\n\r\n~/miniconda3/envs/hf/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(self, urlpath)\r\n 137 elif path.endswith(\".zip\"):\r\n 138 return \"zip\"\r\n--> 139 raise NotImplementedError(f\"Extraction protocol for file at {urlpath} is not implemented yet\")\r\n 140 \r\n 141 def download_and_extract(self, url_or_urls):\r\n\r\nNotImplementedError: Extraction protocol for file at https://the-eye.eu/public/AI/pile/train/00.jsonl.zst is not implemented yet\r\n```\r\n\r\ni'm not sure whether @Shashi456 is referring to a fundamental limitation with \"streaming\" zstandard compression files or simply that we need to support the protocol in the streaming api of `datasets`\r\n\r\n",
"@lewtun our streaming mode patches the Python `open` function. I could have a look tomorrow if it is easily implementable for this case.",
"@lewtun, I have tested and yes, it is easily implementable. I've created a draft Pull Request with an implementation proposal: #2786.",
"thanks a lot @albertvillanova - now i can stream the pile :)"
] | 1,625,170,954,000 | 1,628,693,184,000 | 1,625,482,227,000 | MEMBER | null | Close #2572.
cc: @thomwolf | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2578/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2578/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2578",
"html_url": "https://github.com/huggingface/datasets/pull/2578",
"diff_url": "https://github.com/huggingface/datasets/pull/2578.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2578.patch",
"merged_at": 1625482227000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2576/comments | https://api.github.com/repos/huggingface/datasets/issues/2576/events | https://github.com/huggingface/datasets/pull/2576 | 934,986,761 | MDExOlB1bGxSZXF1ZXN0NjgxOTc5MTA1 | 2,576 | Add mC4 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,154,685,000 | 1,625,237,456,000 | 1,625,237,455,000 | MEMBER | null | AllenAI is now hosting the processed C4 and mC4 dataset in this repo: https://huggingface.co/datasets/allenai/c4
Thanks a lot to them!
In this PR I added the mC4 dataset builder. It supports 108 languages.
You can load it with
```python
from datasets import load_dataset
en_mc4 = load_dataset("mc4", "en")
fr_mc4 = load_dataset("mc4", "fr")
en_and_fr_mc4 = load_dataset("mc4", languages=["en", "fr"])
```
It also supports streaming, if you don't want to download hundreds of GB of data:
```python
en_mc4 = load_dataset("mc4", "en", streaming=True)
```
Regarding the dataset_infos.json, I will add them once I have them.
Also we can work on the dataset card that will be at https://huggingface.co/datasets/mc4
For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2576/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2576/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2576",
"html_url": "https://github.com/huggingface/datasets/pull/2576",
"diff_url": "https://github.com/huggingface/datasets/pull/2576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2576.patch",
"merged_at": 1625237455000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2575/comments | https://api.github.com/repos/huggingface/datasets/issues/2575/events | https://github.com/huggingface/datasets/pull/2575 | 934,876,496 | MDExOlB1bGxSZXF1ZXN0NjgxODg0OTgy | 2,575 | Add C4 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,147,888,000 | 1,625,237,423,000 | 1,625,237,423,000 | MEMBER | null | The old code for the C4 dataset was to generate the C4 with Apache Beam, as in Tensorflow Datasets.
However, AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4
Thanks a lot to them for their amazing work!
In this PR I changed the script to download and prepare the data directly from this repo.
It has 4 variants: en, en.noblocklist, en.noclean, realnewslike
You can load it with
```python
from datasets import load_dataset
c4 = load_dataset("c4", "en")
```
It also supports streaming, if you don't want to download hundreds of GB of data:
```python
c4 = load_dataset("c4", "en", streaming=True)
```
Regarding the dataset_infos.json, I haven't added the infos for en.noclean. I will add them once I have them.
Also we can work on the dataset card at https://huggingface.co/datasets/c4
For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2575/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2575/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2575",
"html_url": "https://github.com/huggingface/datasets/pull/2575",
"diff_url": "https://github.com/huggingface/datasets/pull/2575.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2575.patch",
"merged_at": 1625237423000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2574/comments | https://api.github.com/repos/huggingface/datasets/issues/2574/events | https://github.com/huggingface/datasets/pull/2574 | 934,632,378 | MDExOlB1bGxSZXF1ZXN0NjgxNjczMzYy | 2,574 | Add streaming in load a dataset docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,131,973,000 | 1,625,148,742,000 | 1,625,148,741,000 | MEMBER | null | Mention dataset streaming on the "loading a dataset" page of the documentation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2574/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2574/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2574",
"html_url": "https://github.com/huggingface/datasets/pull/2574",
"diff_url": "https://github.com/huggingface/datasets/pull/2574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2574.patch",
"merged_at": 1625148741000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2573/comments | https://api.github.com/repos/huggingface/datasets/issues/2573/events | https://github.com/huggingface/datasets/issues/2573 | 934,584,745 | MDU6SXNzdWU5MzQ1ODQ3NDU= | 2,573 | Finding right block-size with JSON loading difficult for user | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user"
] | 1,625,129,315,000 | 1,625,166,653,000 | null | MEMBER | null | As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2573/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2572/comments | https://api.github.com/repos/huggingface/datasets/issues/2572/events | https://github.com/huggingface/datasets/issues/2572 | 934,573,767 | MDU6SXNzdWU5MzQ1NzM3Njc= | 2,572 | Support Zstandard compressed files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,625,128,624,000 | 1,625,482,227,000 | 1,625,482,227,000 | MEMBER | null | Add support for Zstandard compressed files: https://facebook.github.io/zstd/ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2572/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2572/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2571/comments | https://api.github.com/repos/huggingface/datasets/issues/2571/events | https://github.com/huggingface/datasets/pull/2571 | 933,791,018 | MDExOlB1bGxSZXF1ZXN0NjgwOTQ2NzQ1 | 2,571 | Filter expected warning log from transformers | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the failing test has nothing to do with my PR..."
] | 1,625,064,499,000 | 1,625,198,897,000 | 1,625,198,897,000 | MEMBER | null | Close #2569. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2571/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2571/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2571",
"html_url": "https://github.com/huggingface/datasets/pull/2571",
"diff_url": "https://github.com/huggingface/datasets/pull/2571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2571.patch",
"merged_at": 1625198896000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2570/comments | https://api.github.com/repos/huggingface/datasets/issues/2570/events | https://github.com/huggingface/datasets/pull/2570 | 933,402,521 | MDExOlB1bGxSZXF1ZXN0NjgwNjEzNzc0 | 2,570 | Minor fix docs format for bertscore | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,625,038,932,000 | 1,625,067,061,000 | 1,625,067,061,000 | MEMBER | null | Minor fix docs format for bertscore:
- link to README
- format of KWARGS_DESCRIPTION | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2570/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2570/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2570",
"html_url": "https://github.com/huggingface/datasets/pull/2570",
"diff_url": "https://github.com/huggingface/datasets/pull/2570.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2570.patch",
"merged_at": 1625067061000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2569/comments | https://api.github.com/repos/huggingface/datasets/issues/2569/events | https://github.com/huggingface/datasets/issues/2569 | 933,015,797 | MDU6SXNzdWU5MzMwMTU3OTc= | 2,569 | Weights of model checkpoint not initialized for RobertaModel for Bertscore | {
"login": "suzyahyah",
"id": 2980993,
"node_id": "MDQ6VXNlcjI5ODA5OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/2980993?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suzyahyah",
"html_url": "https://github.com/suzyahyah",
"followers_url": "https://api.github.com/users/suzyahyah/followers",
"following_url": "https://api.github.com/users/suzyahyah/following{/other_user}",
"gists_url": "https://api.github.com/users/suzyahyah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/suzyahyah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/suzyahyah/subscriptions",
"organizations_url": "https://api.github.com/users/suzyahyah/orgs",
"repos_url": "https://api.github.com/users/suzyahyah/repos",
"events_url": "https://api.github.com/users/suzyahyah/events{/privacy}",
"received_events_url": "https://api.github.com/users/suzyahyah/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @suzyahyah, thanks for reporting.\r\n\r\nThe message you get is indeed not an error message, but a warning coming from Hugging Face `transformers`. The complete warning message is:\r\n```\r\nSome weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.layer_norm.weight']\r\n- This IS expected if you are initializing RobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing RobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\n```\r\n\r\nIn this case, this behavior IS expected and you can safely ignore the warning message.\r\n\r\nThe reason is that you are just using RoBERTa to get the contextual embeddings of the input sentences/tokens, thus leaving away its head layer, whose weights are ignored.\r\n\r\nFeel free to reopen this issue if you need further explanations.",
"Hi @suzyahyah, I have created a Pull Request to filter out that warning message in this specific case, since the behavior is as expected and the warning message can only cause confusion for users (as in your case)."
] | 1,624,992,923,000 | 1,625,123,339,000 | 1,625,038,549,000 | NONE | null | When applying bertscore out of the box,
```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']```
Following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html
```
from datasets import load_metric
metric = load_metric('bertscore')
# Example of typical usage
for batch in dataset:
inputs, references = batch
predictions = model(inputs)
metric.add_batch(predictions=predictions, references=references)
score = metric.compute(lang="en")
#score = metric.compute(model_type="roberta-large") # gives the same error
```
I am concerned about this because my usage shouldn't require any further fine-tuning, and most people would expect to use BERTScore out of the box. I realise the Hugging Face code is a wrapper around https://github.com/Tiiiger/bert_score, but that repo relies on the model code and weights from the Hugging Face hub anyway...
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27
- Python version: 3.9.5
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2569/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2569/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2568/comments | https://api.github.com/repos/huggingface/datasets/issues/2568/events | https://github.com/huggingface/datasets/pull/2568 | 932,934,795 | MDExOlB1bGxSZXF1ZXN0NjgwMjE5MDU2 | 2,568 | Add interleave_datasets for map-style datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,987,164,000 | 1,625,132,014,000 | 1,625,132,013,000 | MEMBER | null | ### Add interleave_datasets for map-style datasets
Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`.
Previously, it only supported iterable datasets (i.e. `IterableDataset` objects).
### Implementation details
It works by concatenating the datasets and then re-ordering the indices to build the new dataset.
### TODO
- [x] tests
- [x] docs
Close #2563 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2568/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2568/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2568",
"html_url": "https://github.com/huggingface/datasets/pull/2568",
"diff_url": "https://github.com/huggingface/datasets/pull/2568.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2568.patch",
"merged_at": 1625132012000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2567/comments | https://api.github.com/repos/huggingface/datasets/issues/2567/events | https://github.com/huggingface/datasets/pull/2567 | 932,933,536 | MDExOlB1bGxSZXF1ZXN0NjgwMjE3OTY3 | 2,567 | Add ASR task and new languages to resources | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,987,081,000 | 1,625,132,543,000 | 1,625,132,529,000 | MEMBER | null | This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`.
Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2567/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2567/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2567",
"html_url": "https://github.com/huggingface/datasets/pull/2567",
"diff_url": "https://github.com/huggingface/datasets/pull/2567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2567.patch",
"merged_at": 1625132529000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2566/comments | https://api.github.com/repos/huggingface/datasets/issues/2566/events | https://github.com/huggingface/datasets/pull/2566 | 932,804,725 | MDExOlB1bGxSZXF1ZXN0NjgwMTA2NzM0 | 2,566 | fix Dataset.map when num_procs > num rows | {
"login": "connor-mccarthy",
"id": 55268212,
"node_id": "MDQ6VXNlcjU1MjY4MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-mccarthy",
"html_url": "https://github.com/connor-mccarthy",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions",
"organizations_url": "https://api.github.com/users/connor-mccarthy/orgs",
"repos_url": "https://api.github.com/users/connor-mccarthy/repos",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-mccarthy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,979,227,000 | 1,625,130,673,000 | 1,625,130,673,000 | CONTRIBUTOR | null | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2566/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2566/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2566",
"html_url": "https://github.com/huggingface/datasets/pull/2566",
"diff_url": "https://github.com/huggingface/datasets/pull/2566.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2566.patch",
"merged_at": 1625130673000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2565/comments | https://api.github.com/repos/huggingface/datasets/issues/2565/events | https://github.com/huggingface/datasets/pull/2565 | 932,445,439 | MDExOlB1bGxSZXF1ZXN0Njc5Nzg3NTI4 | 2,565 | Inject templates for ASR datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Wait until #2567 is merged so we can benefit from the tagger :)",
"thanks for the feedback @lhoestq! i've added the new language codes and this PR should be ready for a merge :)"
] | 1,624,960,921,000 | 1,625,495,186,000 | 1,625,495,186,000 | MEMBER | null | This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them.
I also fixed a bunch of the tags in the READMEs 😎 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2565/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2565/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2565",
"html_url": "https://github.com/huggingface/datasets/pull/2565",
"diff_url": "https://github.com/huggingface/datasets/pull/2565.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2565.patch",
"merged_at": 1625495186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2564/comments | https://api.github.com/repos/huggingface/datasets/issues/2564/events | https://github.com/huggingface/datasets/issues/2564 | 932,389,639 | MDU6SXNzdWU5MzIzODk2Mzk= | 2,564 | concatenate_datasets for iterable datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,624,957,181,000 | 1,624,957,181,000 | null | MEMBER | null | Currently `concatenate_datasets` only works for map-style `Dataset`.
It would be nice to have it work for `IterableDataset` objects as well.
It would simply chain the iterables of the iterable datasets (a rough sketch follows below). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2564/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2564/timeline | null | null | null | false |
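A rough illustration of the chaining idea described in issue 2564 above; this is only a sketch of the concept, not the library's implementation, and the helper name is made up:
```python
from itertools import chain

def concatenate_iterable_examples(*iterables):
    # yield every example of the first iterable, then the second, and so on
    yield from chain(*iterables)

merged = list(
    concatenate_iterable_examples(
        ({"x": i} for i in range(3)),
        ({"x": i} for i in range(3, 5)),
    )
)
print(merged)  # [{'x': 0}, {'x': 1}, {'x': 2}, {'x': 3}, {'x': 4}]
```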
https://api.github.com/repos/huggingface/datasets/issues/2563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2563/comments | https://api.github.com/repos/huggingface/datasets/issues/2563/events | https://github.com/huggingface/datasets/issues/2563 | 932,387,639 | MDU6SXNzdWU5MzIzODc2Mzk= | 2,563 | interleave_datasets for map-style datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,624,957,044,000 | 1,625,132,013,000 | 1,625,132,013,000 | MEMBER | null | Currently the `interleave_datasets` function only works for `IterableDataset`.
Let's make it work for map-style `Dataset` objects as well.
It would work the same way: either alternate between the datasets in order, or pick from them randomly given probabilities specified by the user (a rough sketch follows below). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2563/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2563/timeline | null | null | null | false |
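A rough illustration of the interleaving idea described in issue 2563 above; this is a concept sketch rather than the library's implementation, and the function name and stopping rule (stop when one dataset is exhausted) are assumptions:
```python
import random

def interleave_order(lengths, probabilities=None, seed=0):
    # decide, for each output position, which dataset the next example comes from
    rng = random.Random(seed)
    cursors = [0] * len(lengths)
    order = []
    while True:
        if probabilities is None:
            source = len(order) % len(lengths)  # alternate in order
        else:
            source = rng.choices(range(len(lengths)), weights=probabilities)[0]
        if cursors[source] >= lengths[source]:  # one dataset is exhausted
            return order
        order.append((source, cursors[source]))
        cursors[source] += 1

print(interleave_order([3, 2]))  # [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2)]
```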
https://api.github.com/repos/huggingface/datasets/issues/2562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2562/comments | https://api.github.com/repos/huggingface/datasets/issues/2562/events | https://github.com/huggingface/datasets/pull/2562 | 932,333,436 | MDExOlB1bGxSZXF1ZXN0Njc5NjkyMjQ2 | 2,562 | Minor fix in loading metrics docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,953,311,000 | 1,624,987,282,000 | 1,624,987,282,000 | MEMBER | null | Make some minor fixes in "Loading metrics" docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2562/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2562/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2562",
"html_url": "https://github.com/huggingface/datasets/pull/2562",
"diff_url": "https://github.com/huggingface/datasets/pull/2562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2562.patch",
"merged_at": 1624987282000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2561/comments | https://api.github.com/repos/huggingface/datasets/issues/2561/events | https://github.com/huggingface/datasets/issues/2561 | 932,321,725 | MDU6SXNzdWU5MzIzMjE3MjU= | 2,561 | Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True` | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! I just tried to reproduce what you said:\r\n- create a local builder class\r\n- use `load_dataset`\r\n- update the builder class code\r\n- use `load_dataset` again (with or without `ignore_verifications=True`)\r\nAnd it creates a new cache, as expected.\r\n\r\nWhat modifications did you do to your builder's code ?",
"Hi @lhoestq. Thanks for your reply. I just did minor modifications for which it should not regenerate cache (for e.g. Adding a print statement). Overall, regardless of cache miss, there should be an explicit option to allow reuse of existing cache if author knows cache shouldn't be affected.",
"The cache is based on the hash of the dataset builder's code, so changing the code makes it recompute the cache.\r\n\r\nYou could still rename the cache directory of your previous computation to the new expected cache directory if you want to avoid having to recompute it and if you're sure that it would generate the exact same result.\r\n\r\nThe verifications are data integrity verifications: it checks the checksums of the downloaded files, as well as the size of the generated splits.",
"Hi @apsdehal,\r\n\r\nIf you decide to follow @lhoestq's suggestion to rename the cache directory of your previous computation to the new expected cache directory, you can do the following to get the name of the new expected cache directory once #2500 is merged:\r\n```python\r\nfrom datasets import load_dataset_builder\r\ndataset_builder = load_dataset_builder(\"path/to/your/dataset\")\r\nprint(dataset_builder.cache_dir)\r\n```\r\n\r\nThis way, you don't have to recompute the hash of the dataset script yourself each time you modify the script."
] | 1,624,952,583,000 | 1,625,057,724,000 | null | NONE | null | ## Describe the bug
If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.
## Steps to reproduce the bug
- Create a local dataset builder class
- Load the local builder class file using `load_dataset` and let the cache build
- Update the file's content
- The cache is rebuilt (a minimal sketch of these steps follows).
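A minimal sketch of these steps, assuming a local builder script; the path is a placeholder:
```python
from datasets import load_dataset

ds = load_dataset("./my_dataset.py")  # first call: the cache is created
# ... edit my_dataset.py (e.g. add a print statement) ...
ds = load_dataset("./my_dataset.py", ignore_verifications=True)  # the cache is regenerated anyway
```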
## Expected results
With `ignore_verifications=True`, `load_dataset` should pick up existing cache.
## Actual results
Creates new cache.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2561/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2561/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2560/comments | https://api.github.com/repos/huggingface/datasets/issues/2560/events | https://github.com/huggingface/datasets/pull/2560 | 932,143,634 | MDExOlB1bGxSZXF1ZXN0Njc5NTMyODk4 | 2,560 | fix Dataset.map when num_procs > num rows | {
"login": "connor-mccarthy",
"id": 55268212,
"node_id": "MDQ6VXNlcjU1MjY4MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-mccarthy",
"html_url": "https://github.com/connor-mccarthy",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions",
"organizations_url": "https://api.github.com/users/connor-mccarthy/orgs",
"repos_url": "https://api.github.com/users/connor-mccarthy/repos",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-mccarthy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably just a file that isn't properly closed",
"CI is all green now ! Thanks :)\r\n\r\nThere are still many code formatting changes in your PR - probably due to the first commit you did.\r\nTo avoid conflicts with future PRs it would be nice to only have the changes related to the `num_proc` warning, and not have all those code formatting changes,\r\n\r\nCould you try remove those code formatting changes ?\r\n\r\nIf it's easier for you, you can make a new branch from `master` if needed",
"Thanks, @lhoestq! Apologies for the half-baked commits yesterday! I wasn’t able to step back in to resolve those CI issues until this morning.\r\n\r\nAlso, I’m surprised that `make style` isn’t resolving the formatting changes. I’m a bit stumped on that, so I’m going to re-apply on a new branch and open a PR as you suggested."
] | 1,624,933,451,000 | 1,624,978,818,000 | 1,624,978,411,000 | CONTRIBUTOR | null | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2560/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2560/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2560",
"html_url": "https://github.com/huggingface/datasets/pull/2560",
"diff_url": "https://github.com/huggingface/datasets/pull/2560.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2560.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2559/comments | https://api.github.com/repos/huggingface/datasets/issues/2559/events | https://github.com/huggingface/datasets/issues/2559 | 931,849,724 | MDU6SXNzdWU5MzE4NDk3MjQ= | 2,559 | Memory usage consistently increases when processing a dataset with `.map` | {
"login": "apsdehal",
"id": 3616806,
"node_id": "MDQ6VXNlcjM2MTY4MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3616806?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apsdehal",
"html_url": "https://github.com/apsdehal",
"followers_url": "https://api.github.com/users/apsdehal/followers",
"following_url": "https://api.github.com/users/apsdehal/following{/other_user}",
"gists_url": "https://api.github.com/users/apsdehal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apsdehal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apsdehal/subscriptions",
"organizations_url": "https://api.github.com/users/apsdehal/orgs",
"repos_url": "https://api.github.com/users/apsdehal/repos",
"events_url": "https://api.github.com/users/apsdehal/events{/privacy}",
"received_events_url": "https://api.github.com/users/apsdehal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Can you share the function you pass to `map` ?\r\nI know you mentioned it would be hard to share some code but this would really help to understand what happened"
] | 1,624,905,118,000 | 1,624,956,180,000 | null | NONE | null | ## Describe the bug
I have an HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease the arrow writer's batch size, but that doesn't seem to help.
## Steps to reproduce the bug
Providing the code as-is would be hard. I can provide an MVP if that helps.
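A hypothetical sketch of the kind of pipeline described above (the real code is not shared here); the builder path and column names are assumptions:
```python
from datasets import load_dataset

ds = load_dataset("./my_image_paths_builder.py", split="train")  # placeholder local builder

def read_image_bytes(example):
    # read the file pointed to by the (assumed) "image_path" column
    with open(example["image_path"], "rb") as f:
        example["image_bytes"] = f.read()
    return example

ds = ds.map(read_image_bytes, num_proc=80)  # memory usage grows steadily while this runs
```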
## Expected results
Memory usage should become consistent after some time following the launch of processing.
## Actual results
Memory usage keeps on increasing.
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyArrow version: 3.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2559/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2559/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2558/comments | https://api.github.com/repos/huggingface/datasets/issues/2558/events | https://github.com/huggingface/datasets/pull/2558 | 931,736,647 | MDExOlB1bGxSZXF1ZXN0Njc5MTg0Njk1 | 2,558 | Update: WebNLG - update checksums | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,896,997,000 | 1,624,900,997,000 | 1,624,900,996,000 | MEMBER | null | The master branch changed so I computed the new checksums.
I also pinned a specific revision so that it doesn't happen again in the future.
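For illustration, pinning means pointing the download URL at a fixed revision instead of the moving `master` branch; the commit hash below is a placeholder, not the revision that was actually pinned:
```python
_REVISION = "0123456789abcdef0123456789abcdef01234567"  # placeholder commit hash
_URL = f"https://gitlab.com/shimorina/webnlg-dataset/-/archive/{_REVISION}/webnlg-dataset-{_REVISION}.zip"
```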
Fix https://github.com/huggingface/datasets/issues/2553 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2558/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2558/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2558",
"html_url": "https://github.com/huggingface/datasets/pull/2558",
"diff_url": "https://github.com/huggingface/datasets/pull/2558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2558.patch",
"merged_at": 1624900996000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2557/comments | https://api.github.com/repos/huggingface/datasets/issues/2557/events | https://github.com/huggingface/datasets/pull/2557 | 931,633,823 | MDExOlB1bGxSZXF1ZXN0Njc5MDk4ODg3 | 2,557 | Fix `fever` keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,890,422,000 | 1,624,896,690,000 | 1,624,896,689,000 | MEMBER | null | The keys had duplicates since they were reset to 0 after each file.
I fixed it by taking into account the file index as well. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2557/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2557/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2557",
"html_url": "https://github.com/huggingface/datasets/pull/2557",
"diff_url": "https://github.com/huggingface/datasets/pull/2557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2557.patch",
"merged_at": 1624896689000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2556/comments | https://api.github.com/repos/huggingface/datasets/issues/2556/events | https://github.com/huggingface/datasets/issues/2556 | 931,595,872 | MDU6SXNzdWU5MzE1OTU4NzI= | 2,556 | Better DuplicateKeysError error to help the user debug the issue | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,624,888,257,000 | 1,624,888,257,000 | null | MEMBER | null | As mentioned in https://github.com/huggingface/datasets/issues/2552 it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys.
The current one is
```python
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```
and we could have something that guides the user in debugging the issue:
```python
DuplicateKeysError: both 42th and 1337th examples have the same keys `48`.
Please fix the dataset script at <path/to/the/dataset/script>
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2556/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2556/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2555/comments | https://api.github.com/repos/huggingface/datasets/issues/2555/events | https://github.com/huggingface/datasets/pull/2555 | 931,585,485 | MDExOlB1bGxSZXF1ZXN0Njc5MDU4ODM3 | 2,555 | Fix code_search_net keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fix #2552."
] | 1,624,887,623,000 | 1,630,571,083,000 | 1,624,889,435,000 | MEMBER | null | There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552
I fixed the keys (it was an addition of the file and row indices, which was causing collisions)
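A small illustration of the collision and of a collision-free alternative; this is not the actual dataset script, just the idea:
```python
for file_idx, rows in enumerate([["a", "b"], ["c", "d"]]):
    for row_idx, row in enumerate(rows):
        colliding_key = file_idx + row_idx    # file 0 / row 1 and file 1 / row 0 both give 1
        unique_key = f"{file_idx}_{row_idx}"  # unique and deterministic
        print(colliding_key, unique_key, row)
```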
Fix #2552. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2555/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2555/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2555",
"html_url": "https://github.com/huggingface/datasets/pull/2555",
"diff_url": "https://github.com/huggingface/datasets/pull/2555.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2555.patch",
"merged_at": 1624889435000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2554/comments | https://api.github.com/repos/huggingface/datasets/issues/2554/events | https://github.com/huggingface/datasets/issues/2554 | 931,453,855 | MDU6SXNzdWU5MzE0NTM4NTU= | 2,554 | Multilabel metrics not supported | {
"login": "GuillemGSubies",
"id": 37592763,
"node_id": "MDQ6VXNlcjM3NTkyNzYz",
"avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GuillemGSubies",
"html_url": "https://github.com/GuillemGSubies",
"followers_url": "https://api.github.com/users/GuillemGSubies/followers",
"following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}",
"gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions",
"organizations_url": "https://api.github.com/users/GuillemGSubies/orgs",
"repos_url": "https://api.github.com/users/GuillemGSubies/repos",
"events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}",
"received_events_url": "https://api.github.com/users/GuillemGSubies/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @GuillemGSubies, thanks for reporting.\r\n\r\nI have made a PR to fix this issue and allow metrics to be computed also for multilabel classification problems.",
"Looks nice, thank you very much! 🚀 ",
"Sorry for reopening but I just noticed that the `_compute` method for the F1 metric is still not good enough for multilabel problems:\r\n\r\nhttps://github.com/huggingface/datasets/blob/92a3ee549705aa0a107c9fa5caf463b3b3da2616/metrics/f1/f1.py#L115\r\n\r\nSomehow we should be able to change the parameter `average` at least",
"@GuillemGSubies, the parameter `average` passed to `_compute` is then passed to `f1_score`. This is right."
] | 1,624,878,586,000 | 1,634,128,153,000 | 1,625,733,615,000 | NONE | null | When I try to use a metric like F1 macro I get the following error:
```
TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
```
There is an explicit casting here:
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274
And it looks like this is because here
https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88
the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequence of ints), it will work:
```python
class F1(datasets.Metric):
def _info(self):
return datasets.MetricInfo(
description=_DESCRIPTION,
citation=_CITATION,
inputs_description=_KWARGS_DESCRIPTION,
features=datasets.Features(
{
"predictions": datasets.Sequence(datasets.Value("int32")),
"references": datasets.Sequence(datasets.Value("int32")),
}
),
reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"],
)
def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None):
return {
"f1": f1_score(
references,
predictions,
labels=labels,
pos_label=pos_label,
average=average,
sample_weight=sample_weight,
),
}
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2554/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2554/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2553/comments | https://api.github.com/repos/huggingface/datasets/issues/2553/events | https://github.com/huggingface/datasets/issues/2553 | 931,365,926 | MDU6SXNzdWU5MzEzNjU5MjY= | 2,553 | load_dataset("web_nlg") NonMatchingChecksumError | {
"login": "alexandrethm",
"id": 33730312,
"node_id": "MDQ6VXNlcjMzNzMwMzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/33730312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexandrethm",
"html_url": "https://github.com/alexandrethm",
"followers_url": "https://api.github.com/users/alexandrethm/followers",
"following_url": "https://api.github.com/users/alexandrethm/following{/other_user}",
"gists_url": "https://api.github.com/users/alexandrethm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexandrethm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexandrethm/subscriptions",
"organizations_url": "https://api.github.com/users/alexandrethm/orgs",
"repos_url": "https://api.github.com/users/alexandrethm/repos",
"events_url": "https://api.github.com/users/alexandrethm/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexandrethm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Thanks for reporting. This is due to the WebNLG repository that got updated today.\r\nI just pushed a fix at #2558 - this shouldn't happen anymore in the future.",
"This is fixed on `master` now :)\r\nWe'll do a new release soon !"
] | 1,624,872,406,000 | 1,624,901,019,000 | 1,624,900,996,000 | NONE | null | Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```
Gives
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip']
```
## Environment info
- `datasets` version: 1.8.0
- Platform: macOS-11.3.1-x86_64-i386-64bit
- Python version: 3.9.4
- PyArrow version: 3.0.0
Also tested on Linux, with python 3.6.8 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2553/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2553/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2552/comments | https://api.github.com/repos/huggingface/datasets/issues/2552/events | https://github.com/huggingface/datasets/issues/2552 | 931,354,687 | MDU6SXNzdWU5MzEzNTQ2ODc= | 2,552 | Keys should be unique error on code_search_net | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions",
"organizations_url": "https://api.github.com/users/thomwolf/orgs",
"repos_url": "https://api.github.com/users/thomwolf/repos",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomwolf/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Two questions:\r\n- with `datasets-cli env` we don't have any information on the dataset script version used. Should we give access to this somehow? Either as a note in the Error message or as an argument with the name of the dataset to `datasets-cli env`?\r\n- I don't really understand why the id is duplicated in the code of `code_search_net`, how can I debug this actually?",
"Thanks for reporting. There was indeed an issue with the keys. The key was the addition of the file id and row id, which resulted in collisions. I just opened a PR to fix this at https://github.com/huggingface/datasets/pull/2555\r\n\r\nTo help users debug this kind of errors we could try to show a message like this\r\n```python\r\nDuplicateKeysError: both 42th and 1337th examples have the same keys `48`.\r\nPlease fix the dataset script at <path/to/the/dataset/script>\r\n```\r\n\r\nThis way users who what to look for if they want to debug this issue. I opened an issue to track this: https://github.com/huggingface/datasets/issues/2556",
"and are we sure there are not a lot of datasets which are now broken with this change?",
"Thanks to the dummy data, we know for sure that most of them work as expected.\r\n`code_search_net` wasn't caught because the dummy data only have one dummy data file while the dataset script can actually load several of them using `os.listdir`. Let me take a look at all the other datasets that use `os.listdir` to see if the keys are alright",
"I found one issue on `fever` (PR here: https://github.com/huggingface/datasets/pull/2557)\r\nAll the other ones seem fine :)",
"Hi! Got same error when loading other dataset:\r\n```python3\r\nload_dataset('wikicorpus', 'raw_en')\r\n```\r\n\r\ntb:\r\n```pytb\r\n---------------------------------------------------------------------------\r\nDuplicatedKeysError Traceback (most recent call last)\r\n/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)\r\n 1109 example = self.info.features.encode_example(record)\r\n-> 1110 writer.write(example, key)\r\n 1111 finally:\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size)\r\n 341 if self._check_duplicates:\r\n--> 342 self.check_duplicate_keys()\r\n 343 # Re-intializing to empty list for next batch\r\n\r\n/opt/conda/lib/python3.8/site-packages/datasets/arrow_writer.py in check_duplicate_keys(self)\r\n 352 if hash in tmp_record:\r\n--> 353 raise DuplicatedKeysError(key)\r\n 354 else:\r\n\r\nDuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 519\r\nKeys should be unique and deterministic in nature\r\n```\r\n\r\nVersion: datasets==1.11.0",
"Fixed by #2555.",
"The wikicorpus issue has been fixed by https://github.com/huggingface/datasets/pull/2844\r\n\r\nWe'll do a new release of `datasets` soon :)"
] | 1,624,871,720,000 | 1,630,937,310,000 | 1,630,571,129,000 | MEMBER | null | ## Describe the bug
Loading `code_search_net` does not seem to be possible at the moment.
## Steps to reproduce the bug
```python
>>> load_dataset('code_search_net')
Downloading: 8.50kB [00:00, 3.09MB/s]
Downloading: 19.1kB [00:00, 10.1MB/s]
No config specified, defaulting to: code_search_net/all
Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a...
Traceback (most recent call last):
File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split
writer.write(example, key)
File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write
self.check_duplicate_keys()
File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```
## Environment info
- `datasets` version: 1.8.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: 2.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2552/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2552/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2551/comments | https://api.github.com/repos/huggingface/datasets/issues/2551/events | https://github.com/huggingface/datasets/pull/2551 | 930,967,978 | MDExOlB1bGxSZXF1ZXN0Njc4NTQzMjg1 | 2,551 | Fix FileSystems documentation | {
"login": "connor-mccarthy",
"id": 55268212,
"node_id": "MDQ6VXNlcjU1MjY4MjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/55268212?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/connor-mccarthy",
"html_url": "https://github.com/connor-mccarthy",
"followers_url": "https://api.github.com/users/connor-mccarthy/followers",
"following_url": "https://api.github.com/users/connor-mccarthy/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-mccarthy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/connor-mccarthy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-mccarthy/subscriptions",
"organizations_url": "https://api.github.com/users/connor-mccarthy/orgs",
"repos_url": "https://api.github.com/users/connor-mccarthy/repos",
"events_url": "https://api.github.com/users/connor-mccarthy/events{/privacy}",
"received_events_url": "https://api.github.com/users/connor-mccarthy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,810,722,000 | 1,624,885,795,000 | 1,624,885,794,000 | CONTRIBUTOR | null | ### What this fixes:
This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)).
### What were the issues?
When I originally tried implementing the code examples, I faced several bugs attributed to:
- out of date [botocore](https://github.com/boto/botocore) call signatures
- capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place)
- call signature errors for the `S3FileSystem` class constructor (uses parameter `sessions` instead of `session` in some places) (see [`s3fs`](https://s3fs.readthedocs.io/en/latest/api.html#s3fs.core.S3FileSystem) for where this constructor signature is defined)
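For reference, a sketch of the corrected usage; the parameter names `anon`, `key`, `secret` and `session` come from `s3fs`, and the credential values are placeholders:
```python
import botocore.session
from datasets.filesystems import S3FileSystem

# anonymous access to a public bucket
s3 = S3FileSystem(anon=True)

# authenticated access with explicit credentials
s3 = S3FileSystem(key="<aws_access_key_id>", secret="<aws_secret_access_key>")

# or reuse an existing botocore session via the (singular) `session` parameter
s3 = S3FileSystem(session=botocore.session.get_session())
```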
### Testing/reviewing notes
Instructions for generating the documentation locally: [here](https://github.com/huggingface/datasets/tree/master/docs#generating-the-documentation). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2551/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2551/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2551",
"html_url": "https://github.com/huggingface/datasets/pull/2551",
"diff_url": "https://github.com/huggingface/datasets/pull/2551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2551.patch",
"merged_at": 1624885794000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2550/comments | https://api.github.com/repos/huggingface/datasets/issues/2550/events | https://github.com/huggingface/datasets/issues/2550 | 930,951,287 | MDU6SXNzdWU5MzA5NTEyODc= | 2,550 | Allow for incremental cumulative metric updates in a distributed setup | {
"login": "eladsegal",
"id": 13485709,
"node_id": "MDQ6VXNlcjEzNDg1NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eladsegal",
"html_url": "https://github.com/eladsegal",
"followers_url": "https://api.github.com/users/eladsegal/followers",
"following_url": "https://api.github.com/users/eladsegal/following{/other_user}",
"gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions",
"organizations_url": "https://api.github.com/users/eladsegal/orgs",
"repos_url": "https://api.github.com/users/eladsegal/repos",
"events_url": "https://api.github.com/users/eladsegal/events{/privacy}",
"received_events_url": "https://api.github.com/users/eladsegal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 1,624,806,058,000 | 1,632,663,759,000 | 1,632,663,759,000 | CONTRIBUTOR | null | Currently, using a metric allows for one of the following:
- Per example/batch metrics
- Cumulative metrics over the whole data
What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation.
Since most metrics are just an average of per-example metrics (are there any that aren't?), an efficient calculation can be done as follows:
`((score_cumulative * n_cumulative) + (score_new * n_new)) / (n_cumulative + n_new)`
where `n` and `score` refer to number of examples and metric score, `cumulative` refers to the cumulative metric and `new` refers to the addition of new examples.
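A small sketch of that incremental update, outside the `datasets` API, keeping a running (score, count) pair and folding in each new batch:
```python
class RunningMetric:
    def __init__(self):
        self.score_cumulative = 0.0
        self.n_cumulative = 0

    def update(self, score_new: float, n_new: int) -> float:
        # weighted average of the cumulative score and the new batch score
        total = self.n_cumulative + n_new
        self.score_cumulative = (
            self.score_cumulative * self.n_cumulative + score_new * n_new
        ) / total
        self.n_cumulative = total
        return self.score_cumulative

running = RunningMetric()
running.update(0.5, 10)  # 0.5
running.update(1.0, 10)  # 0.75
```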
If you don't want to add this capability in the library, a simple solution exists so users can do it themselves:
It is easy to implement for a single process setup, but in a distributed one there is no way to get the correct `n_new`.
The solution for this is to return the number of examples that was used to compute the metrics in `.compute()` by adding the following line here:
https://github.com/huggingface/datasets/blob/5a3221785311d0ce86c2785b765e86bd6997d516/src/datasets/metric.py#L402-L403
```
output["number_of_examples"] = len(predictions)
```
and also remove the log message here so it won't spam:
https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/src/datasets/metric.py#L411
If this change is ok with you, I'll open a pull request.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2550/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2550/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2549/comments | https://api.github.com/repos/huggingface/datasets/issues/2549/events | https://github.com/huggingface/datasets/issues/2549 | 929,819,093 | MDU6SXNzdWU5Mjk4MTkwOTM= | 2,549 | Handling unlabeled datasets | {
"login": "nelson-liu",
"id": 7272031,
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nelson-liu",
"html_url": "https://github.com/nelson-liu",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py#L62-L77), you can see how the Features were originally specified. \r\n\r\nFeel free to use it as a template, customize it and pass it to `load_dataset` using the parameter `features`.",
"ah got it, thanks!"
] | 1,624,595,543,000 | 1,624,655,277,000 | 1,624,655,276,000 | NONE | null | Hi!
Is there a way for datasets to produce unlabeled instances (e.g., can the `ClassLabel` be nullable)?
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error:
```
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset
use_auth_token=use_auth_token,
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example
return encode_nested_example(self, example)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example
return schema.encode_example(obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example
if not -1 <= example_data < self.num_classes:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'
```
What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2549/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2549/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2548/comments | https://api.github.com/repos/huggingface/datasets/issues/2548/events | https://github.com/huggingface/datasets/issues/2548 | 929,232,831 | MDU6SXNzdWU5MjkyMzI4MzE= | 2,548 | Field order issue in loading json | {
"login": "luyug",
"id": 55288513,
"node_id": "MDQ6VXNlcjU1Mjg4NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/luyug",
"html_url": "https://github.com/luyug",
"followers_url": "https://api.github.com/users/luyug/followers",
"following_url": "https://api.github.com/users/luyug/following{/other_user}",
"gists_url": "https://api.github.com/users/luyug/gists{/gist_id}",
"starred_url": "https://api.github.com/users/luyug/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/luyug/subscriptions",
"organizations_url": "https://api.github.com/users/luyug/orgs",
"repos_url": "https://api.github.com/users/luyug/repos",
"events_url": "https://api.github.com/users/luyug/events{/privacy}",
"received_events_url": "https://api.github.com/users/luyug/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @luyug, thanks for reporting.\r\n\r\nThe good news is that we fixed this issue only 9 days ago: #2507.\r\n\r\nThe patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.\r\n\r\nFeel free to reopen the issue if the problem persists."
] | 1,624,541,393,000 | 1,624,545,403,000 | 1,624,545,245,000 | NONE | null | ## Describe the bug
The `load_dataset` function expects columns in alphabetical order when loading JSON files.
A similar bug was previously reported for CSV in #623 and fixed in #684.
## Steps to reproduce the bug
For a json file `j.json`,
```
{"c":321, "a": 1, "b": 2}
```
Running the following,
```
import datasets
from datasets import Value

f = datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')})
json_data = datasets.load_dataset('json', data_files='j.json', features=f)
```
## Expected results
A successful load.
## Actual results
```
File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast
ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c']
```
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2548/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2548/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2547/comments | https://api.github.com/repos/huggingface/datasets/issues/2547/events | https://github.com/huggingface/datasets/issues/2547 | 929,192,329 | MDU6SXNzdWU5MjkxOTIzMjk= | 2,547 | Dataset load_from_disk is too slow | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/alexvaca0/followers",
"following_url": "https://api.github.com/users/alexvaca0/following{/other_user}",
"gists_url": "https://api.github.com/users/alexvaca0/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexvaca0/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexvaca0/subscriptions",
"organizations_url": "https://api.github.com/users/alexvaca0/orgs",
"repos_url": "https://api.github.com/users/alexvaca0/repos",
"events_url": "https://api.github.com/users/alexvaca0/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexvaca0/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues with virtual disks (for example on spot instances), for which memory mapping does a pass over the entire file, and this takes a while. We are discussing about this issue here: #2252 \r\n\r\nMemory mapping is something handled by the OS so we can't do much about it, though we're still trying to figure out what's causing this behavior exactly to see what we can do.",
"Okay, that's exactly my case, with spot instances... Therefore this isn't something we can change in any way to be able to load the dataset faster? I mean, what do you do internally at huggingface for being able to use spot instances with datasets efficiently?",
"There are no solutions yet unfortunately.\r\nWe're still trying to figure out a way to make the loading instantaneous on such disks, I'll keep you posted"
] | 1,624,538,744,000 | 1,624,632,998,000 | null | NONE | null | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk when there are no preprocessing steps; it's only loading it with load_from_disk. I have 96 CPUs, but only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of language model training, so I'm wasting $100 each time I have to load the dataset from disk again (for example, because the spot instance was stopped by AWS and I need to relaunch it).
## Steps to reproduce the bug
Just get the OSCAR dataset in Spanish (around 150 GB) and try to first save it to disk and then load the processed dataset. It's not dependent on the task you're doing; it just depends on the size of the text dataset.
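For concreteness, a minimal sketch of the reported workflow; the OSCAR config name and the paths are assumptions for illustration only:
```python
from datasets import load_dataset, load_from_disk

# Download and prepare once, then persist to local disk (config name assumed).
dataset = load_dataset("oscar", "unshuffled_deduplicated_es", split="train")
dataset.save_to_disk("/mnt/data/oscar_es")

# Later, e.g. after relaunching a spot instance, reload from disk.
# This is the step reported to take 7-8 hours on some virtual disks.
dataset = load_from_disk("/mnt/data/oscar_es")
```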
## Expected results
I expect the dataset to be loaded in a reasonable time, using the whole machine for loading it. If you store the dataset in multiple (.arrow) files and then load it from those files, you can use multiprocessing for that and therefore not waste so much time.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu 18
- Python version: 3.8
I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, which is not the problem for me. You cannot save the pure loading-from-disk time, so that's not a solution for my use case or for anyone who wants to use your library for training a language model. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2547/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2547/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2546/comments | https://api.github.com/repos/huggingface/datasets/issues/2546/events | https://github.com/huggingface/datasets/pull/2546 | 929,091,689 | MDExOlB1bGxSZXF1ZXN0Njc2OTk2MjQ0 | 2,546 | Add license to the Cambridge English Write & Improve + LOCNESS dataset card | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,531,169,000 | 1,624,531,921,000 | 1,624,531,921,000 | MEMBER | null | As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset.
I added it and I also filled a few other empty sections. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2546/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2546/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2546",
"html_url": "https://github.com/huggingface/datasets/pull/2546",
"diff_url": "https://github.com/huggingface/datasets/pull/2546.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2546.patch",
"merged_at": 1624531921000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2545/comments | https://api.github.com/repos/huggingface/datasets/issues/2545/events | https://github.com/huggingface/datasets/pull/2545 | 929,016,580 | MDExOlB1bGxSZXF1ZXN0Njc2OTMxOTYw | 2,545 | Fix DuplicatedKeysError in drop dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,525,839,000 | 1,624,546,628,000 | 1,624,546,628,000 | MEMBER | null | Close #2542.
cc: @VictorSanh. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2545/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2545/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2545",
"html_url": "https://github.com/huggingface/datasets/pull/2545",
"diff_url": "https://github.com/huggingface/datasets/pull/2545.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2545.patch",
"merged_at": 1624546628000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2544/comments | https://api.github.com/repos/huggingface/datasets/issues/2544/events | https://github.com/huggingface/datasets/pull/2544 | 928,900,827 | MDExOlB1bGxSZXF1ZXN0Njc2ODM1MjYz | 2,544 | Fix logging levels | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,516,896,000 | 1,624,628,419,000 | 1,624,628,419,000 | MEMBER | null | Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info.
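Illustratively, the kind of change this implies inside the library is sketched below; this is not the actual diff, and the logger name and message are borrowed from the examples quoted in #2543:
```python
import logging

logger = logging.getLogger("datasets.utils.filelock")  # illustrative logger name
lock_id, lock_path = 139627640431136, "/path/to/file.lock"

# Before: useful for developers, but noisy for end users at the default level.
logger.info("Lock %s acquired on %s", lock_id, lock_path)
# After: demoted to debug, so it only appears when debug logging is explicitly enabled.
logger.debug("Lock %s acquired on %s", lock_id, lock_path)
```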
Close #2543.
cc: @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2544/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2544/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2544",
"html_url": "https://github.com/huggingface/datasets/pull/2544",
"diff_url": "https://github.com/huggingface/datasets/pull/2544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2544.patch",
"merged_at": 1624628419000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2543/comments | https://api.github.com/repos/huggingface/datasets/issues/2543/events | https://github.com/huggingface/datasets/issues/2543 | 928,571,915 | MDU6SXNzdWU5Mjg1NzE5MTU= | 2,543 | switching some low-level log.info's to log.debug? | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @stas00, thanks for pointing out this issue with logging.\r\n\r\nI agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code."
] | 1,624,476,415,000 | 1,624,628,419,000 | 1,624,628,419,000 | CONTRIBUTOR | null | In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components.
The trouble is that now we get a ton of these:
```
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.
06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
```
May I suggest that these be `log.debug`, as they're not informative to the user?
More examples: these are not informative - too much information:
```
06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports.
06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
```
While these are:
```
06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```
I also realize that `transformers` examples don't have to use `info` for `datasets`; keeping the default `warning` level makes the logging less noisy.
But I think the log levels are currently slightly misused and skewed by one level. Many `warning`s would be better as `info`s and most `info`s as `debug`.
e.g.:
```
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```
Why is this a warning? It is informing me that the cache is used; there is nothing to be worried about. I'd have it as `info`.
Warnings are typically something bordering on an error, or the first thing to check when things don't work as expected.
Infrequent `info` messages are there to inform about the different stages or important events.
Everything else is debug.
At least that's the way I understand things.
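For illustration, a minimal sketch of driving both libraries from a single level, in the spirit of the linked `transformers` PR; the exact wiring used in the example scripts is not shown here:
```python
import logging

from datasets.utils import logging as datasets_logging
from transformers.utils import logging as transformers_logging

# One knob for all involved components; standard logging level values are assumed.
log_level = logging.WARNING
datasets_logging.set_verbosity(log_level)
transformers_logging.set_verbosity(log_level)
```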
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2543/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2543/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2542/comments | https://api.github.com/repos/huggingface/datasets/issues/2542/events | https://github.com/huggingface/datasets/issues/2542 | 928,540,382 | MDU6SXNzdWU5Mjg1NDAzODI= | 2,542 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"very much related: https://github.com/huggingface/datasets/pull/2333",
"Hi @VictorSanh, thank you for reporting this issue with duplicated keys.\r\n\r\n- The issue with \"adversarial_qa\" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.\r\n- I am investigating the issue with `drop`. I'll ping you to keep you informed.",
"Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0.",
"thank you!"
] | 1,624,473,676,000 | 1,624,657,805,000 | 1,624,546,628,000 | MEMBER | null | ## Describe the bug
Failure to generate the datasets (`drop` and the `adversarialQA` subset of `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```
## Expected results
The example keys should be unique.
## Actual results
```bash
>>> load_dataset("drop")
Using custom data configuration default
Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize
self.check_duplicate_keys()
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398
Keys should be unique and deterministic in nature
```
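For context on the error, a minimal sketch (not the actual `drop` script) of the pattern this check expects from a dataset script's `_generate_examples`: each yielded key must be unique and deterministic, e.g. a running index rather than an ID field that can repeat:
```python
def _generate_examples(records):
    # Sketch only: enumerate() yields a key that is unique and deterministic
    # across runs, which is what the duplicate-key check enforces.
    for idx, record in enumerate(records):
        yield idx, {"question": record["question"], "answers": record["answers"]}
```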
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2542/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2542/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2541/comments | https://api.github.com/repos/huggingface/datasets/issues/2541/events | https://github.com/huggingface/datasets/pull/2541 | 928,529,078 | MDExOlB1bGxSZXF1ZXN0Njc2NTIwNDgx | 2,541 | update discofuse link cc @ekQ | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The CI is failing because the dataset tags for `discofuse` are missing. I'm merging this PR since this is unrelated to this PR, but feel free to open another PR to add the tags here if you have some time:\r\n\r\nhttps://github.com/huggingface/datasets/blob/19408f9fab85c79b966085574cd2da3b90959179/datasets/discofuse/README.md#L1-L5\r\n\r\nThe missing tags are:\r\n```\r\n'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'pretty_name', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```\r\nThanks again !"
] | 1,624,472,698,000 | 1,624,890,891,000 | 1,624,890,890,000 | MEMBER | null | Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2541/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2541/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2541",
"html_url": "https://github.com/huggingface/datasets/pull/2541",
"diff_url": "https://github.com/huggingface/datasets/pull/2541.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2541.patch",
"merged_at": 1624890890000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2540/comments | https://api.github.com/repos/huggingface/datasets/issues/2540/events | https://github.com/huggingface/datasets/pull/2540 | 928,433,892 | MDExOlB1bGxSZXF1ZXN0Njc2NDM5NTM1 | 2,540 | Remove task templates if required features are removed during `Dataset.map` | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,465,225,000 | 1,624,545,675,000 | 1,624,541,643,000 | MEMBER | null | This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`:
```python
from datasets import load_dataset
# `yelp_polarity` comes with a `TextClassification` template
ds = load_dataset("yelp_polarity", split="test")
ds
# Dataset({
# features: ['text', 'label'],
# num_rows: 38000
# })
# Triggers KeyError: 'label' - oh noes!
ds.map(lambda x: {"inputs": 0}, remove_columns=ds.column_names)
```
I wrote a unit test to make sure I could reproduce the error and then patched a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2540/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2540/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2540",
"html_url": "https://github.com/huggingface/datasets/pull/2540",
"diff_url": "https://github.com/huggingface/datasets/pull/2540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2540.patch",
"merged_at": 1624541643000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2539/comments | https://api.github.com/repos/huggingface/datasets/issues/2539/events | https://github.com/huggingface/datasets/pull/2539 | 927,952,429 | MDExOlB1bGxSZXF1ZXN0Njc2MDI5MDY5 | 2,539 | remove wi_locness dataset due to licensing issues | {
"login": "aseifert",
"id": 4944799,
"node_id": "MDQ6VXNlcjQ5NDQ3OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aseifert",
"html_url": "https://github.com/aseifert",
"followers_url": "https://api.github.com/users/aseifert/followers",
"following_url": "https://api.github.com/users/aseifert/following{/other_user}",
"gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseifert/subscriptions",
"organizations_url": "https://api.github.com/users/aseifert/orgs",
"repos_url": "https://api.github.com/users/aseifert/repos",
"events_url": "https://api.github.com/users/aseifert/events{/privacy}",
"received_events_url": "https://api.github.com/users/aseifert/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?",
"I think that the main issue is that the licesenses of the data are not made clear in the huggingface hub – other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.\r\nIs it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz)",
"Thanks for the clarification @SimonHFL \r\nYou're completely right, we need to show the licenses.\r\nI just added them here: https://huggingface.co/datasets/wi_locness#licensing-information",
"Hi guys, I'm one of the authors of this dataset. \r\n\r\nTo clarify, we're happy for you to keep the data in the repo on 2 conditions:\r\n1. You don't host the data yourself.\r\n2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license. \r\n\r\nI think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :)",
"Thanks for your message @chrisjbryant :)\r\nI'm closing this PR then.\r\n\r\nAnd thanks for reporting @aseifert"
] | 1,624,433,732,000 | 1,624,632,762,000 | 1,624,632,762,000 | CONTRIBUTOR | null | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2539/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2539/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2539",
"html_url": "https://github.com/huggingface/datasets/pull/2539",
"diff_url": "https://github.com/huggingface/datasets/pull/2539.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2539.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2538/comments | https://api.github.com/repos/huggingface/datasets/issues/2538/events | https://github.com/huggingface/datasets/issues/2538 | 927,940,691 | MDU6SXNzdWU5Mjc5NDA2OTE= | 2,538 | Loading partial dataset when debugging | {
"login": "reachtarunhere",
"id": 9061913,
"node_id": "MDQ6VXNlcjkwNjE5MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9061913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/reachtarunhere",
"html_url": "https://github.com/reachtarunhere",
"followers_url": "https://api.github.com/users/reachtarunhere/followers",
"following_url": "https://api.github.com/users/reachtarunhere/following{/other_user}",
"gists_url": "https://api.github.com/users/reachtarunhere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/reachtarunhere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/reachtarunhere/subscriptions",
"organizations_url": "https://api.github.com/users/reachtarunhere/orgs",
"repos_url": "https://api.github.com/users/reachtarunhere/repos",
"events_url": "https://api.github.com/users/reachtarunhere/events{/privacy}",
"received_events_url": "https://api.github.com/users/reachtarunhere/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[:10]\"`), then it will load the 10 first rows of the train split that you have on disk.\r\n\r\nTherefore, as long as you don't delete your cache, all your calls to `load_dataset` will be very fast. Except the first call that downloads the dataset of course ^^",
"That’s a use case for the new streaming feature, no?",
"Hi @reachtarunhere.\r\n\r\nBesides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely speed up your workflow.\r\n\r\nIf this feature is interesting for you, I can ping you once it will be merged into the master branch.",
"Thanks all for responding.\r\n\r\nHey @albertvillanova \r\n\r\nThanks. Yes, I would be interested.\r\n\r\n@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated downloading so it seems that the cache is working.\r\n\r\nI am not aware of the streaming feature @thomwolf mentioned. So I might need to read up on it.",
"@reshinthadithyan I use the .select function to have a fraction of indices."
] | 1,624,432,792,000 | 1,627,567,833,000 | null | NONE | null | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute load_dataset for the imdb dataset it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues.
Is there a way to only load part of the dataset on load_dataset? This would really speed up my workflow.
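For reference, a small sketch of the two approaches mentioned in the comments above (sliced splits and `Dataset.select`); the dataset name and slice sizes are only illustrative:
```python
from datasets import load_dataset

# Sliced split: returns only the first 100 training examples.
# The full dataset is still downloaded and cached once on the first call.
small_train = load_dataset("imdb", split="train[:100]")

# Alternative from the comments: load the split, then select a fraction of indices.
full_train = load_dataset("imdb", split="train")
subset = full_train.select(range(100))
```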
Something like a debug mode would really help. Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2538/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2538/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2537/comments | https://api.github.com/repos/huggingface/datasets/issues/2537/events | https://github.com/huggingface/datasets/pull/2537 | 927,472,659 | MDExOlB1bGxSZXF1ZXN0Njc1NjI1OTY3 | 2,537 | Add Parquet loader + from_parquet and to_parquet | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.\r\n\r\nAlso I still need to add dummy data to test the parquet builder.",
"I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.\r\n\r\nEverything is ready for review now :)\r\nI reused pretty much the same tests we had for CSV",
"Done !\r\nNow we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to pyarrow>=3.0.0"
] | 1,624,382,903,000 | 1,625,070,663,000 | 1,625,070,658,000 | MEMBER | null | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
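A short usage sketch of the additions described here (file names are illustrative):
```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})

# Round-trip through Parquet with the new methods.
ds.to_parquet("data.parquet")
ds2 = Dataset.from_parquet("data.parquet")

# The new "parquet" builder can also be used through load_dataset.
ds3 = load_dataset("parquet", data_files="data.parquet")
```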
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2537/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2537/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2537",
"html_url": "https://github.com/huggingface/datasets/pull/2537",
"diff_url": "https://github.com/huggingface/datasets/pull/2537.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2537.patch",
"merged_at": 1625070658000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2536/comments | https://api.github.com/repos/huggingface/datasets/issues/2536/events | https://github.com/huggingface/datasets/issues/2536 | 927,338,639 | MDU6SXNzdWU5MjczMzg2Mzk= | 2,536 | Use `Audio` features for `AutomaticSpeechRecognition` task template | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'm just retaking and working on #2324. 😉 "
] | 1,624,374,441,000 | 1,624,375,011,000 | null | MEMBER | null | In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis this is brittle as it doesn't port easily across different OS'.
The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2536/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2536/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2535/comments | https://api.github.com/repos/huggingface/datasets/issues/2535/events | https://github.com/huggingface/datasets/pull/2535 | 927,334,349 | MDExOlB1bGxSZXF1ZXN0Njc1NTA3MTAw | 2,535 | Improve Features docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,374,207,000 | 1,624,455,643,000 | 1,624,455,643,000 | MEMBER | null | - Fix rendering and cross-references in Features docs
- Add docstrings to Features methods | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2535/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2535/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2535",
"html_url": "https://github.com/huggingface/datasets/pull/2535",
"diff_url": "https://github.com/huggingface/datasets/pull/2535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2535.patch",
"merged_at": 1624455643000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2534/comments | https://api.github.com/repos/huggingface/datasets/issues/2534/events | https://github.com/huggingface/datasets/pull/2534 | 927,201,435 | MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0 | 2,534 | Sync with transformers disabling NOTSET | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?",
"Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..."
] | 1,624,366,461,000 | 1,624,545,767,000 | 1,624,545,767,000 | MEMBER | null | Close #2528. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2534/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2534/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2534",
"html_url": "https://github.com/huggingface/datasets/pull/2534",
"diff_url": "https://github.com/huggingface/datasets/pull/2534.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2534.patch",
"merged_at": 1624545767000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2533 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2533/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2533/comments | https://api.github.com/repos/huggingface/datasets/issues/2533/events | https://github.com/huggingface/datasets/pull/2533 | 927,193,264 | MDExOlB1bGxSZXF1ZXN0Njc1Mzg2OTMw | 2,533 | Add task template for automatic speech recognition | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@SBrandeis @lhoestq i've integrated your suggestions, so this is ready for another review :)",
"Merging if it's good for you @lewtun :)"
] | 1,624,365,902,000 | 1,624,464,886,000 | 1,624,463,817,000 | MEMBER | null | This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import AutomaticSpeechRecognition
ds = load_dataset("timit_asr", split="train[:10]")
# Dataset({
# features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'],
# num_rows: 10
# })
task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text")
ds.prepare_for_task(task)
# Dataset({
# features: ['audio_file', 'transcription'],
# num_rows: 10
# })
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2533/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2533/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2533",
"html_url": "https://github.com/huggingface/datasets/pull/2533",
"diff_url": "https://github.com/huggingface/datasets/pull/2533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2533.patch",
"merged_at": 1624463817000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2532/comments | https://api.github.com/repos/huggingface/datasets/issues/2532/events | https://github.com/huggingface/datasets/issues/2532 | 927,063,196 | MDU6SXNzdWU5MjcwNjMxOTY= | 2,532 | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task | {
"login": "jerryIsHere",
"id": 50871412,
"node_id": "MDQ6VXNlcjUwODcxNDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jerryIsHere",
"html_url": "https://github.com/jerryIsHere",
"followers_url": "https://api.github.com/users/jerryIsHere/followers",
"following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}",
"gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions",
"organizations_url": "https://api.github.com/users/jerryIsHere/orgs",
"repos_url": "https://api.github.com/users/jerryIsHere/repos",
"events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}",
"received_events_url": "https://api.github.com/users/jerryIsHere/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?",
"> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface/transformers"
] | 1,624,356,498,000 | 1,624,425,445,000 | 1,624,425,445,000 | CONTRIBUTOR | null | [This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instances in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`:
![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png)
Without the try/except block, it raises `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`; an example is shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing)
It is clear that the normalizer is the step that breaks the alignment, as `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト'.
One workaround is to include `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) with the name `udposTestDatasetWorkaround`.
I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to include it in their code. But I don't understand the tokenizer code well enough to do this myself.
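For reference, here is a minimal sketch of the workaround described above; the word list is only illustrative, and the tokenizer loading mirrors the setup mentioned in the p.s. below.
```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")

def normalize_words(words):
    # Apply the tokenizer's own normalizer up front, so the offsets returned by
    # `return_offsets_mapping` refer to the already-normalized text.
    return [tokenizer._tokenizer.normalizer.normalize_str(w) for w in words]

words = ["ヿ", "は", "何"]  # 'ヿ' normalizes to the two characters 'コト'
encoding = tokenizer(
    normalize_words(words),
    is_split_into_words=True,
    return_offsets_mapping=True,
)
```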
p.s.
**I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)**
`get_dataset` is just a simple wrapper around `load_dataset`
and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2532/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2532/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2531/comments | https://api.github.com/repos/huggingface/datasets/issues/2531/events | https://github.com/huggingface/datasets/pull/2531 | 927,017,924 | MDExOlB1bGxSZXF1ZXN0Njc1MjM2MDYz | 2,531 | Fix dev version | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,353,430,000 | 1,624,355,230,000 | 1,624,355,229,000 | MEMBER | null | The dev version that ends in `.dev0` should be greater than the current version.
However, it happens that `1.8.0 > 1.8.0.dev0`, since dev releases sort before the corresponding final release.
Therefore we need to use `1.8.1.dev0` in this case.
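As a quick illustration of this ordering (using the `packaging` library here purely for demonstration, not necessarily what `setup.py` relies on):
```python
from packaging.version import parse

assert parse("1.8.0.dev0") < parse("1.8.0")  # a dev release sorts *below* the release it precedes
assert parse("1.8.1.dev0") > parse("1.8.0")  # bumping the patch number puts the dev version above the latest release
```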
I updated the dev version to use `1.8.1.dev0`, and I also added a comment in the setup.py in the release steps about this. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2531/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2531/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2531",
"html_url": "https://github.com/huggingface/datasets/pull/2531",
"diff_url": "https://github.com/huggingface/datasets/pull/2531.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2531.patch",
"merged_at": 1624355229000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2530/comments | https://api.github.com/repos/huggingface/datasets/issues/2530/events | https://github.com/huggingface/datasets/pull/2530 | 927,013,773 | MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0 | 2,530 | Fixed label parsing in the ProductReviews dataset | {
"login": "yavuzKomecoglu",
"id": 5150963,
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yavuzKomecoglu",
"html_url": "https://github.com/yavuzKomecoglu",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?",
"Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n```yaml\r\npretty_name: Turkish Product Reviews\r\n```\r\n- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file",
"> Hi ! Thanks for fixing this :)\r\n> \r\n> The CI fails for two reasons:\r\n> \r\n> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n> \r\n> \r\n> ```yaml\r\n> pretty_name: Turkish Product Reviews\r\n> ```\r\n> \r\n> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file\r\n\r\nMany thanks for the quick feedback.\r\nI made the relevant fixes but still got the error :(",
"> Thanks !\r\n> The CI was failing because of the dataset card that was missing some sections. I fixed that.\r\n> \r\n> It's all good now\r\n\r\nSuper. Thanks for the support."
] | 1,624,353,165,000 | 1,624,366,520,000 | 1,624,366,360,000 | CONTRIBUTOR | null | Fixed issue with parsing dataset labels. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2530/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2530/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2530",
"html_url": "https://github.com/huggingface/datasets/pull/2530",
"diff_url": "https://github.com/huggingface/datasets/pull/2530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2530.patch",
"merged_at": 1624366360000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2529/comments | https://api.github.com/repos/huggingface/datasets/issues/2529/events | https://github.com/huggingface/datasets/pull/2529 | 926,378,812 | MDExOlB1bGxSZXF1ZXN0Njc0NjkxNjA5 | 2,529 | Add summarization template | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> Nice thanks !\r\n> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.\r\n\r\nsure, on it! thanks for the explanations about the `self._to` method :)",
"@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :))"
] | 1,624,291,711,000 | 1,624,458,131,000 | 1,624,455,010,000 | MEMBER | null | This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" and "abstractive" summarization - both can be handled with this template.
Usage:
```python
from datasets import load_dataset
from datasets.tasks import Summarization
ds = load_dataset("xsum", split="train")
# Dataset({
# features: ['document', 'summary', 'id'],
# num_rows: 204045
# })
summarization = Summarization(text_column="document", summary_column="summary")
ds.prepare_for_task(summarization)
# Dataset({
# features: ['text', 'summary'],
# num_rows: 204045
# })
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2529/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2529/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2529",
"html_url": "https://github.com/huggingface/datasets/pull/2529",
"diff_url": "https://github.com/huggingface/datasets/pull/2529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2529.patch",
"merged_at": 1624455010000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2528/comments | https://api.github.com/repos/huggingface/datasets/issues/2528/events | https://github.com/huggingface/datasets/issues/2528 | 926,314,656 | MDU6SXNzdWU5MjYzMTQ2NTY= | 2,528 | Logging cannot be set to NOTSET similar to transformers | {
"login": "joshzwiebel",
"id": 34662010,
"node_id": "MDQ6VXNlcjM0NjYyMDEw",
"avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joshzwiebel",
"html_url": "https://github.com/joshzwiebel",
"followers_url": "https://api.github.com/users/joshzwiebel/followers",
"following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}",
"gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions",
"organizations_url": "https://api.github.com/users/joshzwiebel/orgs",
"repos_url": "https://api.github.com/users/joshzwiebel/repos",
"events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}",
"received_events_url": "https://api.github.com/users/joshzwiebel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`."
] | 1,624,287,894,000 | 1,624,545,767,000 | 1,624,545,767,000 | NONE | null | ## Describe the bug
In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets; however, in Datasets this is no longer possible. This is because transformers sets the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449)
`disable=bool(logging.get_verbosity() == logging.NOTSET)`
and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493)
`not_verbose = bool(logger.getEffectiveLevel() > WARNING)`
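A simplified illustration of why the datasets check never triggers in this case (`logging.NOTSET` is 0 and `logging.WARNING` is 30):
```python
import logging

verbosity = logging.NOTSET  # what the workaround above sets

# transformers-style check: disable tqdm when the verbosity is NOTSET
bool(verbosity == logging.NOTSET)  # True  -> progress bar disabled
# datasets-style check: only "not verbose" when the level is above WARNING
bool(verbosity > logging.WARNING)  # False (0 > 30) -> progress bar still created
```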
## Steps to reproduce the bug
```python
import datasets
import logging
datasets.logging.get_verbosity = lambda : logging.NOTSET
datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy")
```
## Expected results
The code should download and load the dataset as normal without displaying progress bars
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-input-4-aec65c0509c6> in <module>
----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy")
~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs)
713 dataset=True,
714 return_resolved_file_path=True,
--> 715 use_auth_token=use_auth_token,
716 )
717 # Set the base path for downloads as the parent of the script location
~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
350 file_path = hf_bucket_url(path, filename=name, dataset=False)
351 try:
--> 352 local_path = cached_path(file_path, download_config=download_config)
353 except FileNotFoundError:
354 raise FileNotFoundError(
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
289 use_etag=download_config.use_etag,
290 max_retries=download_config.max_retries,
--> 291 use_auth_token=download_config.use_auth_token,
292 )
293 elif os.path.exists(url_or_filename):
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
668 headers=headers,
669 cookies=cookies,
--> 670 max_retries=max_retries,
671 )
672
~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries)
493 initial=resume_size,
494 desc="Downloading",
--> 495 disable=not_verbose,
496 )
497 for chunk in response.iter_content(chunk_size=1024):
~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs)
217 total = self.total * unit_scale if self.total else self.total
218 self.container = self.status_printer(
--> 219 self.fp, total, self.desc, self.ncols)
220 self.sp = self.display
221
~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols)
95 if IProgress is None: # #187 #451 #558 #872
96 raise ImportError(
---> 97 "IProgress not found. Please update jupyter and ipywidgets."
98 " See https://ipywidgets.readthedocs.io/en/stable"
99 "/user_install.html")
ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8
- Python version: 3.7.10
- PyArrow version: 3.0.0
I am running this code on Deepnote, which, importantly for this issue, **does not** support IPywidgets.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2528/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2528/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2527/comments | https://api.github.com/repos/huggingface/datasets/issues/2527/events | https://github.com/huggingface/datasets/pull/2527 | 926,031,525 | MDExOlB1bGxSZXF1ZXN0Njc0MzkzNjQ5 | 2,527 | Replace bad `n>1M` size tag | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,268,555,000 | 1,624,288,010,000 | 1,624,288,009,000 | MEMBER | null | Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc.
This led to unexpected results when searching for datasets bigger than 1M on the hub, since only the ones with the tag `n>1M` were shown. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2527/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2527/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2527",
"html_url": "https://github.com/huggingface/datasets/pull/2527",
"diff_url": "https://github.com/huggingface/datasets/pull/2527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2527.patch",
"merged_at": 1624288009000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2526/comments | https://api.github.com/repos/huggingface/datasets/issues/2526/events | https://github.com/huggingface/datasets/issues/2526 | 925,929,228 | MDU6SXNzdWU5MjU5MjkyMjg= | 2,526 | Add COCO datasets | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | open | false | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco",
"I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon.\r\n@NielsRogge ",
"I started adding COCO, will be done tomorrow EOD\r\nmy work so far https://github.com/merveenoyan/datasets (my fork)",
"Hi Merve @merveenoyan , thank you so much for your great contribution! May I ask about the current progress of your implementation? Cuz I see the pull request is still in progess here. Or can I just run the COCO scripts in your fork repo?",
"Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon. ",
"> Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon.\r\n\r\nIt's really nice of you!! I see you've commited another version just now",
"@yixuanren we're working on it, will be available soon, thanks a lot for your patience"
] | 1,624,261,712,000 | 1,640,007,218,000 | null | NONE | null | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2526/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2526/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2525/comments | https://api.github.com/repos/huggingface/datasets/issues/2525/events | https://github.com/huggingface/datasets/pull/2525 | 925,896,358 | MDExOlB1bGxSZXF1ZXN0Njc0Mjc5MTgy | 2,525 | Use scikit-learn package rather than sklearn in setup.py | {
"login": "lesteve",
"id": 1680079,
"node_id": "MDQ6VXNlcjE2ODAwNzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1680079?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lesteve",
"html_url": "https://github.com/lesteve",
"followers_url": "https://api.github.com/users/lesteve/followers",
"following_url": "https://api.github.com/users/lesteve/following{/other_user}",
"gists_url": "https://api.github.com/users/lesteve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lesteve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lesteve/subscriptions",
"organizations_url": "https://api.github.com/users/lesteve/orgs",
"repos_url": "https://api.github.com/users/lesteve/repos",
"events_url": "https://api.github.com/users/lesteve/events{/privacy}",
"received_events_url": "https://api.github.com/users/lesteve/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,259,065,000 | 1,624,269,673,000 | 1,624,265,853,000 | CONTRIBUTOR | null | The sklearn package is an historical thing and should probably not be used by anyone, see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats.
Note: this affects only TESTS_REQUIRE so I guess only developers not end users. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2525/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2525/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2525",
"html_url": "https://github.com/huggingface/datasets/pull/2525",
"diff_url": "https://github.com/huggingface/datasets/pull/2525.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2525.patch",
"merged_at": 1624265853000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2524/comments | https://api.github.com/repos/huggingface/datasets/issues/2524/events | https://github.com/huggingface/datasets/pull/2524 | 925,610,934 | MDExOlB1bGxSZXF1ZXN0Njc0MDQzNzk1 | 2,524 | Raise FileNotFoundError in WindowsFileLock | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?",
"This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user."
] | 1,624,199,111,000 | 1,624,874,182,000 | 1,624,870,059,000 | CONTRIBUTOR | null | Closes #2443 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2524/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2524/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2524",
"html_url": "https://github.com/huggingface/datasets/pull/2524",
"diff_url": "https://github.com/huggingface/datasets/pull/2524.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2524.patch",
"merged_at": 1624870059000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2523/comments | https://api.github.com/repos/huggingface/datasets/issues/2523/events | https://github.com/huggingface/datasets/issues/2523 | 925,421,008 | MDU6SXNzdWU5MjU0MjEwMDg= | 2,523 | Fr | {
"login": "aDrIaNo34500",
"id": 71971234,
"node_id": "MDQ6VXNlcjcxOTcxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/71971234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aDrIaNo34500",
"html_url": "https://github.com/aDrIaNo34500",
"followers_url": "https://api.github.com/users/aDrIaNo34500/followers",
"following_url": "https://api.github.com/users/aDrIaNo34500/following{/other_user}",
"gists_url": "https://api.github.com/users/aDrIaNo34500/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aDrIaNo34500/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aDrIaNo34500/subscriptions",
"organizations_url": "https://api.github.com/users/aDrIaNo34500/orgs",
"repos_url": "https://api.github.com/users/aDrIaNo34500/repos",
"events_url": "https://api.github.com/users/aDrIaNo34500/events{/privacy}",
"received_events_url": "https://api.github.com/users/aDrIaNo34500/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,118,192,000 | 1,624,128,503,000 | 1,624,128,503,000 | NONE | null | __Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2523/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2523/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2522/comments | https://api.github.com/repos/huggingface/datasets/issues/2522/events | https://github.com/huggingface/datasets/issues/2522 | 925,334,379 | MDU6SXNzdWU5MjUzMzQzNzk= | 2,522 | Documentation Mistakes in Dataset: emotion | {
"login": "GDGauravDutta",
"id": 62606251,
"node_id": "MDQ6VXNlcjYyNjA2MjUx",
"avatar_url": "https://avatars.githubusercontent.com/u/62606251?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/GDGauravDutta",
"html_url": "https://github.com/GDGauravDutta",
"followers_url": "https://api.github.com/users/GDGauravDutta/followers",
"following_url": "https://api.github.com/users/GDGauravDutta/following{/other_user}",
"gists_url": "https://api.github.com/users/GDGauravDutta/gists{/gist_id}",
"starred_url": "https://api.github.com/users/GDGauravDutta/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/GDGauravDutta/subscriptions",
"organizations_url": "https://api.github.com/users/GDGauravDutta/orgs",
"repos_url": "https://api.github.com/users/GDGauravDutta/repos",
"events_url": "https://api.github.com/users/GDGauravDutta/events{/privacy}",
"received_events_url": "https://api.github.com/users/GDGauravDutta/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi,\r\n\r\nthis issue has been already reported in the dataset repo (https://github.com/dair-ai/emotion_dataset/issues/2), so this is a bug on their side."
] | 1,624,086,537,000 | 1,624,124,296,000 | null | NONE | null | As per documentation,
Dataset: emotion
Homepage: https://github.com/dair-ai/emotion_dataset
Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py
Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion
Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information please refer to the paper.
But when we view the data, there are only 6 emotions: anger, fear, joy, sadness, surprise, and trust. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2522/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2522/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2521/comments | https://api.github.com/repos/huggingface/datasets/issues/2521/events | https://github.com/huggingface/datasets/pull/2521 | 925,030,685 | MDExOlB1bGxSZXF1ZXN0NjczNTgxNzQ4 | 2,521 | Insert text classification template for Emotion dataset | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,031,779,000 | 1,624,267,351,000 | 1,624,267,351,000 | MEMBER | null | This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2521/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2521",
"html_url": "https://github.com/huggingface/datasets/pull/2521",
"diff_url": "https://github.com/huggingface/datasets/pull/2521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2521.patch",
"merged_at": 1624267351000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2520/comments | https://api.github.com/repos/huggingface/datasets/issues/2520/events | https://github.com/huggingface/datasets/issues/2520 | 925,015,004 | MDU6SXNzdWU5MjUwMTUwMDQ= | 2,520 | Datasets with tricky task templates | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | open | false | null | [] | null | [] | 1,624,030,437,000 | 1,624,031,186,000 | null | MEMBER | null | I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for.
## Text classification
* [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized.
* [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2520/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2519/comments | https://api.github.com/repos/huggingface/datasets/issues/2519/events | https://github.com/huggingface/datasets/pull/2519 | 924,903,240 | MDExOlB1bGxSZXF1ZXN0NjczNDcyMzYy | 2,519 | Improve performance of pandas arrow extractor | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https://issues.apache.org/jira/browse/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype",
"@lhoestq I think I found a workaround... 😉 ",
"For some reason the benchmarks are not run Oo",
"Anyway, merging.\r\nWe'll see on master how much speed ups we got"
] | 1,624,022,681,000 | 1,624,266,366,000 | 1,624,266,366,000 | MEMBER | null | While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2519/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2519",
"html_url": "https://github.com/huggingface/datasets/pull/2519",
"diff_url": "https://github.com/huggingface/datasets/pull/2519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2519.patch",
"merged_at": 1624266366000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2518/comments | https://api.github.com/repos/huggingface/datasets/issues/2518/events | https://github.com/huggingface/datasets/pull/2518 | 924,654,100 | MDExOlB1bGxSZXF1ZXN0NjczMjU5Nzg1 | 2,518 | Add task templates for tydiqa and xquad | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just tested TydiQA and it works fine :)"
] | 1,624,003,594,000 | 1,624,028,477,000 | 1,624,027,833,000 | MEMBER | null | This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub.
Notes:
* I could not test the tydiqa implementation since I don't have enough disk space 😢 . But I am confident the template works :)
* there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434
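For completeness, a usage sketch in the same spirit as the other template PRs above. The config/split names and the assumption that the new template is exposed through `ds.info.task_templates` are mine, not verified against this PR:
```python
from datasets import load_dataset

ds = load_dataset("xquad", "xquad.en", split="validation")
# the question-answering template added by this PR should be stored in the dataset info
template = ds.info.task_templates[0]
ds.prepare_for_task(template)
# -> columns are renamed/cast to the standard question-answering schema
```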
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2518/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2518",
"html_url": "https://github.com/huggingface/datasets/pull/2518",
"diff_url": "https://github.com/huggingface/datasets/pull/2518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2518.patch",
"merged_at": 1624027833000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2517/comments | https://api.github.com/repos/huggingface/datasets/issues/2517/events | https://github.com/huggingface/datasets/pull/2517 | 924,643,345 | MDExOlB1bGxSZXF1ZXN0NjczMjUwODk1 | 2,517 | Fix typo in MatthewsCorrelation class name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,624,002,786,000 | 1,624,005,835,000 | 1,624,005,835,000 | MEMBER | null | Close #2513. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2517/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2517",
"html_url": "https://github.com/huggingface/datasets/pull/2517",
"diff_url": "https://github.com/huggingface/datasets/pull/2517.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2517.patch",
"merged_at": 1624005835000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2516/comments | https://api.github.com/repos/huggingface/datasets/issues/2516/events | https://github.com/huggingface/datasets/issues/2516 | 924,597,470 | MDU6SXNzdWU5MjQ1OTc0NzA= | 2,516 | datasets.map pickle issue resulting in invalid mapping function | {
"login": "david-waterworth",
"id": 5028974,
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/david-waterworth",
"html_url": "https://github.com/david-waterworth",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?",
"@lhoestq because if I try to pickle my custom tokenizer (it contains a pure python pretokenization step in an otherwise rust backed tokenizer) I get\r\n\r\n> Exception: Error while attempting to pickle Tokenizer: Custom PreTokenizer cannot be serialized\r\n\r\nSo I remove the Custom PreTokenizer in `__getstate__` and then restore it in `__setstate__` (since it doesn't contain any state). This is what my `__getstate__` / `__setstate__` looks like:\r\n\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = self.__dict__.copy()\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n\r\n def __setstate__(self, d):\r\n \"\"\"\r\n Reinstates pre_tokenizer\r\n \"\"\"\r\n logger.debug(\"Reattaching pre_tokenizer\")\r\n self.__dict__ = d\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\nIf this is the case can you think of another way of avoiding my issue?",
"Actually, maybe I need to deep copy `self.__dict__`? That way `self` isn't modified. That was my intention and I thought it was working - I'll double-check after the weekend.",
"Doing a deep copy results in the warning:\r\n\r\n> 06/20/2021 16:02:15 - WARNING - datasets.fingerprint - Parameter 'function'=<function tokenize_function at 0x7f1e95f05d40> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n\r\n\r\n```\r\ndef __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n out = copy.deepcopy(self.__dict__)\r\n logger.debug(\"Detaching pre_tokenizer\")\r\n out['_tokenizer'].pre_tokenizer = tokenizers.pre_tokenizers.Sequence([]) \r\n return out\r\n```",
"Looks like there is still an object that is not pickable in your `tokenize_function` function.\r\n\r\nYou can test if an object can be pickled and hashed by using \r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nHasher.hash(my_object)\r\n```\r\n\r\nUnder the hood it pickles the object to compute its hash, so it calls `__getstate__` when applicable.",
"I figured it out, the problem is deep copy itself uses pickle (unless you implement `__deepcopy__`). So when I changed `__getstate__` it started throwing an error.\r\n\r\nI'm sure there's a better way of doing this, but in order to return the `__dict__` without the non-pikelable pre-tokeniser and without modifying self I removed the pre-tokenizers, did a deep copy and then re-generated it.\r\n\r\nIt does work - although I noticed Hasher doesn't call `__hash__` if the object being hashed implements it which I feel it should? If it did I could return a hash of the tokenizers.json file instead.\r\n\r\n```\r\n def __getstate__(self):\r\n \"\"\"\r\n Removes pre_tokenizer since it cannot be pickled\r\n \"\"\"\r\n logger.debug(\"Copy state dict\")\r\n self.backend_tokenizer.pre_tokenizer = tokenizers.pre_tokenizers.Sequence([])\r\n out = copy.deepcopy(self.__dict__) #self.__dict__.copy()\r\n self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()\r\n\r\n return out\r\n```\r\n",
"I'm glad you figured something out :)\r\n\r\nRegarding hashing: we're not using hashing for the same purpose as the python `__hash__` purpose (which is in general for dictionary lookups). For example it is allowed for python hashing to not return the same hash across sessions, while our hashing must return the same hashes across sessions for the caching to work properly."
] | 1,623,998,846,000 | 1,624,456,069,000 | null | NONE | null | I trained my own tokenizer, and I needed to use a custom Python class. Because of this I have to detach the custom step before saving and reattach it after restoring. I did this using the standard pickle `__getstate__` / `__setstate__` mechanism. I think it's correct, but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts.
The following reproduces the issue - most likely I'm missing something
A simulated tokeniser which can be pickled
```
class CustomTokenizer:
def __init__(self):
self.state = "init"
def __getstate__(self):
print("__getstate__ called")
out = self.__dict__.copy()
self.state = "pickled"
return out
def __setstate__(self, d):
print("__setstate__ called")
self.__dict__ = d
self.state = "restored"
tokenizer = CustomTokenizer()
```
Test that it actually works - prints "__getstate__ called" and "__setstate__ called"
```
import pickle
serialized = pickle.dumps(tokenizer)
restored = pickle.loads(serialized)
assert restored.state == "restored"
```
Simulate a function that tokenises examples, when dataset.map is called, this function
```
def tokenize_function(examples):
assert tokenizer.state == "restored" # this shouldn't fail but it does
output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer
return output
```
Use map to simulate tokenization
```
import glob
from datasets import load_dataset
assert tokenizer.state == "restored"
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
)
```
What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-22-a2aef4f74aaa> in <module>
8 tokenized_datasets = datasets.map(
9 tokenize_function,
---> 10 batched=True,
11 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1633 fn_kwargs=fn_kwargs,
1634 new_fingerprint=new_fingerprint,
-> 1635 desc=desc,
1636 )
1637 else:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
184 }
185 # apply actual function
--> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
188 # re-apply format to the output
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1961 indices,
1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 1963 offset=offset,
1964 )
1965 except NumExamplesMismatch:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1854 processed_inputs = (
-> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1856 )
1857 if update_data is None:
<ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples)
1 def tokenize_function(examples):
----> 2 assert tokenizer.state == "restored"
3 tokenizer(examples)
4 return examples
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2516/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2515/comments | https://api.github.com/repos/huggingface/datasets/issues/2515/events | https://github.com/huggingface/datasets/pull/2515 | 924,435,447 | MDExOlB1bGxSZXF1ZXN0NjczMDc3NTIx | 2,515 | CRD3 dataset card | {
"login": "wilsonyhlee",
"id": 1937386,
"node_id": "MDQ6VXNlcjE5MzczODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1937386?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wilsonyhlee",
"html_url": "https://github.com/wilsonyhlee",
"followers_url": "https://api.github.com/users/wilsonyhlee/followers",
"following_url": "https://api.github.com/users/wilsonyhlee/following{/other_user}",
"gists_url": "https://api.github.com/users/wilsonyhlee/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wilsonyhlee/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wilsonyhlee/subscriptions",
"organizations_url": "https://api.github.com/users/wilsonyhlee/orgs",
"repos_url": "https://api.github.com/users/wilsonyhlee/repos",
"events_url": "https://api.github.com/users/wilsonyhlee/events{/privacy}",
"received_events_url": "https://api.github.com/users/wilsonyhlee/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,975,847,000 | 1,624,270,724,000 | 1,624,270,724,000 | CONTRIBUTOR | null | This PR adds additional information to the CRD3 dataset card. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2515/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2515",
"html_url": "https://github.com/huggingface/datasets/pull/2515",
"diff_url": "https://github.com/huggingface/datasets/pull/2515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2515.patch",
"merged_at": 1624270724000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2514/comments | https://api.github.com/repos/huggingface/datasets/issues/2514/events | https://github.com/huggingface/datasets/issues/2514 | 924,417,172 | MDU6SXNzdWU5MjQ0MTcxNzI= | 2,514 | Can datasets remove duplicated rows? | {
"login": "liuxinglan",
"id": 16516583,
"node_id": "MDQ6VXNlcjE2NTE2NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/16516583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liuxinglan",
"html_url": "https://github.com/liuxinglan",
"followers_url": "https://api.github.com/users/liuxinglan/followers",
"following_url": "https://api.github.com/users/liuxinglan/following{/other_user}",
"gists_url": "https://api.github.com/users/liuxinglan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/liuxinglan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuxinglan/subscriptions",
"organizations_url": "https://api.github.com/users/liuxinglan/orgs",
"repos_url": "https://api.github.com/users/liuxinglan/repos",
"events_url": "https://api.github.com/users/liuxinglan/events{/privacy}",
"received_events_url": "https://api.github.com/users/liuxinglan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! For now this is probably the best option.\r\nWe might add a feature like this in the feature as well.\r\n\r\nDo you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\nOtherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases",
"Yes, I'd like to work on this feature once I'm done with #2500, but first I have to do some research, and see if the implementation wouldn't be too complex.\r\n\r\nIn the meantime, maybe [this lib](https://github.com/TomScheffers/pyarrow_ops) can help. However, note that this lib operates directly on pyarrow tables and relies only on `hash` to find duplicates (e.g. `-1` and `-2` have the same hash in Python 3, so this lib will treat them as duplicates), which doesn't make much sense.",
"> Hi ! For now this is probably the best option.\r\n> We might add a feature like this in the feature as well.\r\n> \r\n> Do you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\n> Otherwise we can have do the deduplication in memory like pandas but I feel like this is going to be limiting for some cases\r\n\r\nGreat if this is can be done. Thanks!!\r\n\r\nNot sure if you are asking me. In any case I don't know of any unfortunately :( in practice if data is really large we normally do it with spark (only for info. I understand this is not useful in developing this library..)",
"Hello,\r\n\r\nI'm also interested in this feature.\r\nHas there been progress on this issue?\r\n\r\nCould we use a similar trick as above, but with a better hashing algorithm like SHA?\r\n\r\nWe could also use a [bloom filter](https://en.wikipedia.org/wiki/Bloom_filter), should we care a lot about collision in this case?",
"For reference, we can get a solution fairly easily if we assume that we can hold in memory all unique values. \r\n\r\n```python\r\nfrom datasets import Dataset\r\nfrom itertools import cycle\r\nfrom functools import partial\r\n\r\nmemory = set()\r\ndef is_unique(elem:Any , column: str, memory: set) -> bool:\r\n if elem[column] in memory:\r\n return False\r\n else:\r\n memory.add(elem[column])\r\n return True\r\n\r\n# Example dataset\r\nds = Dataset.from_dict({\"col1\" : [sent for i, sent in zip(range(10), cycle([\"apple\", \"orange\", \"pear\"]))],\r\n \"col2\": [i % 5 for i in range(10)]})\r\n\r\n# Drop duplicates in `ds` on \"col1\"\r\nds2 = ds.filter(partial(is_unique, column=\"col1\", memory=memory))\r\n```\r\n\r\nOf course, we can improve the API so that we can introduce `Dataset.drop_duplicates`.\r\nFor the parallel version, we can use a shared memory set.",
"An approach that works assuming you can hold the all the unique document hashes in memory:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndef get_hash(example):\r\n \"\"\"Get hash of content field.\"\"\"\r\n return {\"hash\": hash(example[\"content\"])} # can use any hashing function here\r\n \r\ndef check_uniques(example, uniques):\r\n \"\"\"Check if current hash is still in set of unique hashes and remove if true.\"\"\"\r\n if example[\"hash\"] in uniques:\r\n uniques.remove(example[\"hash\"])\r\n return True\r\n else:\r\n return False\r\n\r\nds = load_dataset(\"some_dataset\")\r\nds = ds.map(get_hash)\r\nuniques = set(ds.unique(\"hash\"))\r\nds_filter = ds.filter(check_uniques, fn_kwargs={\"uniques\": uniques})\r\n```\r\nIf the `uniques` could be stored in arrow then no additional memory would used at all but I don't know if this is possible.\r\n"
] | 1,623,972,938,000 | 1,638,434,361,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
I find myself relying more and more on datasets just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and I am always converting datasets to pandas to do that.
**Describe the solution you'd like**
Have a "remove duplicated rows" functionality.
**Describe alternatives you've considered**
Convert the dataset to pandas, remove duplicates, and convert back...
**Additional context**
no | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2514/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2513/comments | https://api.github.com/repos/huggingface/datasets/issues/2513/events | https://github.com/huggingface/datasets/issues/2513 | 924,174,413 | MDU6SXNzdWU5MjQxNzQ0MTM= | 2,513 | Corelation should be Correlation | {
"login": "colbym-MM",
"id": 71514164,
"node_id": "MDQ6VXNlcjcxNTE0MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/71514164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/colbym-MM",
"html_url": "https://github.com/colbym-MM",
"followers_url": "https://api.github.com/users/colbym-MM/followers",
"following_url": "https://api.github.com/users/colbym-MM/following{/other_user}",
"gists_url": "https://api.github.com/users/colbym-MM/gists{/gist_id}",
"starred_url": "https://api.github.com/users/colbym-MM/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/colbym-MM/subscriptions",
"organizations_url": "https://api.github.com/users/colbym-MM/orgs",
"repos_url": "https://api.github.com/users/colbym-MM/repos",
"events_url": "https://api.github.com/users/colbym-MM/events{/privacy}",
"received_events_url": "https://api.github.com/users/colbym-MM/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @colbym-MM, thanks for reporting. We are fixing it."
] | 1,623,950,928,000 | 1,624,005,835,000 | 1,624,005,835,000 | NONE | null | https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2513/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2512/comments | https://api.github.com/repos/huggingface/datasets/issues/2512/events | https://github.com/huggingface/datasets/issues/2512 | 924,069,353 | MDU6SXNzdWU5MjQwNjkzNTM= | 2,512 | seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict' | {
"login": "avidale",
"id": 8642136,
"node_id": "MDQ6VXNlcjg2NDIxMzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8642136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avidale",
"html_url": "https://github.com/avidale",
"followers_url": "https://api.github.com/users/avidale/followers",
"following_url": "https://api.github.com/users/avidale/following{/other_user}",
"gists_url": "https://api.github.com/users/avidale/gists{/gist_id}",
"starred_url": "https://api.github.com/users/avidale/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avidale/subscriptions",
"organizations_url": "https://api.github.com/users/avidale/orgs",
"repos_url": "https://api.github.com/users/avidale/repos",
"events_url": "https://api.github.com/users/avidale/events{/privacy}",
"received_events_url": "https://api.github.com/users/avidale/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Sorry, I was using an old version of sequeval"
] | 1,623,944,162,000 | 1,623,944,767,000 | 1,623,944,767,000 | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset, load_metric
seqeval = load_metric("seqeval")
seqeval.compute(predictions=[['A']], references=[['A']])
```
## Expected results
The function computes a dict with metrics
## Actual results
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-39-69a57f5cf06f> in <module>
1 from datasets import load_dataset, load_metric
2 seqeval = load_metric("seqeval")
----> 3 seqeval.compute(predictions=[['A']], references=[['A']])
~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs)
396 references = self.data["references"]
397 with temp_seed(self.seed):
--> 398 output = self._compute(predictions=predictions, references=references, **kwargs)
399
400 if self.buf_writer is not None:
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix)
95
96 def _compute(self, predictions, references, suffix=False):
---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True)
98 report.pop("macro avg")
99 report.pop("weighted avg")
TypeError: classification_report() got an unexpected keyword argument 'output_dict'
```
## Environment info
sklearn=0.24
datasets=1.1.3
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2512/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2511 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2511/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2511/comments | https://api.github.com/repos/huggingface/datasets/issues/2511/events | https://github.com/huggingface/datasets/issues/2511 | 923,762,133 | MDU6SXNzdWU5MjM3NjIxMzM= | 2,511 | Add C4 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Update on this: I'm computing the checksums of the data files. It will be available soon",
"Added in #2575 :)"
] | 1,623,925,864,000 | 1,625,488,618,000 | 1,625,488,617,000 | MEMBER | null | ## Adding a Dataset
- **Name:** *C4*
- **Description:** *https://github.com/allenai/allennlp/discussions/5056*
- **Paper:** *https://arxiv.org/abs/1910.10683*
- **Data:** *https://huggingface.co/datasets/allenai/c4*
- **Motivation:** *Used a lot for pretraining*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Should fix https://github.com/huggingface/datasets/issues/1710 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2511/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2511/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2510/comments | https://api.github.com/repos/huggingface/datasets/issues/2510/events | https://github.com/huggingface/datasets/pull/2510 | 923,735,485 | MDExOlB1bGxSZXF1ZXN0NjcyNDY3MzY3 | 2,510 | Add align_labels_with_mapping to DatasetDict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,924,215,000 | 1,623,926,725,000 | 1,623,926,724,000 | MEMBER | null | https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method.
In this PR I also added `DatasetDict.align_labels_with_mapping` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2510/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2510/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2510",
"html_url": "https://github.com/huggingface/datasets/pull/2510",
"diff_url": "https://github.com/huggingface/datasets/pull/2510.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2510.patch",
"merged_at": 1623926724000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2509/comments | https://api.github.com/repos/huggingface/datasets/issues/2509/events | https://github.com/huggingface/datasets/pull/2509 | 922,846,035 | MDExOlB1bGxSZXF1ZXN0NjcxNjcyMzU5 | 2,509 | Fix fingerprint when moving cache dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Windows, why are you doing this to me ?",
"Thanks @lhoestq, I'm starting reviewing this PR.",
"Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths",
"Took your suggestions into account @albertvillanova :)"
] | 1,623,861,909,000 | 1,624,287,904,000 | 1,624,287,903,000 | MEMBER | null | The fingerprint of a dataset changes if the cache directory is moved.
I fixed that by setting the fingerprint to be the hash of:
- the relative cache dir (dataset_name/version/config_id)
- the requested split
Close #2496
I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests to run on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255.
We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could en up being very long. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2509/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2509",
"html_url": "https://github.com/huggingface/datasets/pull/2509",
"diff_url": "https://github.com/huggingface/datasets/pull/2509.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2509.patch",
"merged_at": 1624287903000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2508/comments | https://api.github.com/repos/huggingface/datasets/issues/2508/events | https://github.com/huggingface/datasets/issues/2508 | 921,863,173 | MDU6SXNzdWU5MjE4NjMxNzM= | 2,508 | Load Image Classification Dataset from Local | {
"login": "Jacobsolawetz",
"id": 8428198,
"node_id": "MDQ6VXNlcjg0MjgxOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8428198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Jacobsolawetz",
"html_url": "https://github.com/Jacobsolawetz",
"followers_url": "https://api.github.com/users/Jacobsolawetz/followers",
"following_url": "https://api.github.com/users/Jacobsolawetz/following{/other_user}",
"gists_url": "https://api.github.com/users/Jacobsolawetz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Jacobsolawetz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Jacobsolawetz/subscriptions",
"organizations_url": "https://api.github.com/users/Jacobsolawetz/orgs",
"repos_url": "https://api.github.com/users/Jacobsolawetz/repos",
"events_url": "https://api.github.com/users/Jacobsolawetz/events{/privacy}",
"received_events_url": "https://api.github.com/users/Jacobsolawetz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! Is this folder structure a standard, a bit like imagenet ?\r\nIn this case maybe we can consider having a dataset loader for cifar-like, imagenet-like, squad-like, conll-like etc. datasets ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nmy_custom_cifar = load_dataset(\"cifar_like\", data_dir=\"path/to/data/dir\")\r\n```\r\n\r\nLet me know what you think",
"Yep that would be sweet - closing for now as we found a workaround. ",
"@lhoestq I think we'll want a generic `image-folder` dataset (same as 'imagenet-like'). This is like `torchvision.datasets.ImageFolder`, and is something vision folks are used to seeing.",
"Opening this back up, since I'm planning on tackling this. Already posted a quick version of it on my account on the hub.\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/image-folder', data_files='PetImages/')\r\n```"
] | 1,623,797,013,000 | 1,626,113,034,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader.
**Describe the solution you'd like**
Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10".
**Describe alternatives you've considered**
Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path)
Write custom data loader logic
**Additional context**
We're training ViT on custom dataset
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2508/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2507/comments | https://api.github.com/repos/huggingface/datasets/issues/2507/events | https://github.com/huggingface/datasets/pull/2507 | 921,441,962 | MDExOlB1bGxSZXF1ZXN0NjcwNDQ0MDgz | 2,507 | Rearrange JSON field names to match passed features schema field names | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,766,202,000 | 1,623,840,469,000 | 1,623,840,469,000 | MEMBER | null | This PR depends on PR #2453 (which must be merged first).
Close #2366. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2507/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2507",
"html_url": "https://github.com/huggingface/datasets/pull/2507",
"diff_url": "https://github.com/huggingface/datasets/pull/2507.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2507.patch",
"merged_at": 1623840469000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2506/comments | https://api.github.com/repos/huggingface/datasets/issues/2506/events | https://github.com/huggingface/datasets/pull/2506 | 921,435,598 | MDExOlB1bGxSZXF1ZXN0NjcwNDM4NTgx | 2,506 | Add course banner | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,765,834,000 | 1,623,774,336,000 | 1,623,774,335,000 | MEMBER | null | This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2506/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2506/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2506",
"html_url": "https://github.com/huggingface/datasets/pull/2506",
"diff_url": "https://github.com/huggingface/datasets/pull/2506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2506.patch",
"merged_at": 1623774335000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2505/comments | https://api.github.com/repos/huggingface/datasets/issues/2505/events | https://github.com/huggingface/datasets/pull/2505 | 921,234,797 | MDExOlB1bGxSZXF1ZXN0NjcwMjY2NjQy | 2,505 | Make numpy arrow extractor faster | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec",
"Can we convert this draft to PR @lhoestq ?",
"Ready for review ! cc @vblagoje",
"@lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. ",
"Thanks for investigating @vblagoje \r\n\r\n@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it"
] | 1,623,751,892,000 | 1,624,874,019,000 | 1,624,874,018,000 | MEMBER | null | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2505/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2505/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2505",
"html_url": "https://github.com/huggingface/datasets/pull/2505",
"diff_url": "https://github.com/huggingface/datasets/pull/2505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2505.patch",
"merged_at": 1624874018000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2503/comments | https://api.github.com/repos/huggingface/datasets/issues/2503/events | https://github.com/huggingface/datasets/issues/2503 | 920,636,186 | MDU6SXNzdWU5MjA2MzYxODY= | 2,503 | SubjQA wrong boolean values in entries | {
"login": "arnaudstiegler",
"id": 26485052,
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnaudstiegler",
"html_url": "https://github.com/arnaudstiegler",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @arnaudstiegler, thanks for reporting. I'm investigating it.",
"@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.",
"I have:\r\n- opened an issue in their repo: https://github.com/megagonlabs/SubjQA/issues/3\r\n- written an email to all the paper authors",
"Please [see my response](https://github.com/megagonlabs/SubjQA/issues/3#issuecomment-905160010). There will be a fix in a couple of days."
] | 1,623,692,566,000 | 1,629,863,526,000 | null | NONE | null | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered as subjective)
However, `is_ques_subjective` seems to have wrong values in the entire dataset.
For instance, in the example in the dataset card, we have:
- "question_subj_level": 2
- "is_ques_subjective": false
However, according to the description, the question should be subjective since the `question_subj_level` is below 4
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2503/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2502/comments | https://api.github.com/repos/huggingface/datasets/issues/2502/events | https://github.com/huggingface/datasets/pull/2502 | 920,623,572 | MDExOlB1bGxSZXF1ZXN0NjY5NzQ1MDA5 | 2,502 | JAX integration | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,691,463,000 | 1,624,292,150,000 | 1,624,292,149,000 | MEMBER | null | Hi !
I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow).
It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects.
```python
from datasets import Dataset
d = Dataset.from_dict({"foo": [[0., 1., 2.]]})
d = d.with_format("jax")
d[0]
# {'foo': DeviceArray([0., 1., 2.], dtype=float32)}
```
A few details:
- The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default
- AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortunately (see [here](https://github.com/google/jax/issues/4486))
- the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO
I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset.
Since the `convert_to_python_objects` method became slow because that is when pytorch, tf (and now jax) get imported, I fixed it by checking `sys.modules` to avoid unnecessary imports of pytorch, tf or jax.
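The `sys.modules` check boils down to something like the sketch below; `_torch_already_imported` and `convert_value` are hypothetical helpers for illustration, not the actual implementation:
```python
import sys

def _torch_already_imported() -> bool:
    # Only treat torch as available if the caller already imported it,
    # so the conversion helper never pays the import cost itself.
    return "torch" in sys.modules

def convert_value(value):
    if _torch_already_imported():
        import torch  # cheap here: the module is already in sys.modules
        if isinstance(value, torch.Tensor):
            return value.tolist()
    return value
```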
Close #2495 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2502/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2502",
"html_url": "https://github.com/huggingface/datasets/pull/2502",
"diff_url": "https://github.com/huggingface/datasets/pull/2502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2502.patch",
"merged_at": 1624292148000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2501/comments | https://api.github.com/repos/huggingface/datasets/issues/2501/events | https://github.com/huggingface/datasets/pull/2501 | 920,579,634 | MDExOlB1bGxSZXF1ZXN0NjY5NzA3Nzc0 | 2,501 | Add Zenodo metadata file with license | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,688,092,000 | 1,623,689,382,000 | 1,623,689,382,000 | MEMBER | null | This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`.
Close #2472. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2501/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2501",
"html_url": "https://github.com/huggingface/datasets/pull/2501",
"diff_url": "https://github.com/huggingface/datasets/pull/2501.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2501.patch",
"merged_at": 1623689382000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2500/comments | https://api.github.com/repos/huggingface/datasets/issues/2500/events | https://github.com/huggingface/datasets/pull/2500 | 920,471,411 | MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1 | 2,500 | Add load_dataset_builder | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors ❤️ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... 😉 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. 🤗 ",
"@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.",
"Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.\r\n",
"Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets\r\n/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```",
"Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.",
"Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)\"\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at <this path> or <this url>\" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error."
] | 1,623,680,865,000 | 1,625,789,296,000 | 1,625,481,958,000 | CONTRIBUTOR | null | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
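A minimal usage sketch (the dataset name is only for illustration):
```python
from datasets import load_dataset_builder

# Inspect a dataset's metadata without downloading or generating the data itself.
builder = load_dataset_builder("squad")
print(builder.info.description)
print(builder.info.features)
```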
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2500/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2500/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2500",
"html_url": "https://github.com/huggingface/datasets/pull/2500",
"diff_url": "https://github.com/huggingface/datasets/pull/2500.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2500.patch",
"merged_at": 1625481957000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2499/comments | https://api.github.com/repos/huggingface/datasets/issues/2499/events | https://github.com/huggingface/datasets/issues/2499 | 920,413,021 | MDU6SXNzdWU5MjA0MTMwMjE= | 2,499 | Python Programming Puzzles | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"👀 @TalSchuster",
"Thanks @VictorSanh!\r\nThere's also a [notebook](https://aka.ms/python_puzzles) and [demo](https://aka.ms/python_puzzles_study) available now to try out some of the puzzles"
] | 1,623,677,238,000 | 1,623,780,854,000 | null | MEMBER | null | ## Adding a Dataset
- **Name:** Python Programming Puzzles
- **Description:** A programming challenge format called programming puzzles, intended as an objective and comprehensive evaluation of program synthesis (an illustrative puzzle is sketched after this list)
- **Paper:** https://arxiv.org/pdf/2106.05784.pdf
- **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md))
- **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs.
Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
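For illustration, a puzzle in this format is a verifier function for which a solver has to produce a satisfying input. The toy example below is written in the dataset's style but is not taken from it:
```python
def sat(s: str) -> bool:
    # Toy puzzle: find a 5-character string containing exactly three 'a's.
    return len(s) == 5 and s.count("a") == 3

assert sat("aabab")  # one valid answer a solver could produce
```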
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2499/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2499/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2498/comments | https://api.github.com/repos/huggingface/datasets/issues/2498/events | https://github.com/huggingface/datasets/issues/2498 | 920,411,285 | MDU6SXNzdWU5MjA0MTEyODU= | 2,498 | Improve torch formatting performance | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https://api.github.com/users/vblagoje/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vblagoje/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vblagoje/subscriptions",
"organizations_url": "https://api.github.com/users/vblagoje/orgs",
"repos_url": "https://api.github.com/users/vblagoje/repos",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"received_events_url": "https://api.github.com/users/vblagoje/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"That’s interesting thanks, let’s see what we can do. Can you detail your last sentence? I’m not sure I understand it well.",
"Hi ! I just re-ran a quick benchmark and using `to_numpy()` seems to be faster now:\r\n\r\n```python\r\nimport pyarrow as pa # I used pyarrow 3.0.0\r\nimport numpy as np\r\n\r\nn, max_length = 1_000, 512\r\nlow, high, size = 0, 2 << 16, (n, max_length)\r\n\r\ntable = pa.Table.from_pydict({\r\n \"input_ids\": np.random.default_rng(42).integers(low=low, high=high, size=size).tolist()\r\n})\r\n\r\n\r\n%%timeit\r\n_ = table.to_pandas()[\"input_ids\"].to_numpy()\r\n# 1.44 ms ± 80.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_pandas().to_numpy()\r\n# 461 µs ± 14.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n\r\n%%timeit\r\n_ = table[\"input_ids\"].to_numpy()\r\n# 317 µs ± 5.06 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)\r\n```\r\n\r\nCurrently the conversion from arrow to numpy is done in the NumpyArrowExtractor here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/d6d0ede9486ffad7944642ca9a326e058b676788/src/datasets/formatting/formatting.py#L143-L166\r\n\r\nLet's update the NumpyArrowExtractor to call `to_numpy` directly and see how our github benchmarks evolve ?__",
"Sounds like a plan @lhoestq If you create a PR I'll pick it up and try it out right away! ",
"@lhoestq I can also prepare the PR, just lmk. ",
"I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?",
"I created https://github.com/huggingface/datasets/pull/2505 if you want to play with it @vblagoje ",
"> I’m not exactly sure how to read the graph but it seems that to_categorical take a lot of time here. Could you share more informations on the features/stats of your datasets so we could maybe design a synthetic datasets that looks more similar for debugging testing?\r\n\r\n@thomwolf starting from the top, each rectangle represents the cumulative amount of it takes to execute the method call. Therefore, format_batch in torch_formatter.py takes ~20 sec, and the largest portion of that call is taken by to_pandas call and the smaller portion (grey rectangle) by the other method invocation(s) in format_batch (series_to_numpy etc). \r\n\r\nFeatures of the dataset are BERT pre-training model input columns i.e:\r\n```\r\nf = Features({ \r\n \"input_ids\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"attention_mask\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"token_type_ids\": Sequence(feature=Value(dtype=\"int8\")), \r\n \"labels\": Sequence(feature=Value(dtype=\"int32\")), \r\n \"next_sentence_label\": Value(dtype=\"int8\")\r\n})\r\n```\r\n\r\nI'll work with @lhoestq till we get to the bottom of this one. \r\n ",
"@lhoestq the proposed branch is faster, but overall training speedup is a few percentage points. I couldn't figure out how to include the GitHub branch into setup.py, so I couldn't start NVidia optimized Docker-based pre-training run. But on bare metal, there is a slight improvement. I'll do some more performance traces. ",
"Hi @vblagoje, to install Datasets from @lhoestq PR reference #2505, you can use:\r\n```shell\r\npip install git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head#egg=datasets\r\n```",
"Hey @albertvillanova yes thank you, I am aware, I can easily pull it from a terminal command line but then I can't automate docker image builds as dependencies are picked up from setup.py and for some reason setup.py doesn't accept this string format.",
"@vblagoje in that case, you can add this to your `setup.py`:\r\n```python\r\n install_requires=[\r\n \"datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head\",\r\n```",
"@lhoestq @thomwolf @albertvillanova The new approach is definitely faster, dataloader now takes less than 3% cumulative time (pink rectangle two rectangles to the right of tensor.py backward invocation)\r\n\r\n![Screen Shot 2021-06-16 at 3 05 06 PM](https://user-images.githubusercontent.com/458335/122224432-19de4700-ce82-11eb-982f-d45d4bcc1e41.png)\r\n\r\nWhen we drill down into dataloader next invocation we get:\r\n\r\n![Screen Shot 2021-06-16 at 3 09 56 PM](https://user-images.githubusercontent.com/458335/122224976-a1c45100-ce82-11eb-8d40-59194740d616.png)\r\n\r\nAnd finally format_batch:\r\n\r\n![Screen Shot 2021-06-16 at 3 11 07 PM](https://user-images.githubusercontent.com/458335/122225132-cae4e180-ce82-11eb-8a16-967ab7c1c2aa.png)\r\n\r\n\r\nNot sure this could be further improved but this is definitely a decent step forward.\r\n\r\n",
"> ```python\r\n> datasets @ git+ssh://git@github.com/huggingface/datasets.git@refs/pull/2505/head\r\n> ```\r\n\r\n@albertvillanova how would I replace datasets dependency in https://github.com/huggingface/transformers/blob/master/setup.py as the above approach is not working. ",
"@vblagoje I tested my proposed approach before posting it here and it worked for me. \r\n\r\nIs it not working in your case because of the SSH protocol? In that case you could try the same approach but using HTTPS:\r\n```\r\n\"datasets @ git+https://github.com/huggingface/datasets.git@refs/pull/2505/head\",\r\n``` ",
"Also note the blanks before and after the `@`.",
"@albertvillanova of course it works. Apologies. I needed to change datasets in all deps references , like [here](https://github.com/huggingface/transformers/blob/master/setup.py#L235) for example. "
] | 1,623,677,124,000 | 1,624,269,294,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors.
A bit more background. I am working on LM pre-training using the HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use the HF trainer with a torch.distributed training approach on a single machine with 8 GPUs.
The current performance is about 30% slower than NVidia optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded.
**Describe the solution you'd like**
Using profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call.
![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png)
As you can observe, most of the data loader next call time is spent in the HF datasets torch_formatter.py format_batch call.
Digging a bit deeper into format_batch we can see the following profiler data:
![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png)
Once again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion.
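The cost of the pandas detour can be measured in isolation with a micro-benchmark along these lines. This is a rough sketch on a flat int32 column, not the actual training data, and the numbers will vary by machine and schema:
```python
import timeit
import numpy as np
import pyarrow as pa

table = pa.Table.from_pydict({"input_ids": np.arange(1_000_000, dtype=np.int32)})

# arrow -> pandas -> numpy (the current intermediary step) vs arrow -> numpy directly.
via_pandas = timeit.timeit(lambda: table.to_pandas()["input_ids"].to_numpy(), number=20)
direct = timeit.timeit(lambda: table["input_ids"].to_numpy(), number=20)
print(f"via pandas: {via_pandas:.3f}s | direct to numpy: {direct:.3f}s")
```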
**Describe alternatives you've considered**
I am not familiar with pyarrow and have not yet considered the alternatives to the current approach.
Most of the online advice around data loader performance improvements revolves around increasing the number of workers and using pinned memory for copying tensors from the host device to the GPUs, but we've already tried these avenues without much performance improvement. The Weights & Biases dashboard for the pre-training task reports CPU utilization of ~10%, the GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2498/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2497/comments | https://api.github.com/repos/huggingface/datasets/issues/2497/events | https://github.com/huggingface/datasets/pull/2497 | 920,250,382 | MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3 | 2,497 | Use default cast for sliced list arrays if pyarrow >= 4 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?",
"@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 😉 "
] | 1,623,664,967,000 | 1,623,780,378,000 | 1,623,680,677,000 | MEMBER | null | From pyarrow version 4, it is supported to cast sliced lists.
This PR uses default pyarrow cast in Datasets to cast sliced list arrays if pyarrow version is >= 4.
In relation with PR #2461 and #2490.
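As a quick illustration of the behavior this PR relies on (not code from the PR itself): with pyarrow >= 4 the cast below succeeds, while older versions raised a not-implemented error for list arrays with a non-zero offset.
```python
import pyarrow as pa

arr = pa.array([[1, 2], [3, 4], [5, 6]])
sliced = arr.slice(1, 2)  # list array with a non-zero offset
print(sliced.cast(pa.list_(pa.int32())))  # works on pyarrow >= 4
```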
cc: @lhoestq, @abhi1thakur, @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2497/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2497",
"html_url": "https://github.com/huggingface/datasets/pull/2497",
"diff_url": "https://github.com/huggingface/datasets/pull/2497.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2497.patch",
"merged_at": 1623680677000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2496/comments | https://api.github.com/repos/huggingface/datasets/issues/2496/events | https://github.com/huggingface/datasets/issues/2496 | 920,216,314 | MDU6SXNzdWU5MjAyMTYzMTQ= | 2,496 | Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,662,426,000 | 1,624,287,903,000 | 1,624,287,903,000 | MEMBER | null | `Dataset.map` uses the dataset fingerprint (a hash) for caching.
However the fingerprint seems to change when someone moves the cache directory of the dataset.
This is because it uses the default fingerprint generation:
1. the dataset path is used to get the fingerprint
2. the modification times of the arrow file are also used to get the fingerprint (see the sketch below)
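A minimal sketch of the issue (hypothetical helpers, not the actual fingerprinting code): hashing the absolute path and mtime produces a new value as soon as the cache directory is moved, whereas hashing only the path relative to the cache root would not.
```python
import hashlib
import os

def default_like_fingerprint(arrow_path: str) -> str:
    # Illustration of the current behavior: absolute path + mtime change after a move.
    payload = f"{arrow_path}:{os.path.getmtime(arrow_path)}"
    return hashlib.md5(payload.encode()).hexdigest()

def relative_fingerprint(cache_dir: str, arrow_path: str) -> str:
    # Direction of the proposed fix: hash only the path relative to the cache directory.
    payload = os.path.relpath(arrow_path, cache_dir)
    return hashlib.md5(payload.encode()).hexdigest()
```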
To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2496/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2496/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2495/comments | https://api.github.com/repos/huggingface/datasets/issues/2495/events | https://github.com/huggingface/datasets/issues/2495 | 920,170,030 | MDU6SXNzdWU5MjAxNzAwMzA= | 2,495 | JAX formatting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,659,527,000 | 1,624,292,149,000 | 1,624,292,149,000 | MEMBER | null | We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2495/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2495/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2494/comments | https://api.github.com/repos/huggingface/datasets/issues/2494/events | https://github.com/huggingface/datasets/issues/2494 | 920,149,183 | MDU6SXNzdWU5MjAxNDkxODM= | 2,494 | Improve docs on Enhancing performance | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [] | 1,623,658,308,000 | 1,623,658,308,000 | null | MEMBER | null | In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases:
- How to make datasets run as fast as possible
- How to make datasets use as little RAM as possible
- How to make datasets use as little hard drive space as possible (an illustrative trade-off is sketched below)
cc: @thomwolf
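One concrete trade-off such a section could cover (shown only as an assumed example, not a decided outline):
```python
from datasets import load_dataset

# Faster random access at the cost of RAM: load the table fully into memory.
ds_fast = load_dataset("imdb", split="train", keep_in_memory=True)

# Lower RAM usage: rely on the default memory-mapped Arrow file on disk.
ds_lean = load_dataset("imdb", split="train")
```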
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2494/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2493/comments | https://api.github.com/repos/huggingface/datasets/issues/2493/events | https://github.com/huggingface/datasets/pull/2493 | 919,833,281 | MDExOlB1bGxSZXF1ZXN0NjY5MDc4OTcw | 2,493 | add tensorflow-macos support | {
"login": "slayerjain",
"id": 12831254,
"node_id": "MDQ6VXNlcjEyODMxMjU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12831254?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/slayerjain",
"html_url": "https://github.com/slayerjain",
"followers_url": "https://api.github.com/users/slayerjain/followers",
"following_url": "https://api.github.com/users/slayerjain/following{/other_user}",
"gists_url": "https://api.github.com/users/slayerjain/gists{/gist_id}",
"starred_url": "https://api.github.com/users/slayerjain/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/slayerjain/subscriptions",
"organizations_url": "https://api.github.com/users/slayerjain/orgs",
"repos_url": "https://api.github.com/users/slayerjain/repos",
"events_url": "https://api.github.com/users/slayerjain/events{/privacy}",
"received_events_url": "https://api.github.com/users/slayerjain/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova done!"
] | 1,623,601,208,000 | 1,623,747,186,000 | 1,623,747,186,000 | CONTRIBUTOR | null | ref - https://github.com/huggingface/datasets/issues/2068 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2493/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2493",
"html_url": "https://github.com/huggingface/datasets/pull/2493",
"diff_url": "https://github.com/huggingface/datasets/pull/2493.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2493.patch",
"merged_at": 1623747186000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2492/comments | https://api.github.com/repos/huggingface/datasets/issues/2492/events | https://github.com/huggingface/datasets/pull/2492 | 919,718,102 | MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4 | 2,492 | Eduge | {
"login": "enod",
"id": 6023883,
"node_id": "MDQ6VXNlcjYwMjM4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enod",
"html_url": "https://github.com/enod",
"followers_url": "https://api.github.com/users/enod/followers",
"following_url": "https://api.github.com/users/enod/following{/other_user}",
"gists_url": "https://api.github.com/users/enod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enod/subscriptions",
"organizations_url": "https://api.github.com/users/enod/orgs",
"repos_url": "https://api.github.com/users/enod/repos",
"events_url": "https://api.github.com/users/enod/events{/privacy}",
"received_events_url": "https://api.github.com/users/enod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,561,059,000 | 1,624,355,344,000 | 1,623,840,106,000 | CONTRIBUTOR | null | Hi, awesome folks behind the huggingface!
Here is my PR for the text classification dataset in Mongolian.
Please do let me know in case you have anything to clarify.
Thanks & Regards,
Enod | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2492/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2492/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2492",
"html_url": "https://github.com/huggingface/datasets/pull/2492",
"diff_url": "https://github.com/huggingface/datasets/pull/2492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2492.patch",
"merged_at": 1623840106000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2491/comments | https://api.github.com/repos/huggingface/datasets/issues/2491/events | https://github.com/huggingface/datasets/pull/2491 | 919,714,506 | MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw | 2,491 | add eduge classification dataset | {
"login": "enod",
"id": 6023883,
"node_id": "MDQ6VXNlcjYwMjM4ODM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enod",
"html_url": "https://github.com/enod",
"followers_url": "https://api.github.com/users/enod/followers",
"following_url": "https://api.github.com/users/enod/following{/other_user}",
"gists_url": "https://api.github.com/users/enod/gists{/gist_id}",
"starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/enod/subscriptions",
"organizations_url": "https://api.github.com/users/enod/orgs",
"repos_url": "https://api.github.com/users/enod/repos",
"events_url": "https://api.github.com/users/enod/events{/privacy}",
"received_events_url": "https://api.github.com/users/enod/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Closing this PR as I'll submit a new one - bug free"
] | 1,623,559,021,000 | 1,623,560,808,000 | 1,623,560,798,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2491/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2491/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2491",
"html_url": "https://github.com/huggingface/datasets/pull/2491",
"diff_url": "https://github.com/huggingface/datasets/pull/2491.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2491.patch",
"merged_at": null
} | true |
|
https://api.github.com/repos/huggingface/datasets/issues/2490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2490/comments | https://api.github.com/repos/huggingface/datasets/issues/2490/events | https://github.com/huggingface/datasets/pull/2490 | 919,571,385 | MDExOlB1bGxSZXF1ZXN0NjY4ODc4NDA3 | 2,490 | Allow latest pyarrow version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"i need some help with this"
] | 1,623,507,454,000 | 1,625,590,492,000 | 1,623,657,203,000 | MEMBER | null | Allow the latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0.
Close #2489. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2490/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2490/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2490",
"html_url": "https://github.com/huggingface/datasets/pull/2490",
"diff_url": "https://github.com/huggingface/datasets/pull/2490.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2490.patch",
"merged_at": 1623657203000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2489/comments | https://api.github.com/repos/huggingface/datasets/issues/2489/events | https://github.com/huggingface/datasets/issues/2489 | 919,569,749 | MDU6SXNzdWU5MTk1Njk3NDk= | 2,489 | Allow latest pyarrow version once segfault bug is fixed | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,623,506,992,000 | 1,623,657,203,000 | 1,623,657,203,000 | MEMBER | null | As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568):
- it was fixed on 3 May 2021
- version 4.0.1 was released on 19 May 2021 with the bug fix | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2489/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2489/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2488/comments | https://api.github.com/repos/huggingface/datasets/issues/2488/events | https://github.com/huggingface/datasets/pull/2488 | 919,500,756 | MDExOlB1bGxSZXF1ZXN0NjY4ODIwNDA1 | 2,488 | Set configurable downloaded datasets path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [] | 1,623,488,943,000 | 1,623,662,007,000 | 1,623,659,347,000 | MEMBER | null | Part of #2480. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2488/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2488/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2488",
"html_url": "https://github.com/huggingface/datasets/pull/2488",
"diff_url": "https://github.com/huggingface/datasets/pull/2488.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2488.patch",
"merged_at": 1623659347000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2487 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2487/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2487/comments | https://api.github.com/repos/huggingface/datasets/issues/2487/events | https://github.com/huggingface/datasets/pull/2487 | 919,452,407 | MDExOlB1bGxSZXF1ZXN0NjY4Nzc5Mjk0 | 2,487 | Set configurable extracted datasets path | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"id": 6808903,
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"title": "1.9",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 12,
"state": "closed",
"created_at": 1622477586000,
"updated_at": 1626099120000,
"due_on": 1625727600000,
"closed_at": 1625809807000
} | [
"Let me push a small fix... 😉 ",
"Thanks !"
] | 1,623,476,849,000 | 1,623,663,017,000 | 1,623,661,376,000 | MEMBER | null | Part of #2480. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2487/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2487/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2487",
"html_url": "https://github.com/huggingface/datasets/pull/2487",
"diff_url": "https://github.com/huggingface/datasets/pull/2487.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2487.patch",
"merged_at": 1623661376000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2486/comments | https://api.github.com/repos/huggingface/datasets/issues/2486/events | https://github.com/huggingface/datasets/pull/2486 | 919,174,898 | MDExOlB1bGxSZXF1ZXN0NjY4NTI2Njg3 | 2,486 | Add Rico Dataset | {
"login": "ncoop57",
"id": 7613470,
"node_id": "MDQ6VXNlcjc2MTM0NzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ncoop57",
"html_url": "https://github.com/ncoop57",
"followers_url": "https://api.github.com/users/ncoop57/followers",
"following_url": "https://api.github.com/users/ncoop57/following{/other_user}",
"gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions",
"organizations_url": "https://api.github.com/users/ncoop57/orgs",
"repos_url": "https://api.github.com/users/ncoop57/repos",
"events_url": "https://api.github.com/users/ncoop57/events{/privacy}",
"received_events_url": "https://api.github.com/users/ncoop57/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"Hi ! Thanks for adding this dataset :)\r\n\r\nRegarding your questions:\r\n1. We can have them as different configuration of the `rico` dataset\r\n2. Yes please use the path to the image and not open the image directly, so that we can let users open the image one at at time during training if they want to for example. In the future we'll have an Image feature type that will decode the encoded image data on the fly when accessing the examples.\r\n3. Feel free to keep the hierarchies as strings if they don't follow a fixed format\r\n4. You can just return the path\r\n\r\n"
] | 1,623,442,661,000 | 1,631,176,166,000 | null | CONTRIBUTOR | null | Hi there!
I want to add the Rico datasets (software engineering data) to y'all's awesome library. However, as I started coding, I ran into a few hiccups, so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib.
1) There are 7 different datasets under Rico, so I was wondering: should I make a folder for each, or should I add them as different configurations of a single dataset?
You can see the datasets available for Rico here: http://interactionmining.org/rico
2) As of right now, I have a semi-working version of the first dataset, which has pairs of screenshots and hierarchies from Android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them, so I run out of memory very fast when the `datasets` lib performs its processing after downloading and extracting the dataset. Is there a way to have the `datasets` lib not put everything into memory while it is processing the dataset?
2.1) If there is not a way, would it be better to just return the paths to the screenshots instead of the actual images? (See the sketch after this list for what I mean.)
3) The hierarchies are JSON objects, and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just read them in as strings; is this okay, or should I be doing it differently?
4) One of the Rico datasets is a bunch of animations (GIFs). Is there a `datasets` feature that I can put this type of data into, or should I just return the path as a string?
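To make questions 2.1 and 3 concrete, here is a rough sketch of what I mean by returning file paths and raw JSON strings instead of decoded images. The class name, field names, and archive URL are placeholders, not the final loader:
```python
import os
import datasets


class RicoScreenshotsSketch(datasets.GeneratorBasedBuilder):
    """Hypothetical sketch for the screenshots + hierarchies subset."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "screenshot_path": datasets.Value("string"),  # path only, no pixel data in memory
                    "hierarchy": datasets.Value("string"),  # raw JSON kept as a string
                }
            )
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract("<rico-archive-url>")  # placeholder URL
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir})]

    def _generate_examples(self, data_dir):
        for idx, fname in enumerate(sorted(os.listdir(data_dir))):
            if not fname.endswith(".jpg"):
                continue
            json_path = os.path.join(data_dir, fname.replace(".jpg", ".json"))
            with open(json_path, encoding="utf-8") as f:
                hierarchy = f.read()
            yield idx, {
                "screenshot_path": os.path.join(data_dir, fname),  # image stays on disk
                "hierarchy": hierarchy,
            }
```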
I appreciate any and all help I can get with this PR; I think the Rico datasets will be an awesome addition to the library :nerd_face:! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2486/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2486/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2486",
"html_url": "https://github.com/huggingface/datasets/pull/2486",
"diff_url": "https://github.com/huggingface/datasets/pull/2486.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2486.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2485/comments | https://api.github.com/repos/huggingface/datasets/issues/2485/events | https://github.com/huggingface/datasets/issues/2485 | 919,099,218 | MDU6SXNzdWU5MTkwOTkyMTg= | 2,485 | Implement layered building | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,623,437,665,000 | 1,623,437,665,000 | null | MEMBER | null | As discussed with @stas00 and @lhoestq (see also here https://github.com/huggingface/datasets/issues/2481#issuecomment-859712190):
> My suggestion for this would be to have this enabled by default.
>
> Plus I don't know if there should be a dedicated issue for that, as it is another piece of functionality. But I propose layered building rather than all at once. That is:
>
> 1. uncompress a handful of files via a generator enough to generate one arrow file
> 2. process arrow file 1
> 3. delete all the files that went in and aren't needed anymore.
>
> rinse and repeat.
>
> 1. This way much less disk space will be required - e.g. on JZ we won't be running into the inode limitation, and it'd help with the collaborative hub training project
> 2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing
> 3. It would already include deleting temp files this issue is talking about
>
> I wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2485/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2485/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2484/comments | https://api.github.com/repos/huggingface/datasets/issues/2484/events | https://github.com/huggingface/datasets/issues/2484 | 919,092,635 | MDU6SXNzdWU5MTkwOTI2MzU= | 2,484 | Implement loading a dataset builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign"
] | 1,623,437,242,000 | 1,625,481,957,000 | 1,625,481,957,000 | MEMBER | null | As discussed with @stas00 and @lhoestq, this would allow things like:
```python
from datasets import load_dataset_builder
dataset_name = "openwebtext"
builder = load_dataset_builder(dataset_name)
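# inspect the builder (e.g. where the dataset would be cached) without downloading or preparing the data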
print(builder.cache_dir)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2484/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2484/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2483/comments | https://api.github.com/repos/huggingface/datasets/issues/2483/events | https://github.com/huggingface/datasets/pull/2483 | 918,871,712 | MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1 | 2,483 | Use gc.collect only when needed to avoid slow downs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ",
"FR"
] | 1,623,424,170,000 | 1,624,044,306,000 | 1,623,425,496,000 | MEMBER | null | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However, calling gc.collect too often causes significant slowdowns (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it is actually needed: when post-processing a dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2483/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2483/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2483",
"html_url": "https://github.com/huggingface/datasets/pull/2483",
"diff_url": "https://github.com/huggingface/datasets/pull/2483.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2483.patch",
"merged_at": 1623425495000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2482/comments | https://api.github.com/repos/huggingface/datasets/issues/2482/events | https://github.com/huggingface/datasets/pull/2482 | 918,846,027 | MDExOlB1bGxSZXF1ZXN0NjY4MjMyMzI5 | 2,482 | Allow to use tqdm>=4.50.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,623,422,961,000 | 1,623,424,311,000 | 1,623,424,310,000 | MEMBER | null | We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232))
They were due to open arrow files not being properly closed by pyarrow.
Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6, gc.collect is called whenever an arrow file is no longer needed, to make sure that the files are closed (sketched below).
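For illustration only (this is not the library's actual code; the file name is a placeholder), the idea is roughly:
```python
import gc
import os
import pyarrow as pa

source = pa.memory_map("dataset.arrow")  # open handle backed by the arrow file
# ... read from `source` ...
del source    # drop our last reference to the mapped file
gc.collect()  # force finalization so pyarrow actually closes the underlying handle
os.remove("dataset.arrow")  # on Windows this would otherwise fail with a permission error
```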
close https://github.com/huggingface/datasets/issues/2471
cc @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2482/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2482/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2482",
"html_url": "https://github.com/huggingface/datasets/pull/2482",
"diff_url": "https://github.com/huggingface/datasets/pull/2482.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2482.patch",
"merged_at": 1623424310000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2481 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2481/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2481/comments | https://api.github.com/repos/huggingface/datasets/issues/2481/events | https://github.com/huggingface/datasets/issues/2481 | 918,680,168 | MDU6SXNzdWU5MTg2ODAxNjg= | 2,481 | Delete extracted files to save disk space | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"id": 6836458,
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"title": "1.10",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 29,
"state": "closed",
"created_at": 1623178113000,
"updated_at": 1626881809000,
"due_on": 1628146800000,
"closed_at": 1626881809000
} | [
"My suggestion for this would be to have this enabled by default.\r\n\r\nPlus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n\r\n1. uncompress a handful of files via a generator enough to generate one arrow file\r\n2. process arrow file 1\r\n3. delete all the files that went in and aren't needed anymore.\r\n\r\nrinse and repeat.\r\n\r\n1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project\r\n2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing\r\n3. It would already include deleting temp files this issue is talking about\r\n\r\nI wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders."
] | 1,623,414,112,000 | 1,626,685,698,000 | 1,626,685,698,000 | MEMBER | null | As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space for the typical user. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2481/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2481/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2480/comments | https://api.github.com/repos/huggingface/datasets/issues/2480/events | https://github.com/huggingface/datasets/issues/2480 | 918,678,578 | MDU6SXNzdWU5MTg2Nzg1Nzg= | 2,480 | Set download/extracted paths configurable | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"For example to be able to send uncompressed and temp build files to another volume/partition, so that the user gets the minimal disk usage on their primary setup - and ends up with just the downloaded compressed data + arrow files, but outsourcing the huge files and building to another partition. e.g. on JZ there is a special partition for fast data, but it's also volatile, so only temp files should go there.\r\n\r\nThink of it as `TMPDIR` so we need the equivalent for `datasets`."
] | 1,623,414,024,000 | 1,623,767,029,000 | null | MEMBER | null | As discussed with @stas00 and @lhoestq, making these paths configurable may help overcome disk space limitations on different partitions/drives.
TODO:
- [x] Set configurable extracted datasets path: #2487
- [x] Set configurable downloaded datasets path: #2488
- [ ] Set configurable "incomplete" datasets path? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2480/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2480/timeline | null | null | null | false |