url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.26B) | node_id (stringlengths 18-32) | number (int64 1-4.44k) | title (stringlengths 1-276) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64 1,587B-1,654B) | updated_at (int64 1,587B-1,654B) | closed_at (int64 1,587B-1,654B, nullable) | author_association (stringclasses 3 values) | active_lock_reason (null) | body (stringlengths 0-228k, nullable) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 1 value) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3832/comments | https://api.github.com/repos/huggingface/datasets/issues/3832/events | https://github.com/huggingface/datasets/issues/3832 | 1,160,503,446 | I_kwDODunzps5FK-CW | 3,832 | Making Hugging Face the place to go for Graph NNs datasets | {
"login": "omarespejel",
"id": 4755430,
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/omarespejel",
"html_url": "https://github.com/omarespejel",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3898693527,
"node_id": "LA_kwDODunzps7oYVeX",
"url": "https://api.github.com/repos/huggingface/datasets/labels/graph",
"name": "graph",
"color": "7AFCAA",
"default": false,
"description": "Datasets for Graph Neural Networks"
}
] | open | false | null | [] | null | [
"It will be indeed really great to add support to GNN datasets. Big :+1: for this initiative.",
"@napoles-uach identifies the [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression). \r\n\r\nAdded to the Tasks in the initial issue.",
"Thanks Omar, that is a great collection!",
"Great initiative! Let's keep this issue for these 3 datasets, but moving forward maybe let's create a new issue per dataset :rocket: great work @napoles-uach and @omarespejel!"
] | 1,646,535,778,000 | 1,647,243,938,000 | null | NONE | null | Let's make Hugging Face Datasets the central hub for GNN datasets :)
**Motivation**. Datasets are currently quite scattered and an open-source central point such as the Hugging Face Hub would be ideal to support the growth of the GNN field.
What are some datasets worth integrating into the Hugging Face hub?
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Special thanks to @napoles-uach for his collaboration on identifying the first ones:
- [ ] [SNAP-Stanford OGB Datasets](https://github.com/snap-stanford/ogb).
- [ ] [SNAP-Stanford Pretrained GNNs Chemistry and Biology Datasets](https://github.com/snap-stanford/pretrain-gnns).
- [ ] [TUDatasets](https://chrsmrrs.github.io/datasets/) (A collection of benchmark datasets for graph classification and regression)
cc @osanseviero
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3832/reactions",
"total_count": 5,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3832/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3831/comments | https://api.github.com/repos/huggingface/datasets/issues/3831/events | https://github.com/huggingface/datasets/issues/3831 | 1,160,501,000 | I_kwDODunzps5FK9cI | 3,831 | when using to_tf_dataset with shuffle is true, not all completed batches are made | {
"login": "greenned",
"id": 42107709,
"node_id": "MDQ6VXNlcjQyMTA3NzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/42107709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/greenned",
"html_url": "https://github.com/greenned",
"followers_url": "https://api.github.com/users/greenned/followers",
"following_url": "https://api.github.com/users/greenned/following{/other_user}",
"gists_url": "https://api.github.com/users/greenned/gists{/gist_id}",
"starred_url": "https://api.github.com/users/greenned/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/greenned/subscriptions",
"organizations_url": "https://api.github.com/users/greenned/orgs",
"repos_url": "https://api.github.com/users/greenned/repos",
"events_url": "https://api.github.com/users/greenned/events{/privacy}",
"received_events_url": "https://api.github.com/users/greenned/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Maybe @Rocketknight1 can help here",
"Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.",
"@Rocketknight1 Oh, thank you. I didn't get **drop_remainder** Have a nice day!",
"No problem!\r\n"
] | 1,646,534,630,000 | 1,646,752,736,000 | 1,646,752,736,000 | NONE | null | ## Describe the bug
when converting a dataset to a TF dataset using `to_tf_dataset` with `shuffle=True`, the remainder rows are not emitted as a final batch
## Steps to reproduce the bug
this is the sample code below
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected results
regardless of whether shuffle is true or not, a 67-row dataset should yield 5 batches when the batch size is 16.
## Actual results
4 batches
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3831/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3830/comments | https://api.github.com/repos/huggingface/datasets/issues/3830/events | https://github.com/huggingface/datasets/issues/3830 | 1,160,181,404 | I_kwDODunzps5FJvac | 3,830 | Got error when load cnn_dailymail dataset | {
"login": "wgong0510",
"id": 78331051,
"node_id": "MDQ6VXNlcjc4MzMxMDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/78331051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wgong0510",
"html_url": "https://github.com/wgong0510",
"followers_url": "https://api.github.com/users/wgong0510/followers",
"following_url": "https://api.github.com/users/wgong0510/following{/other_user}",
"gists_url": "https://api.github.com/users/wgong0510/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wgong0510/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wgong0510/subscriptions",
"organizations_url": "https://api.github.com/users/wgong0510/orgs",
"repos_url": "https://api.github.com/users/wgong0510/repos",
"events_url": "https://api.github.com/users/wgong0510/events{/privacy}",
"received_events_url": "https://api.github.com/users/wgong0510/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1 import datasets\r\n 2 \r\n----> 3 train_data = datasets.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)\r\n 1705 ignore_verifications=ignore_verifications,\r\n 1706 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1707 use_auth_token=use_auth_token,\r\n 1708 )\r\n 1709 \r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 593 if not downloaded_from_gcs:\r\n 594 self._download_and_prepare(\r\n--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 596 )\r\n 597 # Sync info\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 659 split_dict = SplitDict(dataset_name=self.name)\r\n 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 662 \r\n 663 # Checksums verification\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _split_generators(self, dl_manager)\r\n 253 def _split_generators(self, dl_manager):\r\n 254 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 256 # Generate shared vocabulary\r\n 257 \r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _subset_filenames(dl_paths, split)\r\n 154 else:\r\n 155 logger.fatal(\"Unsupported split: %s\", split)\r\n--> 156 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 157 dm = _find_files(dl_paths, \"dm\", urls)\r\n 158 return cnn + dm\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _find_files(dl_paths, publisher, url_dict)\r\n 133 else:\r\n 134 logger.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 135 files = sorted(os.listdir(top_dir))\r\n 136 \r\n 137 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n```",
"Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation. \r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today (indeed, we were planning to do it last Friday).\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nCC: @lhoestq "
] | 1,646,444,592,000 | 1,646,636,021,000 | 1,646,636,021,000 | NONE | null | When using datasets.load_dataset method to load cnn_dailymail dataset, got error as below:
- windows os: FileNotFoundError: [WinError 3] The system cannot find the path specified.: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The code is to load dataset:
windows os:
```
from datasets import load_dataset
dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data")
```
google colab:
```
import datasets
train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3830/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3829/comments | https://api.github.com/repos/huggingface/datasets/issues/3829/events | https://github.com/huggingface/datasets/issues/3829 | 1,160,154,352 | I_kwDODunzps5FJozw | 3,829 | [📄 Docs] Create a `datasets` performance guide. | {
"login": "dynamicwebpaige",
"id": 3712347,
"node_id": "MDQ6VXNlcjM3MTIzNDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/3712347?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dynamicwebpaige",
"html_url": "https://github.com/dynamicwebpaige",
"followers_url": "https://api.github.com/users/dynamicwebpaige/followers",
"following_url": "https://api.github.com/users/dynamicwebpaige/following{/other_user}",
"gists_url": "https://api.github.com/users/dynamicwebpaige/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dynamicwebpaige/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dynamicwebpaige/subscriptions",
"organizations_url": "https://api.github.com/users/dynamicwebpaige/orgs",
"repos_url": "https://api.github.com/users/dynamicwebpaige/repos",
"events_url": "https://api.github.com/users/dynamicwebpaige/events{/privacy}",
"received_events_url": "https://api.github.com/users/dynamicwebpaige/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi ! Yes this is definitely something we'll explore, since optimizing processing pipelines can be challenging and because performance is key here: we want anyone to be able to play with large-scale datasets more easily.\r\n\r\nI think we'll start by documenting the performance of the dataset transforms we provide, and then we can have some tools to help debugging/optimizing them"
] | 1,646,440,086,000 | 1,646,929,467,000 | null | NONE | null | ## Brief Overview
Downloading, saving, and preprocessing large datasets from the `datasets` library can often result in [performance bottlenecks](https://github.com/huggingface/datasets/issues/3735). These performance snags can be challenging to identify and to debug, especially for users who are less experienced with building deep learning experiments.
## Feature Request
Could we create a performance guide for using `datasets`, similar to:
* [Better performance with the `tf.data` API](https://github.com/huggingface/datasets/issues/3735)
* [Analyze `tf.data` performance with the TF Profiler](https://www.tensorflow.org/guide/data_performance_analysis)
This performance guide should detail practical options for improving performance with `datasets`, and enumerate any common best practices. It should also show how to use tools like the PyTorch Profiler or the TF Profiler to identify any performance bottlenecks (example below).
![image](https://user-images.githubusercontent.com/3712347/156859152-a3cb9565-3ec6-4d39-8e77-56d0a75a4954.png)
## Related Issues
* [wiki_dpr pre-processing performance #1670](https://github.com/huggingface/datasets/issues/1670)
* [Adjusting chunk size for streaming datasets #3499](https://github.com/huggingface/datasets/issues/3499)
* [how large datasets are handled under the hood #1004](https://github.com/huggingface/datasets/issues/1004)
* [using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? #1830](https://github.com/huggingface/datasets/issues/1830)
* [Best way to batch a large dataset? #315](https://github.com/huggingface/datasets/issues/315)
* [Saving processed dataset running infinitely #1911](https://github.com/huggingface/datasets/issues/1911) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3829/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3829/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3828/comments | https://api.github.com/repos/huggingface/datasets/issues/3828/events | https://github.com/huggingface/datasets/issues/3828 | 1,160,064,029 | I_kwDODunzps5FJSwd | 3,828 | The Pile's _FEATURE spec seems to be incorrect | {
"login": "dlwh",
"id": 9633,
"node_id": "MDQ6VXNlcjk2MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dlwh",
"html_url": "https://github.com/dlwh",
"followers_url": "https://api.github.com/users/dlwh/followers",
"following_url": "https://api.github.com/users/dlwh/following{/other_user}",
"gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dlwh/subscriptions",
"organizations_url": "https://api.github.com/users/dlwh/orgs",
"repos_url": "https://api.github.com/users/dlwh/repos",
"events_url": "https://api.github.com/users/dlwh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dlwh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @dlwh, thanks for reporting.\r\n\r\nPlease note, that the source data files for \"all\" config are different from the other configurations.\r\n\r\nThe \"all\" config contains the official Pile data files, from https://mystic.the-eye.eu/public/AI/pile/\r\nAll data examples contain a \"meta\" dict with a single \"pile_set_name\" key:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"all\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\nDownloading builder script: 9.09kB [00:00, 4.42MB/s]\r\n\r\nIn [3]: item[\"meta\"]\r\nOut[3]: {'pile_set_name': 'Pile-CC'}\r\n```\r\n\r\nOn the other hand, all the other subset configs data files come from the Pile preliminary components directory: https://mystic.the-eye.eu/public/AI/pile_preliminary_components/\r\nFor theses components, the \"meta\" field may have different keys depending on the subset: \"id\", \"language\", \"pmid\",... Because of that, if we had kept the `dict` data format for the \"meta\" field, we would have an error when trying to concatenate different subsets, whose \"meta\" keys are not identical. In order to avoid that, the \"meta\" field is cast to `str` in all these cases, so that there is no incompatibility in their \"meta\" data type when concatenating.\r\n\r\nYou can check, for example, that for \"pubmed_central\" the \"meta\" field is cast to `str`:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ds = load_dataset(\"the_pile\", \"pubmed_central\", split=\"train\", streaming=True)\r\n item = next(iter(ds))\r\n\r\nIn [5]: item[\"meta\"]\r\nOut[5]: \"{'id': 'PMC6071596'}\"\r\n```\r\n\r\nFeel free to reopen this issue if you have further questions. "
] | 1,646,429,132,000 | 1,646,731,849,000 | 1,646,731,848,000 | NONE | null | ## Describe the bug
If you look at https://huggingface.co/datasets/the_pile/blob/main/the_pile.py:
For "all"
* the pile_set_name is never set for data
* there's actually an id field inside of "meta"
For subcorpora pubmed_central and hacker_news:
* the meta is specified to be a string, but it's actually a dict with an id field inside.
## Steps to reproduce the bug
## Expected results
Feature spec should match the data I'd think?
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3828/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3827/comments | https://api.github.com/repos/huggingface/datasets/issues/3827/events | https://github.com/huggingface/datasets/pull/3827 | 1,159,878,436 | PR_kwDODunzps4z95dj | 3,827 | Remove deprecated `remove_columns` param in `filter` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3827). All of your documentation changes will be reflected on that endpoint."
] | 1,646,414,606,000 | 1,646,656,672,000 | 1,646,656,671,000 | CONTRIBUTOR | null | A leftover from #3803. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3827/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3827",
"html_url": "https://github.com/huggingface/datasets/pull/3827",
"diff_url": "https://github.com/huggingface/datasets/pull/3827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3827.patch",
"merged_at": 1646656671000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3826/comments | https://api.github.com/repos/huggingface/datasets/issues/3826/events | https://github.com/huggingface/datasets/pull/3826 | 1,159,851,110 | PR_kwDODunzps4z90JU | 3,826 | Add IterableDataset.filter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3826). All of your documentation changes will be reflected on that endpoint.",
"Indeed ! If `batch_size` is `None` or `<=0` then the full dataset should be passed. It's been mentioned in the docs for a while but never actually implemented. We can fix that later"
] | 1,646,413,043,000 | 1,646,846,593,000 | 1,646,846,591,000 | MEMBER | null | _Needs https://github.com/huggingface/datasets/pull/3801 to be merged first_
I added `IterableDataset.filter` with an API that is a subset of `Dataset.filter`:
```python
def filter(self, function, batched=False, batch_size=1000, with_indices=False, input_columns=None):
```
TODO:
- [x] tests
- [x] docs
related to https://github.com/huggingface/datasets/issues/3444 and https://github.com/huggingface/datasets/issues/3753 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3826/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3826",
"html_url": "https://github.com/huggingface/datasets/pull/3826",
"diff_url": "https://github.com/huggingface/datasets/pull/3826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3826.patch",
"merged_at": 1646846591000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3825/comments | https://api.github.com/repos/huggingface/datasets/issues/3825/events | https://github.com/huggingface/datasets/pull/3825 | 1,159,802,345 | PR_kwDODunzps4z9p4b | 3,825 | Update version and date in Wikipedia dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3825). All of your documentation changes will be reflected on that endpoint."
] | 1,646,409,927,000 | 1,646,414,677,000 | 1,646,414,676,000 | MEMBER | null | CC: @geohci | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3825/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3825",
"html_url": "https://github.com/huggingface/datasets/pull/3825",
"diff_url": "https://github.com/huggingface/datasets/pull/3825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3825.patch",
"merged_at": 1646414676000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3824/comments | https://api.github.com/repos/huggingface/datasets/issues/3824/events | https://github.com/huggingface/datasets/pull/3824 | 1,159,574,186 | PR_kwDODunzps4z85SO | 3,824 | Allow not specifying feature cols other than `predictions`/`references` in `Metric.compute` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3824). All of your documentation changes will be reflected on that endpoint."
] | 1,646,395,480,000 | 1,646,417,062,000 | 1,646,417,061,000 | CONTRIBUTOR | null | Fix #3818 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3824/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3824",
"html_url": "https://github.com/huggingface/datasets/pull/3824",
"diff_url": "https://github.com/huggingface/datasets/pull/3824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3824.patch",
"merged_at": 1646417061000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3823/comments | https://api.github.com/repos/huggingface/datasets/issues/3823/events | https://github.com/huggingface/datasets/issues/3823 | 1,159,497,844 | I_kwDODunzps5FHIh0 | 3,823 | 500 internal server error when trying to open a dataset composed of Zarr stores | {
"login": "jacobbieker",
"id": 7170359,
"node_id": "MDQ6VXNlcjcxNzAzNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7170359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jacobbieker",
"html_url": "https://github.com/jacobbieker",
"followers_url": "https://api.github.com/users/jacobbieker/followers",
"following_url": "https://api.github.com/users/jacobbieker/following{/other_user}",
"gists_url": "https://api.github.com/users/jacobbieker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jacobbieker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jacobbieker/subscriptions",
"organizations_url": "https://api.github.com/users/jacobbieker/orgs",
"repos_url": "https://api.github.com/users/jacobbieker/repos",
"events_url": "https://api.github.com/users/jacobbieker/events{/privacy}",
"received_events_url": "https://api.github.com/users/jacobbieker/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @jacobbieker, thanks for reporting!\r\n\r\nI have transferred this issue to our Hub team and they are investigating it. I keep you informed. ",
"Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here are the results of running https://github.com/github/git-sizer on it:\r\n\r\n```\r\nProcessing blobs: 147448 \r\nProcessing trees: 27 \r\nProcessing commits: 4 \r\nMatching commits to trees: 4 \r\nProcessing annotated tags: 0 \r\nProcessing references: 3 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest objects | | |\r\n| * Trees | | |\r\n| * Maximum entries [1] | 167 k | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |\r\n| | | |\r\n| Biggest checkouts | | |\r\n| * Number of files [2] | 189 k | *** |\r\n\r\n[1] aa057d2667c34c70c6146efc631f5c9917ff326e (refs/heads/main:2016.zarr/unknown)\r\n[2] 6897b7bf6440fdd16b2c39d08085a669e7eaa59d (refs/heads/main^{tree})\r\n```\r\n\r\nYou can check https://github.com/github/git-sizer for more information on how to avoid such pathological structures.",
"Hi, thanks for getting back to me so quick! And yeah, I figured that was probably the problem. I was going to try to delete the repo, but couldn't through the website, so if that's the easiest way to solve it, I can regenerate the dataset in a different format with less tiny files, and you guys can delete the repo as it is. Zarr just saves everything as lots of small files to make chunks easy to load, which is why I was preferring that format, but maybne that just doesn't work well for HF datasets.",
"Hi @jacobbieker,\r\n\r\nFor future use cases, our Hub team is still pondering whether to limit the maximum number of files per repo to avoid technical issues...\r\n\r\nOn the meantime, they have made a fix and your dataset is working: https://huggingface.co/datasets/openclimatefix/mrms"
] | 1,646,390,234,000 | 1,646,732,859,000 | 1,646,732,859,000 | NONE | null | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I did push two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there recentlyish. The Zarr stores are composed of lots of small files, which I am guessing is probably the problem, as we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) using xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine.
In general, we were hoping to use HF datasets to release some more public geospatial datasets as benchmarks, which are commonly stored as Zarr stores as they can be compressed well and deal with the multi-dimensional data and coordinates fairly easily compared to other formats, but with this error, I'm assuming we should try a different format?
For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/mrms")
```
## Expected results
The dataset should be downloaded or open up
## Actual results
A 500 internal server error
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35
- Python version: 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3823/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3822 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3822/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3822/comments | https://api.github.com/repos/huggingface/datasets/issues/3822/events | https://github.com/huggingface/datasets/issues/3822 | 1,159,395,728 | I_kwDODunzps5FGvmQ | 3,822 | Add Biwi Kinect Head Pose Database | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Official dataset location : https://icu.ee.ethz.ch/research/datsets.html\r\nIn the \"Biwi Kinect Head Pose Database\" section, I do not find any information regarding \"Downloading the dataset.\" . Do we mail the authors regarding this ?\r\n\r\nI found the dataset on Kaggle : [Link](https://www.kaggle.com/kmader/biwi-kinect-head-pose-database) , but since 🤗 does not host any of the datasets, this would require the user to provide their Kaggle username and API key to download. \r\n\r\nAny inputs on how we could proceed ? Thank you.\r\n[ Need your inputs here, @lhoestq or @mariosasko ]",
"Hi @dnaveenr! Thanks for tackling this issue. This link should work: https://data.vision.ee.ethz.ch/cvl/gfanelli/kinect_head_pose_db.tgz",
"#self-assign",
"Added in https://github.com/huggingface/datasets/pull/3903, thanks @dnaveenr !"
] | 1,646,383,719,000 | 1,654,088,447,000 | 1,654,088,447,000 | MEMBER | null | ## Adding a Dataset
- **Name:** Biwi Kinect Head Pose Database
- **Description:** Over 15K images of 20 people recorded with a Kinect while turning their heads around freely. For each frame, depth and rgb images are provided, together with ground in the form of the 3D location of the head and its rotation angles.
- **Data:** [*link to the Github repository or current dataset location*](https://icu.ee.ethz.ch/research/datsets.html)
- **Motivation:** Useful pose estimation dataset
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3822/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3821/comments | https://api.github.com/repos/huggingface/datasets/issues/3821/events | https://github.com/huggingface/datasets/pull/3821 | 1,159,371,927 | PR_kwDODunzps4z8O5J | 3,821 | Update Wikipedia dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm starting to generate the pre-processed data for some of the languages (for backward compatibility).\r\n\r\nOnce this merged, we will create the pre-processed data on the Hub under the Wikimedia namespace.",
"All steps have been properly done.\r\n\r\nI'm merging all these commits into master."
] | 1,646,381,961,000 | 1,647,866,123,000 | 1,647,865,860,000 | MEMBER | null | This PR combines all updates to Wikipedia dataset.
Once approved, this will be used to generate the pre-processed Wikipedia datasets.
Finally, this PR will be able to be merged into master:
- NOT using squash
- BUT a regular MERGE (or REBASE+MERGE), so that all commits are preserved
TODO:
- [x] #3435
- [x] #3789
- [x] #3825
- [x] Run to get the pre-processed data for big languages (backward compatibility)
- [x] #3958
CC: @geohci | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3821/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3821",
"html_url": "https://github.com/huggingface/datasets/pull/3821",
"diff_url": "https://github.com/huggingface/datasets/pull/3821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3821.patch",
"merged_at": 1647865860000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3820 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3820/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3820/comments | https://api.github.com/repos/huggingface/datasets/issues/3820/events | https://github.com/huggingface/datasets/issues/3820 | 1,159,106,603 | I_kwDODunzps5FFpAr | 3,820 | `pubmed_qa` checksum mismatch | {
"login": "jon-tow",
"id": 41410219,
"node_id": "MDQ6VXNlcjQxNDEwMjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/41410219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jon-tow",
"html_url": "https://github.com/jon-tow",
"followers_url": "https://api.github.com/users/jon-tow/followers",
"following_url": "https://api.github.com/users/jon-tow/following{/other_user}",
"gists_url": "https://api.github.com/users/jon-tow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jon-tow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jon-tow/subscriptions",
"organizations_url": "https://api.github.com/users/jon-tow/orgs",
"repos_url": "https://api.github.com/users/jon-tow/repos",
"events_url": "https://api.github.com/users/jon-tow/events{/privacy}",
"received_events_url": "https://api.github.com/users/jon-tow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Hi @jon-tow, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today.\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,646,353,688,000 | 1,646,386,952,000 | 1,646,386,952,000 | CONTRIBUTOR | null | ## Describe the bug
Loading [`pubmed_qa`](https://huggingface.co/datasets/pubmed_qa) results in a mismatched checksum error.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
try:
datasets.load_dataset("pubmed_qa", "pqa_labeled")
except Exception as e:
print(e)
try:
datasets.load_dataset("pubmed_qa", "pqa_unlabeled")
except Exception as e:
print(e)
try:
datasets.load_dataset("pubmed_qa", "pqa_artificial")
except Exception as e:
print(e)
```
## Expected results
Successful download.
## Actual results
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/usr/local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ', 'https://drive.google.com/uc?export=download&id=15v1x6aQDlZymaHGP7cZJZZYFfeJt2NdS']
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: macOS
- Python version: 3.8.1
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3820/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3819/comments | https://api.github.com/repos/huggingface/datasets/issues/3819/events | https://github.com/huggingface/datasets/pull/3819 | 1,158,848,288 | PR_kwDODunzps4z6fvn | 3,819 | Fix typo in doc build yml | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3819). All of your documentation changes will be reflected on that endpoint."
] | 1,646,338,124,000 | 1,646,399,261,000 | 1,646,399,261,000 | CONTRIBUTOR | null | cc: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3819/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3819",
"html_url": "https://github.com/huggingface/datasets/pull/3819",
"diff_url": "https://github.com/huggingface/datasets/pull/3819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3819.patch",
"merged_at": 1646399261000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3818/comments | https://api.github.com/repos/huggingface/datasets/issues/3818/events | https://github.com/huggingface/datasets/issues/3818 | 1,158,788,545 | I_kwDODunzps5FEbXB | 3,818 | Support for "sources" parameter in the add() and add_batch() methods in datasets.metric - SARI | {
"login": "lmvasque",
"id": 6901031,
"node_id": "MDQ6VXNlcjY5MDEwMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6901031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lmvasque",
"html_url": "https://github.com/lmvasque",
"followers_url": "https://api.github.com/users/lmvasque/followers",
"following_url": "https://api.github.com/users/lmvasque/following{/other_user}",
"gists_url": "https://api.github.com/users/lmvasque/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lmvasque/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lmvasque/subscriptions",
"organizations_url": "https://api.github.com/users/lmvasque/orgs",
"repos_url": "https://api.github.com/users/lmvasque/repos",
"events_url": "https://api.github.com/users/lmvasque/events{/privacy}",
"received_events_url": "https://api.github.com/users/lmvasque/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi, thanks for reporting! We can add a `sources: datasets.Value(\"string\")` feature to the `Features` dict in the `SARI` script to fix this. Would you be interested in submitting a PR?",
"Hi Mario,\r\n\r\nThanks for your message. I did try to add `sources` into the `Features` dict using a script for the metric:\r\n```\r\n features=datasets.Features(\r\n {\r\n \"sources\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"predictions\": datasets.Value(\"string\", id=\"sequence\"),\r\n \"references\": datasets.Sequence(datasets.Value(\"string\", id=\"sequence\"), id=\"references\"),\r\n }\r\n ),\r\n```\r\n\r\nBut that only avoids a failure in `encode_batch` in the `add_batch` method:\r\n```\r\n batch = {\"predictions\": predictions, \"references\": references}\r\n batch = self.info.features.encode_batch(batch)\r\n```\r\n\r\nThe real problem is that `add_batch()`, `add()` and `compute()` does not receive a `sources` param:\r\n```\r\ndef add_batch(self, *, predictions=None, references=None):\r\ndef add(self, *, prediction=None, reference=None):\r\ndef compute(self, *, predictions=None, references=None, **kwargs)\r\n```\r\n\r\nAnd then, it fails:\r\n`TypeError: add_batch() got an unexpected keyword argument sources`\r\n\r\nI need this for adding any metric based on SARI or alike, not only for sari.py :)\r\n\r\nLet me know if I understood correctly the proposed solution.\r\n",
"The `Metric` class has been modified recently to support this use-case, but the `add_batch` + `compute` pattern still doesn't work correctly. I'll open a PR."
] | 1,646,333,874,000 | 1,646,417,061,000 | 1,646,417,061,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
The methods `add_batch` and `add` from the `Metric` [class](https://github.com/huggingface/datasets/blob/1675ad6a958435b675a849eafa8a7f10fe0f43bc/src/datasets/metric.py) do not work with the [SARI](https://github.com/huggingface/datasets/blob/master/metrics/sari/sari.py) metric. This metric relies not only on the predictions and references, but also on the input.
For example, when the `add_batch` method is used, then the `compute()` method fails:
```
metric = load_metric("sari")
metric.add_batch(
predictions=["About 95 you now get in ."],
references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
metric.compute()
> TypeError: _compute() missing 1 required positional argument: 'sources'
```
Therefore, the `compute()` method can only be used standalone:
```
metric = load_metric("sari")
result = metric.compute(
sources=["About 95 species are currently accepted ."],
predictions=["About 95 you now get in ."],
references=[["About 95 species are currently known .","About 95 species are now accepted .","95 species are now accepted ."]])
> {'sari': 26.953601953601954}
```
**Describe the solution you'd like**
Support for an additional parameter `sources` in the `add_batch` and `add` methods of the `Metric` class.
```
add_batch(*, sources=None, predictions=None, references=None, **kwargs)
add(*, sources=None, predictions=None, references=None, **kwargs)
compute()
```
**Describe alternatives you've considered**
I've tried to override `add_batch` and `add`; however, these are highly dependent on the `Metric` class. We could also write a simple function that computes the scores for a list of sentences, but then we lose the functionality of the original [add](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add) and [add_batch](https://huggingface.co/docs/datasets/_modules/datasets/metric.html#Metric.add_batch) methods.
**Additional context**
These methods are used in the transformers [pytorch examples](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3818/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3818/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3817/comments | https://api.github.com/repos/huggingface/datasets/issues/3817/events | https://github.com/huggingface/datasets/pull/3817 | 1,158,592,335 | PR_kwDODunzps4z5pQ7 | 3,817 | Simplify Common Voice code | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the script looks pretty clean and readable now! cool!\r\n"
] | 1,646,323,281,000 | 1,646,405,508,000 | 1,646,397,563,000 | MEMBER | null | In #3736 we introduced one method to generate examples when streaming, that is different from the one when not streaming.
In this PR I propose a new implementation which is simpler: it only has one function, based on `iter_archive`. And you still have access to local audio files when loading the dataset in non-streaming mode.
cc @patrickvonplaten @polinaeterna @anton-l @albertvillanova since this will become the template for many audio datasets to come.
This change can also trivially be applied to the other audio datasets that already exist. Using this line, you can get access to local files in non-streaming mode:
```python
local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else None
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3817/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3817/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3817",
"html_url": "https://github.com/huggingface/datasets/pull/3817",
"diff_url": "https://github.com/huggingface/datasets/pull/3817.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3817.patch",
"merged_at": 1646397563000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3816/comments | https://api.github.com/repos/huggingface/datasets/issues/3816/events | https://github.com/huggingface/datasets/pull/3816 | 1,158,589,913 | PR_kwDODunzps4z5owP | 3,816 | Doc new UI test workflows2 | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"<img src=\"https://www.bikevillastravel.com/cms/static/images/loading.gif\" alt=\"Girl in a jacket\" width=\"50\" >"
] | 1,646,323,154,000 | 1,646,325,735,000 | 1,646,325,735,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3816/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3816/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3816",
"html_url": "https://github.com/huggingface/datasets/pull/3816",
"diff_url": "https://github.com/huggingface/datasets/pull/3816.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3816.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3815/comments | https://api.github.com/repos/huggingface/datasets/issues/3815/events | https://github.com/huggingface/datasets/pull/3815 | 1,158,589,512 | PR_kwDODunzps4z5oq- | 3,815 | Fix iter_archive getting reset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,323,132,000 | 1,646,330,797,000 | 1,646,330,773,000 | MEMBER | null | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** after you iterate over it once. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator.
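Below is a minimal, illustrative sketch (not the actual `download_manager.py` code) of the difference: a generator is exhausted after the first pass, while an object whose `__iter__` builds a fresh iterator each time can be consumed by several splits.
```python
# Illustrative sketch only; names and values are made up for the example.
def one_time_iterator(items):
    for item in items:
        yield item

class Reiterable:
    """Each call to __iter__ starts a fresh pass over the underlying items."""
    def __init__(self, items):
        self.items = items

    def __iter__(self):
        yield from self.items

gen = one_time_iterator([1, 2, 3])
print(list(gen), list(gen))      # [1, 2, 3] [] -> the second pass is empty
re_it = Reiterable([1, 2, 3])
print(list(re_it), list(re_it))  # [1, 2, 3] [1, 2, 3] -> usable for several splits
```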
The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired from the one in `streaming_download_manager.py` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3815/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3815",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"merged_at": 1646330773000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3814/comments | https://api.github.com/repos/huggingface/datasets/issues/3814/events | https://github.com/huggingface/datasets/pull/3814 | 1,158,518,995 | PR_kwDODunzps4z5Zk4 | 3,814 | Handle Nones in PyArrow struct | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Looks like I added my comments while you were editing - sorry about that"
] | 1,646,319,815,000 | 1,646,325,464,000 | 1,646,325,463,000 | CONTRIBUTOR | null | This PR fixes an issue introduced by #3575 where `None` values stored in PyArrow arrays/structs would get ignored by `cast_storage` or by the `pa.array(cast_to_python_objects(..))` pattern. To fix the former, it also bumps the minimum PyArrow version to v5.0.0 to use the `mask` param in `pa.StructArray`.
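As a rough illustration of the `mask` parameter (the field names and values below are made up, not taken from the PR):
```python
# Hypothetical example of pa.StructArray.from_arrays with a null mask
# (requires pyarrow>=5.0.0); the columns are invented for illustration.
import pyarrow as pa

answers = pa.array(["yes", None, "no"])
scores = pa.array([0.9, 0.0, 0.7])
mask = pa.array([False, True, False])  # mark the second struct as null
structs = pa.StructArray.from_arrays(
    [answers, scores], names=["answer", "score"], mask=mask
)
print(structs)
```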
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3814/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3814/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3814",
"html_url": "https://github.com/huggingface/datasets/pull/3814",
"diff_url": "https://github.com/huggingface/datasets/pull/3814.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3814.patch",
"merged_at": 1646325463000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3813/comments | https://api.github.com/repos/huggingface/datasets/issues/3813/events | https://github.com/huggingface/datasets/issues/3813 | 1,158,474,859 | I_kwDODunzps5FDOxr | 3,813 | Add MetaShift dataset | {
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 3608941089,
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision",
"name": "vision",
"color": "bfdadc",
"default": false,
"description": "Vision datasets"
}
] | closed | false | {
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "dnaveenr",
"id": 17746528,
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dnaveenr",
"html_url": "https://github.com/dnaveenr",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.",
"#self-assign",
"I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_metashift_dataset/datasets/metashift/metashift.py)\r\n1. The dataset does not have a typical - train/test/val split. What do we do for the _split_generators() function ? How do we go about this ?\r\n2. This dataset builds on the Visual Genome dataset, using a metadata file. The dataset is generated using generate_full_MetaShift.py script. By default, the authors choose to generate the dataset only for a SELECTED_CLASSES. The following script is used : \r\nCode : https://github.com/Weixin-Liang/MetaShift/blob/main/dataset/generate_full_MetaShift.py \r\nInfo : https://metashift.readthedocs.io/en/latest/sub_pages/download_MetaShift.html#generate-the-full-metashift-dataset\r\nCan I just copy over the required functions into the metashift.py to generate the dataset ?\r\n3. How do we complete the _generate_examples for this dataset ?\r\n\r\nThe user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nInputs, suggestions would be helpful. Thank you.",
"I think @mariosasko and @lhoestq should be able to help here 😄 ",
"Hi ! Thanks for adding this dataset :) Let me answer your questions:\r\n\r\n1. in this case you can put everything in the \"train\" split\r\n2. Yes you can copy the script (provided you also include the MIT license of the code in the file header for example). Though we ideally try to not create new directories nor files when generating dataset, so if possible this script should be adapted to not create the file structure they mentioned, but instead yield the images one by one in `_generate_examples`. Let me know if you think this is feasible\r\n3. see point 2 haha\r\n\r\n> The user has the ability to use default selected classes, get the complete dataset or add more specific additional classes. I think config would be a good option here.\r\n\r\nYup ! We can also define a `selected_classes` parameter such that users can do\r\n```python\r\nload_dataset(\"metashift\", selected_classes=[\"cat\", \"dog\", ...])\r\n```",
"Great. This is helpful. Thanks @lhoestq .\r\nRegarding Point 2, I'll try using yield instead of creating the directories and see if its feasible. selected_classes config sounds good.",
"Closed via #3900 "
] | 1,646,317,605,000 | 1,649,597,999,000 | 1,649,597,999,000 | MEMBER | null | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** a collection of 12,868 sets of natural images across 410 classes
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3813/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3812/comments | https://api.github.com/repos/huggingface/datasets/issues/3812/events | https://github.com/huggingface/datasets/pull/3812 | 1,158,369,995 | PR_kwDODunzps4z46C4 | 3,812 | benchmark streaming speed with tar vs zip archives | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm closing the PR since we're not going to merge it"
] | 1,646,311,721,000 | 1,646,319,334,000 | 1,646,319,333,000 | CONTRIBUTOR | null | # do not merge
## Hypothesis
Packing data into a single zip archive could let us avoid splitting data into several tar archives for efficient streaming, which is annoying (since data creators usually host the data in a single tar).
## Data
I host it [here](https://huggingface.co/datasets/polinaeterna/benchmark_dataset/)
## I checked three configurations:
1. All data in one zip archive, streaming only those files that exist in the split metadata file (we can access them directly with no need to iterate over the full archive), see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR196)
2. All data in three splits, the standard way to make streaming efficient, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR174)
3. All data in single tar, iterate over the full archive and take only files existing in split metadata file, see [this func](https://github.com/huggingface/datasets/compare/master...polinaeterna:benchmark-tar-zip?expand=1#diff-4f5200d4586aec5b2a89fcf34441c5f92156f9e9d408acc7e50666f9a1921ddcR150)
## Results
1. one zip
![image](https://user-images.githubusercontent.com/16348744/156567611-e3652087-7147-4cf0-9047-9cbc00ec71f5.png)
2. three tars
![image](https://user-images.githubusercontent.com/16348744/156567688-2a462107-f83e-4722-8ea3-71a13b56c998.png)
3. one tar
![image](https://user-images.githubusercontent.com/16348744/156567772-1bceb5f7-e7d9-4fa3-b31b-17fec5f9a5a7.png)
I didn't check on the full data as it's time-consuming, but it's pretty obvious that the one-zip approach is not a good idea. Here it's even worse than full iteration over a tar containing all three splits (but that would depend on the case).
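For reference, a rough sketch of the kind of timing loop that can reproduce a similar measurement with the streaming API (the dataset name, split and example count below are placeholders, not the exact code behind the screenshots):
```python
# Rough, hypothetical timing sketch; adjust the dataset name/split/N as needed.
import time
from datasets import load_dataset

streamed = load_dataset("polinaeterna/benchmark_dataset", split="train", streaming=True)
start = time.time()
count = 0
for count, _example in enumerate(streamed, start=1):
    if count == 1000:
        break
print(f"streamed {count} examples in {time.time() - start:.1f}s")
```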
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3812/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3812/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3812",
"html_url": "https://github.com/huggingface/datasets/pull/3812",
"diff_url": "https://github.com/huggingface/datasets/pull/3812.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3812.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3811/comments | https://api.github.com/repos/huggingface/datasets/issues/3811/events | https://github.com/huggingface/datasets/pull/3811 | 1,158,234,407 | PR_kwDODunzps4z4dHS | 3,811 | Update dev doc gh workflows | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,303,341,000 | 1,646,304,354,000 | 1,646,304,354,000 | CONTRIBUTOR | null | Reflect changes from https://github.com/huggingface/transformers/pull/15891 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3811/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3811/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3811",
"html_url": "https://github.com/huggingface/datasets/pull/3811",
"diff_url": "https://github.com/huggingface/datasets/pull/3811.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3811.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3810/comments | https://api.github.com/repos/huggingface/datasets/issues/3810/events | https://github.com/huggingface/datasets/pull/3810 | 1,158,202,093 | PR_kwDODunzps4z4WUW | 3,810 | Update version of xcopa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,301,505,000 | 1,646,304,270,000 | 1,646,304,269,000 | MEMBER | null | Note that there was a version update of the `xcopa` dataset: https://github.com/cambridgeltl/xcopa/releases
We updated our loading script, but we did not bump a new version number:
- #3254
This PR updates our loading script version from `1.0.0` to `1.1.0`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3810/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3810/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3810",
"html_url": "https://github.com/huggingface/datasets/pull/3810",
"diff_url": "https://github.com/huggingface/datasets/pull/3810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3810.patch",
"merged_at": 1646304269000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3809/comments | https://api.github.com/repos/huggingface/datasets/issues/3809/events | https://github.com/huggingface/datasets/issues/3809 | 1,158,143,480 | I_kwDODunzps5FB934 | 3,809 | Checksums didn't match for datasets on Google Drive | {
"login": "muelletm",
"id": 11507045,
"node_id": "MDQ6VXNlcjExNTA3MDQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11507045?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muelletm",
"html_url": "https://github.com/muelletm",
"followers_url": "https://api.github.com/users/muelletm/followers",
"following_url": "https://api.github.com/users/muelletm/following{/other_user}",
"gists_url": "https://api.github.com/users/muelletm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muelletm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muelletm/subscriptions",
"organizations_url": "https://api.github.com/users/muelletm/orgs",
"repos_url": "https://api.github.com/users/muelletm/repos",
"events_url": "https://api.github.com/users/muelletm/events{/privacy}",
"received_events_url": "https://api.github.com/users/muelletm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @muelletm, thanks for reporting.\r\n\r\nThis issue was already reported and its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nUntil our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,646,298,070,000 | 1,646,299,498,000 | 1,646,299,445,000 | NONE | null | ## Describe the bug
Datasets hosted on Google Drive do not seem to work right now.
Loading them fails with a checksum error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
for dataset in ["head_qa", "yelp_review_full"]:
try:
load_dataset(dataset)
except Exception as exception:
print("Error", dataset, exception)
```
Here is a [colab](https://colab.research.google.com/drive/1wOtHBmL8I65NmUYakzPV5zhVCtHhi7uQ#scrollTo=cDzdCLlk-Bo4).
## Expected results
The datasets should be loaded.
## Actual results
```
Downloading and preparing dataset head_qa/es (download: 75.69 MiB, generated: 2.86 MiB, post-processed: Unknown size, total: 78.55 MiB) to /root/.cache/huggingface/datasets/head_qa/es/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Error head_qa Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
Downloading and preparing dataset yelp_review_full/yelp_review_full (download: 187.06 MiB, generated: 496.94 MiB, post-processed: Unknown size, total: 684.00 MiB) to /root/.cache/huggingface/datasets/yelp_review_full/yelp_review_full/1.0.0/13c31a618ba62568ec8572a222a283dfc29a6517776a3ac5945fb508877dde43...
Error yelp_review_full Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0']
```
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3809/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3809/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3808 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3808/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3808/comments | https://api.github.com/repos/huggingface/datasets/issues/3808/events | https://github.com/huggingface/datasets/issues/3808 | 1,157,650,043 | I_kwDODunzps5FAFZ7 | 3,808 | Pre-Processing Cache Fails when using a Factory pattern | {
"login": "Helw150",
"id": 9847335,
"node_id": "MDQ6VXNlcjk4NDczMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/9847335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Helw150",
"html_url": "https://github.com/Helw150",
"followers_url": "https://api.github.com/users/Helw150/followers",
"following_url": "https://api.github.com/users/Helw150/following{/other_user}",
"gists_url": "https://api.github.com/users/Helw150/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Helw150/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Helw150/subscriptions",
"organizations_url": "https://api.github.com/users/Helw150/orgs",
"repos_url": "https://api.github.com/users/Helw150/repos",
"events_url": "https://api.github.com/users/Helw150/events{/privacy}",
"received_events_url": "https://api.github.com/users/Helw150/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Ok - this is still an issue but I believe the root cause is different than I originally thought. I'm now able to get caching to work consistently with the above example as long as I fix the python hash seed `export PYTHONHASHSEED=1234`",
"Hi! \r\n\r\nYes, our hasher should work with decorators. For instance, this dummy example:\r\n```python\r\ndef f(arg):\r\n def f1(ex):\r\n return {\"a\": ex[\"col1\"] + arg}\r\n return f1\r\n```\r\ngives the same hash across different Python sessions (`datasets.fingerprint.Hasher.hash(f(\"string1\")` returns `\"408c9059f89dbd6c\"` on my machine).\r\n\r\nCould you please make the example self-contained? This way, we can reproduce the bug. Additionally, you can try to find the problematic object yourself by testing their hash with `datasets.fingerprint.Hasher.hash(obj)`\r\n\r\nThis could be related to https://github.com/huggingface/datasets/issues/3638.",
"#3638 was indeed my issue. Thanks!"
] | 1,646,252,323,000 | 1,646,953,307,000 | 1,646,953,307,000 | NONE | null | ## Describe the bug
If you utilize a pre-processing function which is created using a factory pattern, the function hash changes on each run (even if the function is identical) and therefore the dataset will be re-processed each time instead of reusing the cache.
## Steps to reproduce the bug
```python
def preprocess_function_factory(augmentation=None):
def preprocess_function(examples):
# Tokenize the texts
if augmentation:
conversions1 = [
augmentation(example)
for example in examples[sentence1_key]
]
if sentence2_key is None:
args = (conversions1,)
else:
conversions2 = [
augmentation(example)
for example in examples[sentence2_key]
]
args = (conversions1, conversions2)
else:
args = (
(examples[sentence1_key],)
if sentence2_key is None
else (examples[sentence1_key], examples[sentence2_key])
)
result = tokenizer(
*args, padding=padding, max_length=max_seq_length, truncation=True
)
# Map labels to IDs (not necessary for GLUE tasks)
if label_to_id is not None and "label" in examples:
result["label"] = [
(label_to_id[l] if l != -1 else -1) for l in examples["label"]
]
return result
return preprocess_function
capitalize = lambda x: x.capitalize()
preprocess_function = preprocess_function_factory(augmentation=capitalize)
print(hash(preprocess_function)) # This will change on each run
raw_datasets = raw_datasets.map(
preprocess_function,
batched=True,
load_from_cache_file=True,
desc="Running transformation and tokenizer on dataset",
)
```
## Expected results
Running the code twice will cause the cache to be re-used.
## Actual results
Running the code twice causes the whole dataset to be re-processed
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3808/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3808/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3807/comments | https://api.github.com/repos/huggingface/datasets/issues/3807/events | https://github.com/huggingface/datasets/issues/3807 | 1,157,531,812 | I_kwDODunzps5E_oik | 3,807 | NonMatchingChecksumError in xcopa dataset | {
"login": "afcruzs-ms",
"id": 93286455,
"node_id": "U_kgDOBY9wNw",
"avatar_url": "https://avatars.githubusercontent.com/u/93286455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afcruzs-ms",
"html_url": "https://github.com/afcruzs-ms",
"followers_url": "https://api.github.com/users/afcruzs-ms/followers",
"following_url": "https://api.github.com/users/afcruzs-ms/following{/other_user}",
"gists_url": "https://api.github.com/users/afcruzs-ms/gists{/gist_id}",
"starred_url": "https://api.github.com/users/afcruzs-ms/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/afcruzs-ms/subscriptions",
"organizations_url": "https://api.github.com/users/afcruzs-ms/orgs",
"repos_url": "https://api.github.com/users/afcruzs-ms/repos",
"events_url": "https://api.github.com/users/afcruzs-ms/events{/privacy}",
"received_events_url": "https://api.github.com/users/afcruzs-ms/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"@albertvillanova here's a separate issue for a bug similar to #3792",
"Hi @afcruzs-ms, thanks for opening this separate issue for your problem.\r\n\r\nThe root problem in the other issue (#3792) was a change in the service of Google Drive.\r\n\r\nBut in your case, the `xcopa` dataset is not hosted on Google Drive. Therefore, the root cause should be a different one.\r\n\r\nLet me look at it... ",
"@afcruzs-ms, I'm not able to reproduce the issue you reported:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: dataset = load_dataset(\"xcopa\", \"it\")\r\nDownloading builder script: 5.21kB [00:00, 2.75MB/s] \r\nDownloading metadata: 28.6kB [00:00, 14.5MB/s] \r\nDownloading and preparing dataset xcopa/it (download: 627.09 KiB, generated: 76.43 KiB, post-processed: Unknown size, total: 703.52 KiB) to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6...\r\nDownloading data: 642kB [00:00, 5.42MB/s]\r\nDataset xcopa downloaded and prepared to .../.cache/huggingface/datasets/xcopa/it/1.0.0/e1fab65f984b24c8b66bcf7ac27a26a1182f84adfb2e74035861be65e214b9e6. Subsequent calls will reuse this data. \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 733.27it/s]\r\n\r\nIn [2]: dataset\r\nOut[2]: \r\nDatasetDict({\r\n test: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 500\r\n })\r\n validation: Dataset({\r\n features: ['premise', 'choice1', 'choice2', 'question', 'label', 'idx', 'changed'],\r\n num_rows: 100\r\n })\r\n})\r\n```\r\n\r\nMaybe you have some issue with your cached data... Could you please try to force the redownload of the data?\r\n```python\r\ndataset = load_dataset(\"xcopa\", \"it\", download_mode=\"force_redownload\")\r\n```",
"It works indeed, thanks! ",
"unfortunately, i am having a similar problem with the irc_disentaglement dataset :/\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\n\r\nhowever, it produces the same error as @afcruzs-ms \r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\n\r\nI attempted to use the `ignore_verifications' as such:\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\n```\r\n```\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\n\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks @labouz for reporting: yes, better opening a new GitHub issue as you did. I'm addressing it:\r\n- #4376"
] | 1,646,244,619,000 | 1,653,026,442,000 | 1,646,329,231,000 | NONE | null | ## Describe the bug
Loading the xcopa dataset doesn't work; it fails due to a checksum mismatch.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("xcopa", "it")
```
## Expected results
The dataset should be loaded correctly.
## Actual results
Fails with:
```python
in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/cambridgeltl/xcopa/archive/master.zip']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3, and 1.18.4.dev0
- Platform:
- Python version: 3.8
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3807/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3807/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3806/comments | https://api.github.com/repos/huggingface/datasets/issues/3806/events | https://github.com/huggingface/datasets/pull/3806 | 1,157,505,826 | PR_kwDODunzps4z2FeI | 3,806 | Fix Spanish data file URL in wiki_lingua dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,243,022,000 | 1,646,296,697,000 | 1,646,296,696,000 | MEMBER | null | This PR fixes the URL for Spanish data file.
Previously, Spanish had the same URL as Vietnamese data file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3806/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3806/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3806",
"html_url": "https://github.com/huggingface/datasets/pull/3806",
"diff_url": "https://github.com/huggingface/datasets/pull/3806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3806.patch",
"merged_at": 1646296696000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3805/comments | https://api.github.com/repos/huggingface/datasets/issues/3805/events | https://github.com/huggingface/datasets/pull/3805 | 1,157,454,884 | PR_kwDODunzps4z16os | 3,805 | Remove decode: true for image feature in head_qa | {
"login": "craffel",
"id": 417568,
"node_id": "MDQ6VXNlcjQxNzU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/craffel",
"html_url": "https://github.com/craffel",
"followers_url": "https://api.github.com/users/craffel/followers",
"following_url": "https://api.github.com/users/craffel/following{/other_user}",
"gists_url": "https://api.github.com/users/craffel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/craffel/subscriptions",
"organizations_url": "https://api.github.com/users/craffel/orgs",
"repos_url": "https://api.github.com/users/craffel/repos",
"events_url": "https://api.github.com/users/craffel/events{/privacy}",
"received_events_url": "https://api.github.com/users/craffel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,240,314,000 | 1,646,655,216,000 | 1,646,655,215,000 | CONTRIBUTOR | null | This was erroneously added in https://github.com/huggingface/datasets/commit/701f128de2594e8dc06c0b0427c0ba1e08be3054. This PR removes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3805/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3805/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3805",
"html_url": "https://github.com/huggingface/datasets/pull/3805",
"diff_url": "https://github.com/huggingface/datasets/pull/3805.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3805.patch",
"merged_at": 1646655215000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3804/comments | https://api.github.com/repos/huggingface/datasets/issues/3804/events | https://github.com/huggingface/datasets/issues/3804 | 1,157,297,278 | I_kwDODunzps5E-vR- | 3,804 | Text builder with custom separator line boundaries | {
"login": "cronoik",
"id": 18630848,
"node_id": "MDQ6VXNlcjE4NjMwODQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cronoik",
"html_url": "https://github.com/cronoik",
"followers_url": "https://api.github.com/users/cronoik/followers",
"following_url": "https://api.github.com/users/cronoik/following{/other_user}",
"gists_url": "https://api.github.com/users/cronoik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cronoik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cronoik/subscriptions",
"organizations_url": "https://api.github.com/users/cronoik/orgs",
"repos_url": "https://api.github.com/users/cronoik/repos",
"events_url": "https://api.github.com/users/cronoik/events{/privacy}",
"received_events_url": "https://api.github.com/users/cronoik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Gently pinging @lhoestq",
"Hi ! Interresting :)\r\n\r\nCould you give more details on what kind of separators you would like to use instead ?",
"In my case, I just want to use `\\n` but not `U+2028`.",
"Ok I see, maybe there can be a `sep` parameter to allow users to specify what line/paragraph separator they'd like to use",
"Related to:\r\n- #3729 \r\n- #3910",
"Thanks for requesting this enhancement. We have recently found a somehow related issue with another dataset:\r\n- #3704\r\n\r\nLet me make a PR proposal."
] | 1,646,232,616,000 | 1,647,446,039,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
The current [Text](https://github.com/huggingface/datasets/blob/207be676bffe9d164740a41a883af6125edef135/src/datasets/packaged_modules/text/text.py#L23) builder implementation splits texts with `splitlines()` which splits the text on several line boundaries. Not all of them are always wanted.
**Describe the solution you'd like**
```python
if self.config.sample_by == "line":
    batch_idx = 0
    while True:
        batch = f.read(self.config.chunksize)
        if not batch:
            break
        batch += f.readline()  # finish current line
        if self.config.custom_newline is None:
            batch = batch.splitlines(keepends=self.config.keep_linebreaks)
        else:
            batch = batch.split(self.config.custom_newline)[:-1]
        pa_table = pa.Table.from_arrays([pa.array(batch)], schema=schema)
        # Uncomment for debugging (will print the Arrow table size and elements)
        # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
        # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
        yield (file_idx, batch_idx), pa_table
        batch_idx += 1
```
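For illustration, here is a small sketch of the difference this would make (the sample string below is made up; `custom_newline` would be `"\n"` in this case):
```python
text = "first record\u2028same record continued\nsecond record\n"

text.splitlines()
# ['first record', 'same record continued', 'second record']   (U+2028 also splits)

text.split("\n")[:-1]
# ['first record\u2028same record continued', 'second record']  (only "\n" splits)
```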
**A clear and concise description of what you want to happen.**
Creating the dataset rows with a subset of the `splitlines()` line boundaries. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3804/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3804/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3803/comments | https://api.github.com/repos/huggingface/datasets/issues/3803/events | https://github.com/huggingface/datasets/pull/3803 | 1,157,271,679 | PR_kwDODunzps4z1T48 | 3,803 | Remove deprecated methods/params (preparation for v2.0) | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,231,352,000 | 1,646,232,801,000 | 1,646,232,801,000 | CONTRIBUTOR | null | This PR removes the following deprecated methods/params (a short migration sketch follows the list):
* `Dataset.cast_`/`DatasetDict.cast_`
* `Dataset.dictionary_encode_column_`/`DatasetDict.dictionary_encode_column_`
* `Dataset.remove_columns_`/`DatasetDict.remove_columns_`
* `Dataset.rename_columns_`/`DatasetDict.rename_columns_`
* `prepare_module`
* param `script_version` in `load_dataset`/`load_metric`
* param `version` in `hf_github_url`
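For users migrating, the removed in-place methods map onto their existing non-in-place counterparts. A hedged, illustrative snippet (not part of this PR):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# before this PR (deprecated, modified the dataset in place):
#   ds.remove_columns_(["label"])

# from now on (returns a new Dataset):
ds = ds.remove_columns(["label"])
```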
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3803/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3803/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3803",
"html_url": "https://github.com/huggingface/datasets/pull/3803",
"diff_url": "https://github.com/huggingface/datasets/pull/3803.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3803.patch",
"merged_at": 1646232801000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3802/comments | https://api.github.com/repos/huggingface/datasets/issues/3802/events | https://github.com/huggingface/datasets/pull/3802 | 1,157,009,964 | PR_kwDODunzps4z0biM | 3,802 | Release of FairLex dataset | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is awesome ! The dataset card and the dataset script look amazing :)\r\n\r\nI wanted to ask you if you'd be interested to have this dataset under the namespace of you research group at https://huggingface.co/coastalcph ? If yes, then you can actually create a dataset repository under your research group name and upload the files from this PR there",
"Hi @lhoestq,\r\n\r\nYeah, I could do that. I see that people do that a lot of models, but not for datasets. \r\n\r\nIs there any good reason to have it under the organization domain instead of the general domain?\r\n\r\n Thanks!",
"It's nice to have it under your namespace:\r\n- it will appear on your research group page, along with your models\r\n- you can edit or create datasets at any time - you don't need to open PRs on GitHub\r\n\r\nAll the datasets that are not under a namespace are this way because we started adding datasets from GitHub. Now we encourage users to upload them directly to make things simpler, and aligned with the workflow for models\r\n\r\n(the documentation will be updated in the following days)\r\n\r\nNote that we will keep accepting PRs here though when there is no clear namespace under which a dataset should be, or for users that want a review from us",
"Ok, I'll do that. So, I'll just have to upload all the files under the `/fairlex` directory in my PR, right?",
"Yes exactly !",
"Ok, I uploaded most of them from the UI environment (https://huggingface.co/datasets/coastalcph/fairlex). Can I possibly upload the dummy data as well from the UI environment. I really want to avoid the CLI right now 😄 ",
"Yea sure, feel free to use the UI of the website, even for the dummy data ^^",
"Did you upload them yourself? Because I see the data preview, and I'm pretty sure, I didn't do that 😄 ",
"The preview is computed from the real data ;)\r\n\r\nThe dummy data are used for testing only",
"Haha, ok I was shocked! Cool, I close this PR, then. Thanks, again! ",
"Thank you 🤗"
] | 1,646,217,618,000 | 1,646,234,470,000 | 1,646,234,334,000 | CONTRIBUTOR | null |
**FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing**
We present a benchmark suite of four datasets for evaluating the fairness of pre-trained legal language models and the techniques used to fine-tune them for downstream tasks. Our benchmarks cover four jurisdictions (European Council, USA, Swiss, and Chinese), five languages (English, German, French, Italian, and Chinese), and fairness across five attributes (gender, age, nationality/region, language, and legal area). In our experiments, we evaluate pre-trained language models using several group-robust fine-tuning techniques and show that performance group disparities are vibrant in many cases, while none of these techniques guarantee fairness, nor consistently mitigate group disparities. Furthermore, we provide a quantitative and qualitative analysis of our results, highlighting open challenges in the development of robustness methods in legal NLP.
*Ilias Chalkidis, Tommaso Pasini, Sheng Zhang, Letizia Tomada, Letizia, Sebastian Felix Schwemer, Anders Søgaard. FairLex: A Multilingual Benchmark for Evaluating Fairness in Legal Text Processing. 2022. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Dublin, Ireland.*
Note: Please review this initial commit, and I'll update the publication link once I have the arXiv version. Thanks!
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3802/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3802/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3802",
"html_url": "https://github.com/huggingface/datasets/pull/3802",
"diff_url": "https://github.com/huggingface/datasets/pull/3802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3802.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3801/comments | https://api.github.com/repos/huggingface/datasets/issues/3801/events | https://github.com/huggingface/datasets/pull/3801 | 1,155,649,279 | PR_kwDODunzps4zvqjN | 3,801 | [Breaking] Align `map` when streaming: update instead of overwrite + add missing parameters | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Right ! Will add it in another PR :)"
] | 1,646,158,003,000 | 1,646,670,630,000 | 1,646,670,629,000 | MEMBER | null | Currently, the datasets in streaming mode and in non-streaming mode have two distinct APIs for `map` processing.
In this PR I'm aligning the two by changing `map` in streaming mode. This includes a **major breaking change** and will require a major release of the library: **Datasets 2.0**
In particular, `Dataset.map` adds new columns (with dict.update) BUT `IterableDataset.map` used to discard previous columns (it overwrites the dict). In this PR I'm changing `IterableDataset.map` to behave the same way as `Dataset.map`: it will update the examples instead of overwriting them.
I'm also adding the missing parameters to streaming `map`: `with_indices`, `input_columns`, and `remove_columns`.
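To illustrate the new behavior (illustrative snippet, not taken from the PR's tests):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train", streaming=True)
ds = ds.map(lambda example: {"length": len(example["text"])})

# Before this PR, examples only contained {"length": ...} (previous columns were discarded).
# Now they keep "text" and "label" and gain "length", just like the non-streaming Dataset.map.
print(next(iter(ds)))
```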
### TODO
- [x] tests
- [x] docs
Related to https://github.com/huggingface/datasets/issues/3444 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3801/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3801/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3801",
"html_url": "https://github.com/huggingface/datasets/pull/3801",
"diff_url": "https://github.com/huggingface/datasets/pull/3801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3801.patch",
"merged_at": 1646670629000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3800 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3800/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3800/comments | https://api.github.com/repos/huggingface/datasets/issues/3800/events | https://github.com/huggingface/datasets/pull/3800 | 1,155,620,761 | PR_kwDODunzps4zvkjA | 3,800 | Added computer vision tasks | {
"login": "merveenoyan",
"id": 53175384,
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/merveenoyan",
"html_url": "https://github.com/merveenoyan",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,156,266,000 | 1,646,378,155,000 | 1,646,378,155,000 | CONTRIBUTOR | null | Previous PR was in my fork so thought it'd be easier if I do it from a branch. Added computer vision task datasets according to HF tasks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3800/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3800/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3800",
"html_url": "https://github.com/huggingface/datasets/pull/3800",
"diff_url": "https://github.com/huggingface/datasets/pull/3800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3800.patch",
"merged_at": 1646378155000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3799/comments | https://api.github.com/repos/huggingface/datasets/issues/3799/events | https://github.com/huggingface/datasets/pull/3799 | 1,155,356,102 | PR_kwDODunzps4zus9R | 3,799 | Xtreme-S Metrics | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq - if you could take a final review here this would be great (if you have 5min :-) ) ",
"Don't think the failures are related but not 100% sure",
"Yes the CI fail is unrelated - you can ignore it"
] | 1,646,142,148,000 | 1,647,441,629,000 | 1,647,441,626,000 | MEMBER | null | **Added datasets (TODO)**:
- [x] MLS
- [x] Covost2
- [x] Minds-14
- [x] Voxpopuli
- [x] FLoRes (need data)
**Metrics**: Done | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3799/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3799/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3799",
"html_url": "https://github.com/huggingface/datasets/pull/3799",
"diff_url": "https://github.com/huggingface/datasets/pull/3799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3799.patch",
"merged_at": 1647441626000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3798/comments | https://api.github.com/repos/huggingface/datasets/issues/3798/events | https://github.com/huggingface/datasets/pull/3798 | 1,154,411,066 | PR_kwDODunzps4zrl5Y | 3,798 | Fix error message in CSV loader for newer Pandas versions | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,072,650,000 | 1,646,074,299,000 | 1,646,074,298,000 | CONTRIBUTOR | null | Fix the error message in the CSV loader for `Pandas >= 1.4`. To fix this, I directly print the current file name in the for-loop. An alternative would be to use a check similar to this:
```python
csv_file_reader.handle.handle if datasets.config.PANDAS_VERSION >= version.parse("1.4") else csv_file_reader.f
```
CC: @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3798/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3798/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3798",
"html_url": "https://github.com/huggingface/datasets/pull/3798",
"diff_url": "https://github.com/huggingface/datasets/pull/3798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3798.patch",
"merged_at": 1646074298000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3797/comments | https://api.github.com/repos/huggingface/datasets/issues/3797/events | https://github.com/huggingface/datasets/pull/3797 | 1,154,383,063 | PR_kwDODunzps4zrgAD | 3,797 | Reddit dataset card contribution | {
"login": "anna-kay",
"id": 56791604,
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anna-kay",
"html_url": "https://github.com/anna-kay",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,070,798,000 | 1,646,139,537,000 | 1,646,139,537,000 | CONTRIBUTOR | null | Description tags for webis-tldr-17 added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3797/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3797/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3797",
"html_url": "https://github.com/huggingface/datasets/pull/3797",
"diff_url": "https://github.com/huggingface/datasets/pull/3797.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3797.patch",
"merged_at": 1646139536000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3796 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3796/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3796/comments | https://api.github.com/repos/huggingface/datasets/issues/3796/events | https://github.com/huggingface/datasets/pull/3796 | 1,154,298,629 | PR_kwDODunzps4zrOQ4 | 3,796 | Skip checksum computation if `ignore_verifications` is `True` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,646,065,725,000 | 1,646,067,826,000 | 1,646,067,826,000 | CONTRIBUTOR | null | This will speed up the loading of datasets where the number of data files is large (can easily happen with `imagefolder`, for instance) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3796/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3796/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3796",
"html_url": "https://github.com/huggingface/datasets/pull/3796",
"diff_url": "https://github.com/huggingface/datasets/pull/3796.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3796.patch",
"merged_at": 1646067826000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3795/comments | https://api.github.com/repos/huggingface/datasets/issues/3795/events | https://github.com/huggingface/datasets/issues/3795 | 1,153,261,281 | I_kwDODunzps5EvV7h | 3,795 | can not flatten natural_questions dataset | {
"login": "Hannibal046",
"id": 38466901,
"node_id": "MDQ6VXNlcjM4NDY2OTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hannibal046",
"html_url": "https://github.com/Hannibal046",
"followers_url": "https://api.github.com/users/Hannibal046/followers",
"following_url": "https://api.github.com/users/Hannibal046/following{/other_user}",
"gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions",
"organizations_url": "https://api.github.com/users/Hannibal046/orgs",
"repos_url": "https://api.github.com/users/Hannibal046/repos",
"events_url": "https://api.github.com/users/Hannibal046/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hannibal046/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"same issue. downgrade it to a lower version.",
"Thanks for reporting, I'll take a look tomorrow :)"
] | 1,645,970,260,000 | 1,647,873,372,000 | 1,647,873,372,000 | NONE | null | ## Describe the bug
After downloading the natural_questions dataset, the dataset cannot be flattened, given that `annotations` contains the nested `long answer` and `short answer` fields.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('natural_questions',cache_dir = 'data/dataset_cache_dir')
dataset['train'].flatten()
```
## Expected results
a dataset with `long_answer` as features
## Actual results
Traceback (most recent call last):
File "temp.py", line 5, in <module>
dataset['train'].flatten()
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/fingerprint.py", line 413, in wrapper
out = func(self, *args, **kwargs)
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1296, in flatten
dataset._data = update_metadata_with_features(dataset._data, dataset.features)
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in update_metadata_with_features
features = Features({col_name: features[col_name] for col_name in table.column_names})
File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 536, in <dictcomp>
features = Features({col_name: features[col_name] for col_name in table.column_names})
KeyError: 'annotations.long_answer'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.13
- Platform: MBP
- Python version: 3.8
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3795/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3795/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3794 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3794/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3794/comments | https://api.github.com/repos/huggingface/datasets/issues/3794/events | https://github.com/huggingface/datasets/pull/3794 | 1,153,185,343 | PR_kwDODunzps4zniT4 | 3,794 | Add Mahalanobis distance metric | {
"login": "JoaoLages",
"id": 17574157,
"node_id": "MDQ6VXNlcjE3NTc0MTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/17574157?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoaoLages",
"html_url": "https://github.com/JoaoLages",
"followers_url": "https://api.github.com/users/JoaoLages/followers",
"following_url": "https://api.github.com/users/JoaoLages/following{/other_user}",
"gists_url": "https://api.github.com/users/JoaoLages/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoaoLages/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoaoLages/subscriptions",
"organizations_url": "https://api.github.com/users/JoaoLages/orgs",
"repos_url": "https://api.github.com/users/JoaoLages/repos",
"events_url": "https://api.github.com/users/JoaoLages/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoaoLages/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,959,391,000 | 1,646,232,375,000 | 1,646,232,375,000 | CONTRIBUTOR | null | Mahalanobis distance is a very useful metric to measure the distance from one datapoint X to a distribution P.
In this PR I implement the metric in a simple way with the help of numpy only.
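For reference, a minimal NumPy sketch of the computation (parameter names are illustrative, not necessarily the metric's exact API):
```python
import numpy as np

def mahalanobis_distance(X, reference_distribution):
    """Mahalanobis distance of each row of X to the empirical distribution of the reference samples."""
    X = np.atleast_2d(np.asarray(X, dtype=float))
    ref = np.asarray(reference_distribution, dtype=float)
    mu = ref.mean(axis=0)                                 # distribution mean
    cov_inv = np.linalg.inv(np.cov(ref, rowvar=False))    # inverse covariance
    delta = X - mu
    # sqrt(delta^T . Sigma^-1 . delta), computed per row
    return np.sqrt(np.einsum("ij,jk,ik->i", delta, cov_inv, delta))
```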
Similar to the [MAUVE implementation](https://github.com/huggingface/datasets/blob/master/metrics/mauve/mauve.py), we can make this metric accept texts as input and encode them with a featurizer model, if that is desirable. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3794/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3794/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3794",
"html_url": "https://github.com/huggingface/datasets/pull/3794",
"diff_url": "https://github.com/huggingface/datasets/pull/3794.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3794.patch",
"merged_at": 1646232374000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3793/comments | https://api.github.com/repos/huggingface/datasets/issues/3793/events | https://github.com/huggingface/datasets/pull/3793 | 1,150,974,950 | PR_kwDODunzps4zfdL0 | 3,793 | Docs new UI actions no self hosted | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It seems like the doc can't be compiled right now because of the following:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/doc-builder\", line 33, in <module>\r\n sys.exit(load_entry_point('doc-builder', 'console_scripts', 'doc-builder')())\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/doc_builder_cli.py\", line 39, in main\r\n args.func(args)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/commands/build.py\", line 95, in build_command\r\n build_doc(\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 361, in build_doc\r\n anchors_mapping = build_mdx_files(package, doc_folder, output_dir, page_info)\r\n File \"/__w/datasets/datasets/doc-builder/src/doc_builder/build_doc.py\", line 200, in build_mdx_files\r\n raise type(e)(f\"There was an error when converting {file} to the MDX format.\\n\" + e.args[0]) from e\r\nTypeError: There was an error when converting datasets/docs/source/package_reference/table_classes.mdx to the MDX format.\r\nexpected string or bytes-like object\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3793). All of your documentation changes will be reflected on that endpoint.",
"This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.",
"> It seems like the doc can't be compiled right now because of the following:\r\n\r\nit is expected since there is something I need to change on doc-builder side.\r\n\r\n> This is due to the injection of docstrings from PyArrow. I think I can fix that by moving all the docstrings and fix them manually.\r\n\r\n@lhoestq I will let you know if we need to change it manually.\r\n\r\n@LysandreJik thanks a lot for this PR! I only had one question [here](https://github.com/huggingface/datasets/pull/3793#discussion_r816100194)",
"> @lhoestq I will let you know if we need to change it manually.\r\n\r\nIt would be simpler to change it manually anyway - I don't want our documentation to break if PyArrow has documentation issues",
"For some reason it fails when `Installing node dependencies` when running `npm ci` from the `kit` directory, any idea why @mishig25 ?",
"Checking it rn",
"It's very likely linked to an OOM error: https://github.com/huggingface/transformers/pull/15710#issuecomment-1051737337"
] | 1,645,832,935,000 | 1,646,150,129,000 | 1,646,150,128,000 | MEMBER | null | Removes the need to have a self-hosted runner for the dev documentation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3793/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3793/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3793",
"html_url": "https://github.com/huggingface/datasets/pull/3793",
"diff_url": "https://github.com/huggingface/datasets/pull/3793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3793.patch",
"merged_at": 1646150128000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3792/comments | https://api.github.com/repos/huggingface/datasets/issues/3792/events | https://github.com/huggingface/datasets/issues/3792 | 1,150,812,404 | I_kwDODunzps5EmAD0 | 3,792 | Checksums didn't match for dataset source | {
"login": "rafikg",
"id": 13174842,
"node_id": "MDQ6VXNlcjEzMTc0ODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/13174842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafikg",
"html_url": "https://github.com/rafikg",
"followers_url": "https://api.github.com/users/rafikg/followers",
"following_url": "https://api.github.com/users/rafikg/following{/other_user}",
"gists_url": "https://api.github.com/users/rafikg/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafikg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafikg/subscriptions",
"organizations_url": "https://api.github.com/users/rafikg/orgs",
"repos_url": "https://api.github.com/users/rafikg/repos",
"events_url": "https://api.github.com/users/rafikg/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafikg/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Same issue with `dataset = load_dataset(\"dbpedia_14\")`\r\n```\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']",
"I think this is a side-effect of #3787. The checksums won't match because the URLs have changed. @rafikg @Y0mingZhang, while this is fixed, maybe you can load the datasets as such:\r\n\r\n`data = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", ignore_verifications=True)`\r\n`dataset = load_dataset(\"dbpedia_14\", ignore_verifications=True)`\r\n\r\nThis will, most probably, skip the verifications and integrity checks listed [here](https://huggingface.co/docs/datasets/loading_datasets.html#integrity-verifications)",
"Hi! Installing the `datasets` package from master (`pip install git+https://github.com/huggingface/datasets.git`) and then redownloading the datasets with `download_mode` set to `force_redownload` (e.g. `dataset = load_dataset(\"dbpedia_14\", download_mode=\"force_redownload\")`) should fix the issue.",
"Hi @rafikg and @Y0mingZhang, thanks for reporting.\r\n\r\nIndeed it seems that Google Drive changed their way to access their data files. We have recently handled that change:\r\n- #3787\r\n\r\nbut it will be accessible to users only in our next release of the `datasets` version.\r\n- Note that our latest release (version 1.18.3) was made before this fix: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n\r\nIn the meantime, as @mariosasko explained, you can incorporate this \"fix\" by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, you should force the redownload of the data (before the fix, you are just downloading/caching the virus scan warning page, instead of the data file):\r\n```shell\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\")",
"@albertvillanova by running:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\ndata = datasets.load_dataset(\"wiki_lingua\", name=language, split=\"train[:2000]\", download_mode=\"force_redownload\", ignore_verifications=True)\r\n```\r\n\r\nI had a pickle error **UnpicklingError: invalid load key, '<'** in this part of code both `locally and on google colab`:\r\n\r\n```\r\n\"\"\"Yields examples.\"\"\"\r\nwith open(filepath, \"rb\") as f:\r\n data = pickle.load(f)\r\nfor id_, row in enumerate(data.items()):\r\n yield id_, {\"url\": row[0], \"article\": self._process_article(row[1])}\r\n```\r\n",
"This issue impacts many more datasets than the ones mention in this thread. Can we post # of downloads for each dataset by day (by successes and failures)? If so, it should be obvious which ones are failing.",
"I can see this problem too in xcopa, unfortunately installing the latest master (1.18.4.dev0) doesn't work, @albertvillanova .\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"xcopa\", \"it\")\r\n```\r\n\r\nThrows\r\n\r\n```\r\nin verify_checksums(expected_checksums, recorded_checksums, verification_name)\r\n 38 if len(bad_urls) > 0:\r\n 39 error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n 41 logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n 42 \r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/cambridgeltl/xcopa/archive/master.zip']\r\n```",
"Hi @rafikg, I think that is another different issue. Let me check it... \r\n\r\nI guess maybe you are using a different Python version that the one the dataset owner used to create the pickle file...",
"@kwchurch the datasets impacted for this specific issue are the ones which are hosted at Google Drive.",
"@afcruzs-ms I think your issue is a different one, because that dataset is not hosted at Google Drive. Would you mind open another issue for that other problem, please? Thanks! :)",
"@albertvillanova just to let you know that I tried it locally and on colab and it is the same error",
"There are many many datasets on HugggingFace that are receiving this checksum error. Some of these datasets are very popular. There must be a way to track these errors, or to do regression testing. We don't want to catch each of these errors on each dataset, one at a time.",
"@rafikg I am sorry, but I can't reproduce your issue. For me it works OK for all languages. See: https://colab.research.google.com/drive/1yIcLw1it118-TYE3ZlFmV7gJcsF6UCsH?usp=sharing",
"@kwchurch the PR #3787 fixes this issue (generated by a change in Google Drive service) for ALL datasets with this issue. Once we make our next library release (in a couple of days), the fix will be accessible to all users that update our library from PyPI.",
"By the way, @rafikg, I discovered the URL for Spanish was wrong. I've created a PR to fix it:\r\n- #3806 ",
"I have the same problem with \"wider_face\" dataset. It seems that \"load_dataset\" function can not download the dataset from google drive.\r\n"
] | 1,645,818,909,000 | 1,648,794,083,000 | 1,646,037,858,000 | NONE | null | ## Dataset viewer issue for 'wiki_lingua*'
**Link:** *link to the dataset viewer page*
`data = datasets.load_dataset("wiki_lingua", name=language, split="train[:2000]")
`
*short description of the issue*
```
[NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff']]()
```
Am I the one who added this dataset ? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3792/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3792/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3791/comments | https://api.github.com/repos/huggingface/datasets/issues/3791/events | https://github.com/huggingface/datasets/pull/3791 | 1,150,733,475 | PR_kwDODunzps4zevU2 | 3,791 | Add `data_dir` to `data_files` resolution and misc improvements to HfFileSystem | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,813,595,000 | 1,646,140,243,000 | 1,646,140,242,000 | CONTRIBUTOR | null | As discussed in https://github.com/huggingface/datasets/pull/2830#issuecomment-1048989764, this PR adds a QOL improvement to easily reference the files inside a directory in `load_dataset` using the `data_dir` param (very handy for ImageFolder because it avoids globbing, but also useful for the other loaders). Additionally, it fixes the issue with `HfFileSystem.isdir`, which would previously always return `False`, and aligns the path-handling logic in `HfFileSystem` with `fsspec.GitHubFileSystem`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3791/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3791/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3791",
"html_url": "https://github.com/huggingface/datasets/pull/3791",
"diff_url": "https://github.com/huggingface/datasets/pull/3791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3791.patch",
"merged_at": 1646140242000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3790/comments | https://api.github.com/repos/huggingface/datasets/issues/3790/events | https://github.com/huggingface/datasets/pull/3790 | 1,150,646,899 | PR_kwDODunzps4zedMa | 3,790 | Add doc builder scripts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think we're only missing the hosted runner to be configured for this repository and we should be good",
"Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents queuing jobs, which is important when we expect several concurrent jobs.",
"Opened a PR for that on your branch here: https://github.com/huggingface/datasets/pull/3793"
] | 1,645,807,127,000 | 1,646,150,142,000 | 1,646,150,141,000 | MEMBER | null | I added the three scripts:
- build_dev_documentation.yml
- build_documentation.yml
- delete_dev_documentation.yml
I got them from `transformers` and made a few changes:
- I removed the `transformers`-specific dependencies
- I changed all the paths to be "datasets" instead of "transformers"
- I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26)
cc @LysandreJik @mishig25 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3790/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3790",
"html_url": "https://github.com/huggingface/datasets/pull/3790",
"diff_url": "https://github.com/huggingface/datasets/pull/3790.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3790.patch",
"merged_at": 1646150141000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3789 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3789/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3789/comments | https://api.github.com/repos/huggingface/datasets/issues/3789/events | https://github.com/huggingface/datasets/pull/3789 | 1,150,587,404 | PR_kwDODunzps4zeQpx | 3,789 | Add URL and ID fields to Wikipedia dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Do you think we have a dedicated branch for all the changes we want to do to wikipedia ? Then once everything looks good + we have preprocessed the main languages, we can merge it on the `master` branch",
"Yes, @lhoestq, I agree with you.\r\n\r\nI have just created the dedicated branch [`update-wikipedia`](https://github.com/huggingface/datasets/tree/update-wikipedia). We can merge every PR (once validated) to that branch; once all changes are merged to that branch, we could create the preprocessed datasets and then merge the branch to master. ",
"@lhoestq I guess you approve this PR?"
] | 1,645,803,277,000 | 1,646,382,264,000 | 1,646,382,263,000 | MEMBER | null | This PR adds the URL field, so that we conform to proper attribution, required by their license: provide credit to the authors by including a hyperlink (where possible) or URL to the page or pages you are re-using.
About the conversion from title to URL, I found that apart from replacing blanks with underscores, some other special characters must also be percent-encoded (e.g. `"` to `%22`): https://meta.wikimedia.org/wiki/Help:URL
Therefore, I have finally used the `urllib.parse.quote` function. This additionally percent-encodes non-ASCII characters, but the Wikimedia docs say these are equivalent:
> For the other characters either the code or the character can be used in internal and external links, they are equivalent. The system does a conversion when needed.
> [[%C3%80_propos_de_M%C3%A9ta]]
> is rendered as [À_propos_de_Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), almost like [À propos de Méta](https://meta.wikimedia.org/wiki/%C3%80_propos_de_M%C3%A9ta), which leads to this page on Meta with in the address bar the URL
> [http://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta)
> while [http://meta.wikipedia.org/wiki/À_propos_de_Méta](https://meta.wikipedia.org/wiki/%C3%80_propos_de_M%C3%A9ta) leads to the same.
Fix #3398.
CC: @geohci | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3789/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3789/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3789",
"html_url": "https://github.com/huggingface/datasets/pull/3789",
"diff_url": "https://github.com/huggingface/datasets/pull/3789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3789.patch",
"merged_at": 1646382263000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3788/comments | https://api.github.com/repos/huggingface/datasets/issues/3788/events | https://github.com/huggingface/datasets/issues/3788 | 1,150,375,720 | I_kwDODunzps5EkVco | 3,788 | Only-data dataset loaded unexpectedly as validation split | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I see two options:\r\n1. drop the \"dev\" keyword since it can be considered too generic\r\n2. improve the pattern to something more reasonable, e.g. asking for a separator before and after \"dev\"\r\n```python\r\n[\"*[ ._-]dev[ ._-]*\", \"dev[ ._-]*\"]\r\n```\r\n\r\nI think 2. is nice. If we agree on this one we can even decide to require the separation for the other split keywords \"train\", \"test\" etc.",
"Yes, I had something like that on mind: \"dev\" not being part of a word.\r\n```\r\n\"[^a-zA-Z]dev[^a-zA-Z]\"",
"Is there a reason why we want that regex? It feels like something that'll still be an issue for some weird case. \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?",
"The regex is needed as part of our effort to make datasets configurable without code. In particular we define some generic dataset repository structures that users can follow\r\n\r\n> ```\r\n> \"[^a-zA-Z]*dev[^a-zA-Z]*\"\r\n> ```\r\n\r\nunfortunately our glob doesn't support \"^\": \r\n\r\nhttps://github.com/fsspec/filesystem_spec/blob/3e739db7e53f5b408319dcc9d11e92bc1f938902/fsspec/spec.py#L465-L479",
"> \"my_dataset_dev\" doesn't match your regex, \"my_dataset_validation\" doesn't either ... Why not always \"train\" unless specified?\r\n\r\nAnd `my_dataset_dev.foo` would match the pattern, and we also have the same pattern but for the \"validation\" keyword so `my_dataset_validation.foo` would work too",
"> The regex is needed as part of our effort to make datasets configurable without code\r\n\r\nThis feels like coding with the filename ^^'",
"This is still much easier than having to write a full dataset script right ? :p"
] | 1,645,791,099,000 | 1,646,047,342,000 | null | MEMBER | null | ## Describe the bug
As reported by @thomasw21 and @lhoestq, a dataset containing only a data file whose name matches the pattern `*dev*` will be returned as a VALIDATION split, even if this is not the desired behavior, e.g. for a file named `datosdevision.jsonl.gz`.
"url": "https://api.github.com/repos/huggingface/datasets/issues/3788/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3788/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3787/comments | https://api.github.com/repos/huggingface/datasets/issues/3787/events | https://github.com/huggingface/datasets/pull/3787 | 1,150,235,569 | PR_kwDODunzps4zdE7b | 3,787 | Fix Google Drive URL to avoid Virus scan warning | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for this @albertvillanova!",
"Once this PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```",
"Thanks, that solved a bunch of problems we had downstream!\r\ncf. https://github.com/ElementAI/picard/issues/61"
] | 1,645,781,712,000 | 1,646,426,612,000 | 1,645,790,195,000 | MEMBER | null | This PR fixes, in the datasets library instead of in every specific dataset, the issue of downloading the Virus scan warning page instead of the actual data file for Google Drive URLs.
Fix #3786, fix #3784. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3787/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3787/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3787",
"html_url": "https://github.com/huggingface/datasets/pull/3787",
"diff_url": "https://github.com/huggingface/datasets/pull/3787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3787.patch",
"merged_at": 1645790195000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3786/comments | https://api.github.com/repos/huggingface/datasets/issues/3786/events | https://github.com/huggingface/datasets/issues/3786 | 1,150,233,067 | I_kwDODunzps5Ejynr | 3,786 | Bug downloading Virus scan warning page from Google Drive URLs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Once the PR merged into master and until our next `datasets` library release, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,645,781,543,000 | 1,646,299,559,000 | 1,645,790,195,000 | MEMBER | null | ## Describe the bug
Recently, some issues were reported with URLs from Google Drive, where we were downloading the Virus scan warning page instead of the data file itself.
See:
- #3758
- #3773
- #3784
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3786/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3786/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3785/comments | https://api.github.com/repos/huggingface/datasets/issues/3785/events | https://github.com/huggingface/datasets/pull/3785 | 1,150,069,801 | PR_kwDODunzps4zciES | 3,785 | Fix: Bypass Virus Checks in Google Drive Links (CNN-DM dataset) | {
"login": "AngadSethi",
"id": 58678541,
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngadSethi",
"html_url": "https://github.com/AngadSethi",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thank you, @albertvillanova!",
"Got it. Thanks for explaining this, @albertvillanova!\r\n\r\n> On the other hand, the tests are not passing because the dummy data should also be fixed. Once done, this PR will be able to be merged into master.\r\n\r\nWill do this 👍",
"Hi ! I think we need to fix the issue for every dataset. This can be done simply by fixing how we handle Google Drive links, see my comment https://github.com/huggingface/datasets/pull/3775#issuecomment-1050970157",
"Hi @lhoestq! I think @albertvillanova has already fixed this in #3787",
"Cool ! I missed this one :) thanks",
"No problem!",
"Hi, @AngadSethi, I think that once:\r\n- #3787 \r\n\r\nwas merged, issue:\r\n- #3784 \r\n\r\nwas also fixed.\r\n\r\nTherefore, I think this PR is no longer necessary. I'm closing it. Let me know if you agree.",
"Yes, absolutely @albertvillanova! I agree :)"
] | 1,645,768,137,000 | 1,646,325,827,000 | 1,646,316,217,000 | NONE | null | This commit fixes the issue described in #3784. By adding an extra parameter to the end of Google Drive links, we are able to bypass the virus check and download the datasets.
So, if the original link looked like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ
The new link now looks like https://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ&confirm=t
Fixes #3784 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3785/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3785/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3785",
"html_url": "https://github.com/huggingface/datasets/pull/3785",
"diff_url": "https://github.com/huggingface/datasets/pull/3785.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3785.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3784/comments | https://api.github.com/repos/huggingface/datasets/issues/3784/events | https://github.com/huggingface/datasets/issues/3784 | 1,150,057,955 | I_kwDODunzps5EjH3j | 3,784 | Unable to Download CNN-Dailymail Dataset | {
"login": "AngadSethi",
"id": 58678541,
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngadSethi",
"html_url": "https://github.com/AngadSethi",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "AngadSethi",
"id": 58678541,
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngadSethi",
"html_url": "https://github.com/AngadSethi",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "AngadSethi",
"id": 58678541,
"node_id": "MDQ6VXNlcjU4Njc4NTQx",
"avatar_url": "https://avatars.githubusercontent.com/u/58678541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AngadSethi",
"html_url": "https://github.com/AngadSethi",
"followers_url": "https://api.github.com/users/AngadSethi/followers",
"following_url": "https://api.github.com/users/AngadSethi/following{/other_user}",
"gists_url": "https://api.github.com/users/AngadSethi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AngadSethi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AngadSethi/subscriptions",
"organizations_url": "https://api.github.com/users/AngadSethi/orgs",
"repos_url": "https://api.github.com/users/AngadSethi/repos",
"events_url": "https://api.github.com/users/AngadSethi/events{/privacy}",
"received_events_url": "https://api.github.com/users/AngadSethi/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"#self-assign",
"@AngadSethi thanks for reporting and thanks for your PR!",
"Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running 😀",
"Fixed by:\r\n- #3787"
] | 1,645,766,687,000 | 1,646,316,317,000 | 1,646,316,317,000 | NONE | null | ## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the link that would originally download the dataset now downloads the source code of this web page:**
![image](https://user-images.githubusercontent.com/58678541/155658435-c2f497d7-7601-4332-94b1-18a62dd96422.png)
- **This leads to the following error**:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
```
## Expected results
That the dataset is downloaded and processed just like other datasets.
## Actual results
Hit with this error:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3784/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3784/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3783/comments | https://api.github.com/repos/huggingface/datasets/issues/3783/events | https://github.com/huggingface/datasets/pull/3783 | 1,149,256,744 | PR_kwDODunzps4zZ1jR | 3,783 | Support passing str to iter_files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... 😉"
] | 1,645,707,495,000 | 1,645,718,500,000 | 1,645,718,500,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3783/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3783",
"html_url": "https://github.com/huggingface/datasets/pull/3783",
"diff_url": "https://github.com/huggingface/datasets/pull/3783.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3783.patch",
"merged_at": 1645718499000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3782/comments | https://api.github.com/repos/huggingface/datasets/issues/3782/events | https://github.com/huggingface/datasets/pull/3782 | 1,148,994,022 | PR_kwDODunzps4zY-Xb | 3,782 | Error of writing with different schema, due to nonpreservation of nullability | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting, indeed `disable_nullable` doesn't seem to be supported in this case. Maybe at one point we can have `disable_nullable` as a parameter of certain feature types"
] | 1,645,690,987,000 | 1,646,319,279,000 | 1,646,319,279,000 | CONTRIBUTOR | null | ## 1. Case
```
dataset.map(
    batched=True,
    disable_nullable=True,
)
```
will raise the following error here: https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L516
`pyarrow.lib.ArrowInvalid: Tried to write record batch with different schema`
## 2. Debugging
### 2.1 tracing
During `_map_single`, the following are called
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_dataset.py#L2523
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/arrow_writer.py#L511
### 2.2. Observation
The problem is, even after `table_cast`, `pa_table.schema != self._schema`
`pa_table.schema` (before/after `table_cast`)
```
input_ids: list<item: int32>
child 0, item: int32
```
`self._schema`
```
input_ids: list<item: int32> not null
child 0, item: int32
```
### 2.3. Reason
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1121
Here we lose the nullability stored in `schema`, because it seems that `Features` is always nullable and doesn't store nullability.
https://github.com/huggingface/datasets/blob/c9967f55626931f8059dc416526c791444cdfdf7/src/datasets/table.py#L1103
So, casting to a schema derived from such `Features` loses nullability, and eventually causes the error of writing with a different schema.
## 3. Solution
1. Let `Features` store nullability.
2. Directly cast the table with the original schema, not with the schema from the converted `Features`. (this PR)
3. Don't `cast_table` when calling `write_table` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3782/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3782/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3782",
"html_url": "https://github.com/huggingface/datasets/pull/3782",
"diff_url": "https://github.com/huggingface/datasets/pull/3782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3782.patch",
"merged_at": 1646319279000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3781/comments | https://api.github.com/repos/huggingface/datasets/issues/3781/events | https://github.com/huggingface/datasets/pull/3781 | 1,148,599,680 | PR_kwDODunzps4zXr_O | 3,781 | Reddit dataset card additions | {
"login": "anna-kay",
"id": 56791604,
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anna-kay",
"html_url": "https://github.com/anna-kay",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hello! I added the tags and created a PR. Just to note, regarding the paperswithcode_id tag, that currently has the value \"reddit\"; the dataset described as reddit in paperswithcode is https://paperswithcode.com/dataset/reddit and it isn't the Webis-tldr-17. I could not find Webis-tldr-17 in paperswithcode neither in the Summarization category nor using the keywords reddit, webis, & tldr. I didn't change this tag."
] | 1,645,651,756,000 | 1,646,071,240,000 | 1,646,047,274,000 | CONTRIBUTOR | null | The changes proposed are based on the "TL;DR: Mining Reddit to Learn Automatic Summarization" paper & https://zenodo.org/record/1043504#.YhaKHpbQC38
It is indeed a Reddit dataset, but the name given to the dataset by the authors is Webis-TLDR-17 (corpus), so perhaps the dataset name should be modified as well.
The task at which the dataset is aimed is abstractive summarization.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3781/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3781/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3781",
"html_url": "https://github.com/huggingface/datasets/pull/3781",
"diff_url": "https://github.com/huggingface/datasets/pull/3781.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3781.patch",
"merged_at": 1646047274000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3780/comments | https://api.github.com/repos/huggingface/datasets/issues/3780/events | https://github.com/huggingface/datasets/pull/3780 | 1,148,186,272 | PR_kwDODunzps4zWVSM | 3,780 | Add ElkarHizketak v1.0 dataset | {
"login": "antxa",
"id": 7646055,
"node_id": "MDQ6VXNlcjc2NDYwNTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7646055?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antxa",
"html_url": "https://github.com/antxa",
"followers_url": "https://api.github.com/users/antxa/followers",
"following_url": "https://api.github.com/users/antxa/following{/other_user}",
"gists_url": "https://api.github.com/users/antxa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antxa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antxa/subscriptions",
"organizations_url": "https://api.github.com/users/antxa/orgs",
"repos_url": "https://api.github.com/users/antxa/repos",
"events_url": "https://api.github.com/users/antxa/events{/privacy}",
"received_events_url": "https://api.github.com/users/antxa/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I also filled some missing sections in the dataset card"
] | 1,645,627,457,000 | 1,646,420,669,000 | 1,646,420,669,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3780/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3780",
"html_url": "https://github.com/huggingface/datasets/pull/3780",
"diff_url": "https://github.com/huggingface/datasets/pull/3780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3780.patch",
"merged_at": 1646420669000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3779/comments | https://api.github.com/repos/huggingface/datasets/issues/3779/events | https://github.com/huggingface/datasets/pull/3779 | 1,148,050,636 | PR_kwDODunzps4zV4qr | 3,779 | Update manual download URL in newsroom dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,620,547,000 | 1,645,622,801,000 | 1,645,622,800,000 | MEMBER | null | Fix #3778. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3779/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3779",
"html_url": "https://github.com/huggingface/datasets/pull/3779",
"diff_url": "https://github.com/huggingface/datasets/pull/3779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3779.patch",
"merged_at": 1645622800000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3778/comments | https://api.github.com/repos/huggingface/datasets/issues/3778/events | https://github.com/huggingface/datasets/issues/3778 | 1,147,898,946 | I_kwDODunzps5Ea4xC | 3,778 | Not be able to download dataset - "Newsroom" | {
"login": "Darshan2104",
"id": 61326242,
"node_id": "MDQ6VXNlcjYxMzI2MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/61326242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Darshan2104",
"html_url": "https://github.com/Darshan2104",
"followers_url": "https://api.github.com/users/Darshan2104/followers",
"following_url": "https://api.github.com/users/Darshan2104/following{/other_user}",
"gists_url": "https://api.github.com/users/Darshan2104/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Darshan2104/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Darshan2104/subscriptions",
"organizations_url": "https://api.github.com/users/Darshan2104/orgs",
"repos_url": "https://api.github.com/users/Darshan2104/repos",
"events_url": "https://api.github.com/users/Darshan2104/events{/privacy}",
"received_events_url": "https://api.github.com/users/Darshan2104/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Darshan2104, thanks for reporting.\r\n\r\nPlease note that at Hugging Face we do not host the data of this dataset, but just a loading script pointing to the host of the data owners.\r\n\r\nApparently the data owners changed their data host server. After googling it, I found their new website at: https://lil.nlp.cornell.edu/newsroom/index.html\r\n- Download page: https://lil.nlp.cornell.edu/newsroom/download/index.html\r\n\r\nI'm fixing the link in our Datasets library.",
"@albertvillanova Thanks for the solution and link you made my day!"
] | 1,645,611,350,000 | 1,645,635,904,000 | 1,645,622,800,000 | NONE | null | Hello,
I tried to download the **newsroom** dataset, but it didn't work out for me: it told me to **download it manually**!
For the manual download, the link also didn't work! It is showing some ad or something!
If anybody has solved this issue, please help me out, or if somebody has this dataset, please share your Google Drive link; it would be a great help!
Thanks
Darshan Tank | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3778/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3777/comments | https://api.github.com/repos/huggingface/datasets/issues/3777/events | https://github.com/huggingface/datasets/pull/3777 | 1,147,232,875 | PR_kwDODunzps4zTVrz | 3,777 | Start removing canonical datasets logic | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?",
"> I'm not sure if the documentation explains why the dataset identifiers might have a namespace or not (the user/org): 'glue' vs 'severo/glue'. Do you think we should explain it, and relate it to the GitHub/Hub distinction?\r\n\r\nI added an explanation, let me know if it sounds good to you:\r\n\r\n```\r\nDatasets used to be hosted on our GitHub repository, but all datasets have now been migrated to the Hugging Face Hub.\r\nThe legacy GitHub datasets were added originally on our GitHub repository and therefore don't have a namespace: \"squad\", \"glue\", etc. unlike the other datasets that are named \"username/dataset_name\" or \"org/dataset_name\".\r\n```\r\n",
"Thanks for the feedbacks ! Merging this now - if you have some comments I can take care of them in a subsequent PR\r\n\r\nI'll also take care of resolving the conflicts with https://github.com/huggingface/datasets/pull/3690"
] | 1,645,554,210,000 | 1,645,715,077,000 | 1,645,715,076,000 | MEMBER | null | I updated the source code and the documentation to start removing the "canonical datasets" logic.
Indeed this makes the documentation confusing and we don't want this distinction anymore in the future. Ideally users should share their datasets on the Hub directly.
### Changes
- the documentation about dataset loading mentions the datasets on the Hub (no difference between canonical and community, since they all have their own repository now)
- the documentation about adding a dataset doesn't explain the technical differences between canonical and community anymore, and only presents how to add a community dataset. There is still a small section at the bottom that mentions the datasets that are still on GitHub and redirects to the `ADD_NEW_DATASET.md` guide on GitHub about how to contribute a dataset to the `datasets` library
- the code source doesn't mention "canonical" anymore anywhere. There is still a `GitHubDatasetModuleFactory` class that is left, but I updated the docstring to say that it will be eventually removed in favor of the `HubDatasetModuleFactory` classes that already exist
Would love to have your feedbacks on this !
cc @julien-c @thomwolf @SBrandeis | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3777/reactions",
"total_count": 5,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3777/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3777",
"html_url": "https://github.com/huggingface/datasets/pull/3777",
"diff_url": "https://github.com/huggingface/datasets/pull/3777.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3777.patch",
"merged_at": 1645715076000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3776/comments | https://api.github.com/repos/huggingface/datasets/issues/3776/events | https://github.com/huggingface/datasets/issues/3776 | 1,146,932,871 | I_kwDODunzps5EXM6H | 3,776 | Allow download only some files from the Wikipedia dataset | {
"login": "jvanz",
"id": 1514798,
"node_id": "MDQ6VXNlcjE1MTQ3OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvanz",
"html_url": "https://github.com/jvanz",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanz/subscriptions",
"organizations_url": "https://api.github.com/users/jvanz/orgs",
"repos_url": "https://api.github.com/users/jvanz/repos",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvanz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi @jvanz, thank you for your proposal.\r\n\r\nIn fact, we are aware that it is very common the problem you mention. Because of that, we are currently working in implementing a new version of wikipedia on the Hub, with all data preprocessed (no need to use Apache Beam), from where you will be able to use `data_files` to load only a specific subset of the data files.\r\n\r\nSee:\r\n- #3401 "
] | 1,645,537,601,000 | 1,645,541,402,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
The Wikipedia dataset can be really big. This is a problem if you want to use it locally on a laptop with the Apache Beam `DirectRunner`, even if your laptop has a considerable amount of memory (e.g. 32 GB).
**Describe the solution you'd like**
I would like to use the `data_files` argument in the `load_dataset` function to define which file of the wikipedia dataset I would like to download. Thus, I could work with the dataset on a smaller machine using the Apache Beam `DirectRunner`.
**Describe alternatives you've considered**
I've tried to use the `simple` Wikipedia dataset, but it's in English, and I would like to use Portuguese texts in my model.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3776/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3775/comments | https://api.github.com/repos/huggingface/datasets/issues/3775/events | https://github.com/huggingface/datasets/pull/3775 | 1,146,849,454 | PR_kwDODunzps4zSEd4 | 3,775 | Update gigaword card and info | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think it actually comes from an issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/file_utils.py#L575-L579\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/streaming_download_manager.py#L386-L389\r\n\r\nThis code doesn't seem to work anymore. This can probably be fixed with\r\n\r\n```python\r\nif url.startswith(\"https://drive.google.com/\"): \r\n url += \"&confirm=t\"\r\n cookies = response.cookies \r\n```\r\n\r\nbecause Google Drive doesn't return the `download_warning` cookie anymore.",
"Actually it seems that is has been fixed already in https://github.com/huggingface/datasets/pull/3787 :)\r\n\r\nI think it should have fixed the gigaword dataset loading",
"@lhoestq The linked PR indeed fixes the issue. This PR is still worth merging IMO to update `gigaword`'s card."
] | 1,645,532,836,000 | 1,646,048,124,000 | 1,646,048,124,000 | CONTRIBUTOR | null | Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3775/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3775",
"html_url": "https://github.com/huggingface/datasets/pull/3775",
"diff_url": "https://github.com/huggingface/datasets/pull/3775.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3775.patch",
"merged_at": 1646048124000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3774/comments | https://api.github.com/repos/huggingface/datasets/issues/3774/events | https://github.com/huggingface/datasets/pull/3774 | 1,146,843,177 | PR_kwDODunzps4zSDHC | 3,774 | Fix reddit_tifu data URL | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,532,475,000 | 1,645,533,525,000 | 1,645,533,524,000 | MEMBER | null | Fix #3773. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3774/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3774",
"html_url": "https://github.com/huggingface/datasets/pull/3774",
"diff_url": "https://github.com/huggingface/datasets/pull/3774.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3774.patch",
"merged_at": 1645533524000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3773/comments | https://api.github.com/repos/huggingface/datasets/issues/3773/events | https://github.com/huggingface/datasets/issues/3773 | 1,146,758,335 | I_kwDODunzps5EWiS_ | 3,773 | Checksum mismatch for the reddit_tifu dataset | {
"login": "anna-kay",
"id": 56791604,
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anna-kay",
"html_url": "https://github.com/anna-kay",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @anna-kay. We are fixing it.",
"@albertvillanova Thank you for the fast response! However I am still getting the same error:\r\n\r\nDownloading: 2.23kB [00:00, ?B/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Anna\\PycharmProjects\\summarization\\main.py\", line 17, in <module>\r\n dataset = load_dataset('reddit_tifu', 'long')\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\load.py\", line 1702, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 594, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n verify_checksums(\r\n File \"C:\\Users\\Anna\\Desktop\\summarization\\summarization_env\\lib\\site-packages\\datasets\\utils\\info_utils.py\", line 40, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']\r\n\r\nI have cleaned the cache/huggingface/datasets & cache/huggingface/modules files and also tried on another machine with a fresh installation of trasnformers & datasets. \r\nThe reddit_tifu.py that gets downloaded still has the previous url on line 51, _URL = \"https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF\" ",
"Hi @anna-kay, I'm sorry I didn't clearly explain the details to you:\r\n- the error has been fixed in our `master` branch on GitHub: https://github.com/huggingface/datasets/commit/8ae21bf6a77175dc803ce2f1b93d18b8fbf45586\r\n- the fix will not be accessible to users in PyPI until our next release of the `datasets` library\r\n - our latest release (version 1.18.3) was made 23 days ago: https://github.com/huggingface/datasets/releases/tag/1.18.3\r\n- in the meantime, you can get the fix if you install datasets from our GitHub `master` branch:\r\n ```\r\n pip install git+https://github.com/huggingface/datasets#egg=datasets\r\n ```",
"@albertvillanova Ok great, makes sence. Thank you very much for the explanation!"
] | 1,645,527,427,000 | 1,645,817,269,000 | 1,645,533,524,000 | CONTRIBUTOR | null | ## Describe the bug
A checksum mismatch occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded and cached locally.
## Actual results
File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3773/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3772/comments | https://api.github.com/repos/huggingface/datasets/issues/3772/events | https://github.com/huggingface/datasets/pull/3772 | 1,146,718,630 | PR_kwDODunzps4zRor8 | 3,772 | Fix: dataset name is stored in keys | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,525,237,000 | 1,645,528,114,000 | 1,645,528,113,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3772/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3772",
"html_url": "https://github.com/huggingface/datasets/pull/3772",
"diff_url": "https://github.com/huggingface/datasets/pull/3772.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3772.patch",
"merged_at": 1645528113000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3771/comments | https://api.github.com/repos/huggingface/datasets/issues/3771/events | https://github.com/huggingface/datasets/pull/3771 | 1,146,561,140 | PR_kwDODunzps4zRHsd | 3,771 | Fix DuplicatedKeysError on msr_sqa dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,515,864,000 | 1,645,517,560,000 | 1,645,517,559,000 | MEMBER | null | Fix #3770. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3771/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3771/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3771",
"html_url": "https://github.com/huggingface/datasets/pull/3771",
"diff_url": "https://github.com/huggingface/datasets/pull/3771.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3771.patch",
"merged_at": 1645517559000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3770/comments | https://api.github.com/repos/huggingface/datasets/issues/3770/events | https://github.com/huggingface/datasets/issues/3770 | 1,146,336,667 | I_kwDODunzps5EU7Wb | 3,770 | DuplicatedKeysError on msr_sqa dataset | {
"login": "kolk",
"id": 9049591,
"node_id": "MDQ6VXNlcjkwNDk1OTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/9049591?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kolk",
"html_url": "https://github.com/kolk",
"followers_url": "https://api.github.com/users/kolk/followers",
"following_url": "https://api.github.com/users/kolk/following{/other_user}",
"gists_url": "https://api.github.com/users/kolk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kolk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kolk/subscriptions",
"organizations_url": "https://api.github.com/users/kolk/orgs",
"repos_url": "https://api.github.com/users/kolk/repos",
"events_url": "https://api.github.com/users/kolk/events{/privacy}",
"received_events_url": "https://api.github.com/users/kolk/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @kolk.\r\n\r\nWe are fixing it. "
] | 1,645,490,613,000 | 1,645,517,559,000 | 1,645,517,559,000 | NONE | null | ### Describe the bug
Failure to generate dataset msr_sqa because of duplicate keys.
### Steps to reproduce the bug
```
from datasets import load_dataset
load_dataset("msr_sqa")
```
### Expected results
The example keys should be unique.
**Actual results**
```
>>> load_dataset("msr_sqa")
Downloading:
6.72k/? [00:00<00:00, 148kB/s]
Downloading:
2.93k/? [00:00<00:00, 53.8kB/s]
Using custom data configuration default
Downloading and preparing dataset msr_sqa/default (download: 4.57 MiB, generated: 26.25 MiB, post-processed: Unknown size, total: 30.83 MiB) to /root/.cache/huggingface/datasets/msr_sqa/default/0.0.0/70b2a497bd3cc8fc960a3557d2bad1eac5edde824505e15c9c8ebe4c260fd4d1...
Downloading: 100%
4.80M/4.80M [00:00<00:00, 7.49MB/s]
---------------------------------------------------------------------------
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator)
1080 example = self.info.features.encode_example(record)
-> 1081 writer.write(example, key)
1082 finally:
8 frames
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
During handling of the above exception, another exception occurred:
DuplicatedKeysError Traceback (most recent call last)
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in check_duplicate_keys(self)
449 for hash, key in self.hkey_record:
450 if hash in tmp_record:
--> 451 raise DuplicatedKeysError(key)
452 else:
453 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: nt-639
Keys should be unique and deterministic in nature
```
### Environment info
datasets version: 1.18.3
Platform: Google colab notebook
Python version: 3.7
PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3770/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3769/comments | https://api.github.com/repos/huggingface/datasets/issues/3769/events | https://github.com/huggingface/datasets/issues/3769 | 1,146,258,023 | I_kwDODunzps5EUoJn | 3,769 | `dataset = dataset.map()` causes faiss index lost | {
"login": "Oaklight",
"id": 13076552,
"node_id": "MDQ6VXNlcjEzMDc2NTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13076552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Oaklight",
"html_url": "https://github.com/Oaklight",
"followers_url": "https://api.github.com/users/Oaklight/followers",
"following_url": "https://api.github.com/users/Oaklight/following{/other_user}",
"gists_url": "https://api.github.com/users/Oaklight/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Oaklight/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Oaklight/subscriptions",
"organizations_url": "https://api.github.com/users/Oaklight/orgs",
"repos_url": "https://api.github.com/users/Oaklight/repos",
"events_url": "https://api.github.com/users/Oaklight/events{/privacy}",
"received_events_url": "https://api.github.com/users/Oaklight/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Indeed `map` is dropping the index right now, because one can create a dataset with more or fewer rows using `map` (and therefore the index might not be relevant anymore)\r\n\r\nI guess we could check the resulting dataset length, and if the user hasn't changed the dataset size we could keep the index, what do you think ?"
] | 1,645,480,763,000 | 1,646,233,135,000 | null | NONE | null | ## Describe the bug
assigning the resulting dataset to the original dataset causes loss of the faiss index
## Steps to reproduce the bug
`my_dataset` is a regular loaded dataset. It's part of a custom dataset structure
```python
self.dataset.add_faiss_index('embeddings')
self.dataset.list_indexes()
# ['embeddings']
dataset2 = my_dataset.map(
    lambda x: self._get_nearest_examples_batch(x['text']), batched=True
)
# the unexpected result:
dataset2.list_indexes()
# []
self.dataset.list_indexes()
# ['embeddings']
```
in case something is wrong with my `_get_nearest_examples_batch()`, it looks like this
```python
def _get_nearest_examples_batch(self, examples, k=5):
    queries = embed(examples)
    scores_batch, retrievals_batch = self.dataset.get_nearest_examples_batch(self.faiss_column, queries, k)
    return {
        'neighbors': [batch['text'] for batch in retrievals_batch],
        'scores': scores_batch
    }
```
## Expected results
`map` shouldn't drop the indexes; in other words, the indexes should be carried over to the generated dataset
## Actual results
map drops the indexes
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Ubuntu 20.04.3 LTS
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3769/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3768/comments | https://api.github.com/repos/huggingface/datasets/issues/3768/events | https://github.com/huggingface/datasets/pull/3768 | 1,146,102,442 | PR_kwDODunzps4zPobl | 3,768 | Fix HfFileSystem docstring | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,467,280,000 | 1,645,521,183,000 | 1,645,521,182,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3768/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3768",
"html_url": "https://github.com/huggingface/datasets/pull/3768",
"diff_url": "https://github.com/huggingface/datasets/pull/3768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3768.patch",
"merged_at": 1645521182000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3767/comments | https://api.github.com/repos/huggingface/datasets/issues/3767/events | https://github.com/huggingface/datasets/pull/3767 | 1,146,036,648 | PR_kwDODunzps4zPahh | 3,767 | Expose method and fix param | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,462,667,000 | 1,645,518,903,000 | 1,645,518,902,000 | CONTRIBUTOR | null | A fix + expose a new method, following https://github.com/huggingface/datasets/pull/3670 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3767/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3767",
"html_url": "https://github.com/huggingface/datasets/pull/3767",
"diff_url": "https://github.com/huggingface/datasets/pull/3767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3767.patch",
"merged_at": 1645518902000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3766/comments | https://api.github.com/repos/huggingface/datasets/issues/3766/events | https://github.com/huggingface/datasets/pull/3766 | 1,145,829,289 | PR_kwDODunzps4zOujH | 3,766 | Fix head_qa data URL | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,451,570,000 | 1,645,454,360,000 | 1,645,454,359,000 | MEMBER | null | Fix #3758. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3766/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3766/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3766",
"html_url": "https://github.com/huggingface/datasets/pull/3766",
"diff_url": "https://github.com/huggingface/datasets/pull/3766.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3766.patch",
"merged_at": 1645454359000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3765/comments | https://api.github.com/repos/huggingface/datasets/issues/3765/events | https://github.com/huggingface/datasets/pull/3765 | 1,145,126,881 | PR_kwDODunzps4zMdIL | 3,765 | Update URL for tagging app | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh, this URL shouldn't be updated to the tagging app as it's actually used for creating the README - closing this."
] | 1,645,389,271,000 | 1,645,389,370,000 | 1,645,389,366,000 | MEMBER | null | This PR updates the URL for the tagging app to be the one on Spaces. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3765/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3765",
"html_url": "https://github.com/huggingface/datasets/pull/3765",
"diff_url": "https://github.com/huggingface/datasets/pull/3765.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3765.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3764/comments | https://api.github.com/repos/huggingface/datasets/issues/3764/events | https://github.com/huggingface/datasets/issues/3764 | 1,145,107,050 | I_kwDODunzps5EQPJq | 3,764 | ! | {
"login": "LesiaFedorenko",
"id": 77545307,
"node_id": "MDQ6VXNlcjc3NTQ1MzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/77545307?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LesiaFedorenko",
"html_url": "https://github.com/LesiaFedorenko",
"followers_url": "https://api.github.com/users/LesiaFedorenko/followers",
"following_url": "https://api.github.com/users/LesiaFedorenko/following{/other_user}",
"gists_url": "https://api.github.com/users/LesiaFedorenko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LesiaFedorenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LesiaFedorenko/subscriptions",
"organizations_url": "https://api.github.com/users/LesiaFedorenko/orgs",
"repos_url": "https://api.github.com/users/LesiaFedorenko/repos",
"events_url": "https://api.github.com/users/LesiaFedorenko/events{/privacy}",
"received_events_url": "https://api.github.com/users/LesiaFedorenko/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [] | 1,645,383,943,000 | 1,645,433,758,000 | 1,645,433,758,000 | NONE | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3764/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3763/comments | https://api.github.com/repos/huggingface/datasets/issues/3763/events | https://github.com/huggingface/datasets/issues/3763 | 1,145,099,878 | I_kwDODunzps5EQNZm | 3,763 | It's not possible download `20200501.pt` dataset | {
"login": "jvanz",
"id": 1514798,
"node_id": "MDQ6VXNlcjE1MTQ3OTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1514798?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jvanz",
"html_url": "https://github.com/jvanz",
"followers_url": "https://api.github.com/users/jvanz/followers",
"following_url": "https://api.github.com/users/jvanz/following{/other_user}",
"gists_url": "https://api.github.com/users/jvanz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jvanz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jvanz/subscriptions",
"organizations_url": "https://api.github.com/users/jvanz/orgs",
"repos_url": "https://api.github.com/users/jvanz/repos",
"events_url": "https://api.github.com/users/jvanz/events{/privacy}",
"received_events_url": "https://api.github.com/users/jvanz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @jvanz, thanks for reporting.\r\n\r\nPlease note that Wikimedia website does not longer host Wikipedia dumps for so old dates.\r\n\r\nFor a list of accessible dump dates of `pt` Wikipedia, please see: https://dumps.wikimedia.org/ptwiki/\r\n\r\nYou can load for example `20220220` `pt` Wikipedia:\r\n```python\r\ndataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n```",
"> ```python\r\n> dataset = load_dataset(\"wikipedia\", language=\"pt\", date=\"20220220\", beam_runner=\"DirectRunner\")\r\n> ```\r\n\r\nThank you! I did not know that I can do this. I was following the example in the error message when I do not define which language dataset I'm trying to download.\r\n\r\nI've tried something similar changing the date in the `load_dataset` call that I've shared in the bug description. Obviously, it did not work. I need to read the docs more carefully next time. My bad!\r\n\r\nThanks again and sorry for the noise.\r\n\r\n"
] | 1,645,382,098,000 | 1,645,445,172,000 | 1,645,435,506,000 | NONE | null | ## Describe the bug
The dataset `20200501.pt` is broken.
The available datasets: https://dumps.wikimedia.org/ptwiki/
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
```
## Expected results
I expect to download the dataset locally.
## Actual results
```
>>> from datasets import load_dataset
>>> dataset = load_dataset("wikipedia", "20200501.pt", beam_runner='DirectRunner')
Downloading and preparing dataset wikipedia/20200501.pt to /home/jvanz/.cache/huggingface/datasets/wikipedia/20200501.pt/1.0.0/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475...
/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/apache_beam/__init__.py:79: UserWarning: This version of Apache Beam has not been sufficiently tested on Python 3.9. You may encounter bugs or missing features.
warnings.warn(
0%| | 0/1 [00:00<?, ?it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 1245, in _download_and_prepare
super()._download_and_prepare(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/jvanz/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/009f923d9b6dd00c00c8cdc7f408f2b47f45dd4f5fb7982a21f9448f4afbe475/wikipedia.py", line 420, in _split_generators
downloaded_files = dl_manager.download_and_extract({"info": info_url})
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 307, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 195, in download
downloaded_path_or_paths = map_nested(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 260, in map_nested
mapped = [
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 261, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 196, in _single_map_nested
return function(data_struct)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 216, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/jvanz/anaconda3/envs/tf-gpu/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/ptwiki/20200501/dumpstatus.json
```
## Environment info
```
- `datasets` version: 1.18.3
- Platform: Linux-5.3.18-150300.59.49-default-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3763/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3762/comments | https://api.github.com/repos/huggingface/datasets/issues/3762/events | https://github.com/huggingface/datasets/issues/3762 | 1,144,849,557 | I_kwDODunzps5EPQSV | 3,762 | `Dataset.class_encode` should support custom class names | {
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @Dref360, thanks a lot for your proposal.\r\n\r\nIt totally makes sense to have more flexibility when class encoding, I agree.\r\n\r\nYou could even further customize the class encoding by passing an instance of `ClassLabel` itself (instead of replicating `ClassLabel` instantiation arguments as `Dataset.class_encode_column` arguments).\r\n\r\nAnd the latter made me think of `Dataset.cast_column`...\r\n\r\nMaybe better to have some others' opinions @lhoestq @mariosasko ",
"Hi @Dref360! You can use [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/package_reference/main_classes.html#datasets.Dataset.align_labels_with_mapping) after `Dataset.class_encode_column` to assign a different mapping of labels to ids.\r\n\r\n@albertvillanova I'd like to avoid adding more complexity to the API where it's not (absolutely) needed, so I don't think introducing a new param in `Dataset.class_encode_column` is a good idea.\r\n\r\n",
"I wasn't aware that it existed thank you for the link.\n\nClosing then! "
] | 1,645,305,705,000 | 1,645,445,795,000 | 1,645,445,795,000 | CONTRIBUTOR | null | I can make a PR, just wanted approval before starting.
**Is your feature request related to a problem? Please describe.**
It is often the case that classes are not in alphabetical order. The current `class_encode_column` sorts the classes before indexing.
https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_dataset.py#L1235
**Describe the solution you'd like**
I would like to add a **optional** parameter `class_names` to `class_encode_column` that would be used for the mapping instead of sorting the unique values.
**Describe alternatives you've considered**
One can use map instead. I find it harder to read.
```python
CLASS_NAMES = ['apple', 'orange', 'potato']
ds = ds.map(lambda item: {label_column: CLASS_NAMES.index(item[label_column])})
# Proposition
ds = ds.class_encode_column(label_column, CLASS_NAMES)
```
**Additional context**
I can make the PR if this feature is accepted.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3762/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3761/comments | https://api.github.com/repos/huggingface/datasets/issues/3761/events | https://github.com/huggingface/datasets/issues/3761 | 1,144,830,702 | I_kwDODunzps5EPLru | 3,761 | Know your data for HF hub | {
"login": "Muhtasham",
"id": 20128202,
"node_id": "MDQ6VXNlcjIwMTI4MjAy",
"avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Muhtasham",
"html_url": "https://github.com/Muhtasham",
"followers_url": "https://api.github.com/users/Muhtasham/followers",
"following_url": "https://api.github.com/users/Muhtasham/following{/other_user}",
"gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions",
"organizations_url": "https://api.github.com/users/Muhtasham/orgs",
"repos_url": "https://api.github.com/users/Muhtasham/repos",
"events_url": "https://api.github.com/users/Muhtasham/events{/privacy}",
"received_events_url": "https://api.github.com/users/Muhtasham/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi @Muhtasham you should take a look at https://huggingface.co/blog/data-measurements-tool and accompanying demo app at https://huggingface.co/spaces/huggingface/data-measurements-tool\r\n\r\nWe would be interested in your feedback. cc @meg-huggingface @sashavor @yjernite "
] | 1,645,300,127,000 | 1,645,452,923,000 | 1,645,452,923,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
It would be great to be able to understand datasets, with the goal of improving data quality and helping mitigate fairness and bias issues.
**Describe the solution you'd like**
Something like https://knowyourdata.withgoogle.com/ for HF hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3761/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3760/comments | https://api.github.com/repos/huggingface/datasets/issues/3760/events | https://github.com/huggingface/datasets/issues/3760 | 1,144,804,558 | I_kwDODunzps5EPFTO | 3,760 | Unable to view the Gradio flagged call back dataset | {
"login": "kingabzpro",
"id": 36753484,
"node_id": "MDQ6VXNlcjM2NzUzNDg0",
"avatar_url": "https://avatars.githubusercontent.com/u/36753484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingabzpro",
"html_url": "https://github.com/kingabzpro",
"followers_url": "https://api.github.com/users/kingabzpro/followers",
"following_url": "https://api.github.com/users/kingabzpro/following{/other_user}",
"gists_url": "https://api.github.com/users/kingabzpro/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingabzpro/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingabzpro/subscriptions",
"organizations_url": "https://api.github.com/users/kingabzpro/orgs",
"repos_url": "https://api.github.com/users/kingabzpro/repos",
"events_url": "https://api.github.com/users/kingabzpro/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingabzpro/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | null | [] | null | [
"Hi @kingabzpro.\r\n\r\nI think you need to create a loading script that creates the dataset from the CSV file and the image paths.\r\n\r\nAs example, you could have a look at the Food-101 dataset: https://huggingface.co/datasets/food101\r\n- Loading script: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nOnce the loading script is created, the viewer will show a previsualization of your dataset. ",
"@albertvillanova I don't think this is the issue. I have created another dataset with similar files and format and it works. https://huggingface.co/datasets/kingabzpro/savtadepth-flags-V2",
"Yes, you are right, that was not the issue.\r\n\r\nJust take into account that sometimes the viewer can take some time until it shows the preview of the dataset.\r\nAfter some time, yours is finally properly shown: https://huggingface.co/datasets/kingabzpro/savtadepth-flags",
"The problem was resolved by deleted the dataset and creating new one with similar name and then clicking on flag button.",
"I think if you make manual changes to dataset the whole system breaks. "
] | 1,645,292,708,000 | 1,647,933,131,000 | 1,647,933,131,000 | NONE | null | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://huggingface.co/spaces/kingabzpro/savtadepth.*
Am I the one who added this dataset ? Yes
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3760/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3759/comments | https://api.github.com/repos/huggingface/datasets/issues/3759/events | https://github.com/huggingface/datasets/pull/3759 | 1,143,400,770 | PR_kwDODunzps4zGhQu | 3,759 | Rename GenerateMode to DownloadMode | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks! Used here: https://github.com/huggingface/datasets-preview-backend/blob/main/src/datasets_preview_backend/models/dataset.py#L26 :) "
] | 1,645,203,233,000 | 1,645,538,244,000 | 1,645,532,572,000 | MEMBER | null | This PR:
- Renames `GenerateMode` to `DownloadMode`
- Implements `DeprecatedEnum`
- Deprecates `GenerateMode`
Close #769. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3759/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3759/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3759",
"html_url": "https://github.com/huggingface/datasets/pull/3759",
"diff_url": "https://github.com/huggingface/datasets/pull/3759.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3759.patch",
"merged_at": 1645532572000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3758/comments | https://api.github.com/repos/huggingface/datasets/issues/3758/events | https://github.com/huggingface/datasets/issues/3758 | 1,143,366,393 | I_kwDODunzps5EJmL5 | 3,758 | head_qa file missing | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"https://user-images.githubusercontent.com/1676121/156000224-fd3f62c6-8b54-4df1-8911-bdcb0bac3f1a.png\">\r\n"
] | 1,645,201,963,000 | 1,646,058,558,000 | 1,645,454,359,000 | CONTRIBUTOR | null | ## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expected results
The dataset should be loaded
## Actual results
```
Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Downloading data: 2.21kB [00:00, 2.05MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
```
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3758/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3757/comments | https://api.github.com/repos/huggingface/datasets/issues/3757/events | https://github.com/huggingface/datasets/pull/3757 | 1,143,300,880 | PR_kwDODunzps4zGK7p | 3,757 | Add perplexity to metrics | {
"login": "emibaylor",
"id": 27527747,
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/emibaylor",
"html_url": "https://github.com/emibaylor",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Awesome thank you ! The implementation of the parent `Metric` class was assuming that all metrics were supposed to have references/predictions pairs - I just changed that so you don't have to override `compute()`. I took the liberty of doing the changes directly inside this PR to make sure it works as expected with perplexity.\r\n\r\nOther than that it looks in pretty good shape :) I just did minor changes like remove a remaining `print` as well as fixing the `Features` defined in `_info()`. I also renamed `input_text` to `input_texts` since it makes it more obvious that it's a list of strings - let me know if it sounds good to you.\r\n\r\nLet me know if you'd like to make other changes or if it's all good for you !",
"The test with the full test set seems to take too much time in the CI - maybe we can just select `split=\"test[:few_examples]\"` (around 10 maybe ?)"
] | 1,645,199,543,000 | 1,645,809,214,000 | 1,645,809,214,000 | CONTRIBUTOR | null | Adding perplexity metric
This code differs from the code in [this](https://huggingface.co/docs/transformers/perplexity) HF blog post because the blogpost code fails in at least the following circumstances:
- returns nans whenever the stride = 1
- hits a runtime error when the stride is significantly larger than the max model length (e.g. if max_model_length = 512 and stride = 1024)
Note that:
- As it is, it only works for causal models. Pseudoperplexity can be added later as another metric to work with masked language models.
- It takes in a list of strings so that it can be dataset independent. This does mean that it doesn't currently batch inputs, and is therefore relatively slow.
- It overwrites the metrics compute() function for a specific perplexity compute() function. This is because the current general metrics compute() function requires model-generated predictions, which doesn't make sense in the context of perplexity | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3757/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3757/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3757",
"html_url": "https://github.com/huggingface/datasets/pull/3757",
"diff_url": "https://github.com/huggingface/datasets/pull/3757.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3757.patch",
"merged_at": 1645809214000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3756/comments | https://api.github.com/repos/huggingface/datasets/issues/3756/events | https://github.com/huggingface/datasets/issues/3756 | 1,143,273,825 | I_kwDODunzps5EJPlh | 3,756 | Images get decoded when using `map()` with `input_columns` argument on a dataset | {
"login": "kklemon",
"id": 1430243,
"node_id": "MDQ6VXNlcjE0MzAyNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kklemon",
"html_url": "https://github.com/kklemon",
"followers_url": "https://api.github.com/users/kklemon/followers",
"following_url": "https://api.github.com/users/kklemon/following{/other_user}",
"gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kklemon/subscriptions",
"organizations_url": "https://api.github.com/users/kklemon/orgs",
"repos_url": "https://api.github.com/users/kklemon/repos",
"events_url": "https://api.github.com/users/kklemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/kklemon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! If I'm not mistaken, this behavior is intentional, but I agree it could be more intuitive.\r\n\r\n@albertvillanova Do you remember why you decided not to decode columns in the `Audio` feature PR when `input_columns` is not `None`? IMO we should decode those columns, and we don't even have to use lazy structures here because the user explicitly requires them in the map transform. \r\n\r\ncc @lhoestq for visibility",
"I think I excluded to decorate the function when `input_columns` were passed as a quick fix for some non-passing tests: \r\n- https://github.com/huggingface/datasets/pull/2324/commits/9d7c3e8fa53e23ec636859b4407eeec904b1b3f9\r\n\r\nThat PR was quite complex and I decided to focus on the main feature requests, leaving refinements for subsequent PRs.\r\n\r\nNote that when `input_columns` are passed, the signature of the function is effectively changed, while the decorated function expects an item (whether an example or a batch) as first arg (which is not the case when passing `input_columns`.\r\n\r\nI agree we should consider supporting the case when `input_columns` are passed."
] | 1,645,198,538,000 | 1,646,135,359,000 | null | NONE | null | ## Describe the bug
The `datasets.features.Image` feature class decodes image data by default. As expected, when indexing a dataset or using the `map()` method, images are returned as PIL Image instances.
However, when calling `map()` and selecting a specific data column with the `input_columns` argument, the image data is passed to the mapping function in its raw byte representation.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from torchvision import transforms
from PIL.Image import Image
dataset = load_dataset('mnist', split='train')
def transform_all_columns(example):
# example['image'] is encoded as PIL Image
assert isinstance(example['image'], Image)
return example
def transform_image_column(image):
# image is decoded here and represented as raw bytes
assert isinstance(image, Image)
return image
# single-sample dataset for debugging purposes
dev = dataset.select([0])
dev.map(transform_all_columns)
dev.map(transform_image_column, input_columns='image')
```
## Expected results
Image data should be passed to the mapping function in decoded form, i.e. as PIL Image objects, unless the `decode` attribute on the image feature is set to `False`.
## Actual results
The mapping function receives images as raw byte data.
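A possible workaround until this is fixed is to map over the full example instead of using `input_columns`, so the image feature still gets decoded (a sketch reusing the names from the snippet above):
```python
def transform_image_column_workaround(example):
    # the image is decoded to a PIL Image because the whole example is passed
    image = example['image']
    assert isinstance(image, Image)
    return example

dev.map(transform_image_column_workaround)
```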
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.11.0-49-generic-x86_64-with-glibc2.32
- Python version: 3.8.0b4
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3756/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3755/comments | https://api.github.com/repos/huggingface/datasets/issues/3755/events | https://github.com/huggingface/datasets/issues/3755 | 1,143,032,961 | I_kwDODunzps5EIUyB | 3,755 | Cannot preview dataset | {
"login": "frascuchon",
"id": 2518789,
"node_id": "MDQ6VXNlcjI1MTg3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2518789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frascuchon",
"html_url": "https://github.com/frascuchon",
"followers_url": "https://api.github.com/users/frascuchon/followers",
"following_url": "https://api.github.com/users/frascuchon/following{/other_user}",
"gists_url": "https://api.github.com/users/frascuchon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frascuchon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frascuchon/subscriptions",
"organizations_url": "https://api.github.com/users/frascuchon/orgs",
"repos_url": "https://api.github.com/users/frascuchon/repos",
"events_url": "https://api.github.com/users/frascuchon/events{/privacy}",
"received_events_url": "https://api.github.com/users/frascuchon/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | closed | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting. The dataset viewer depends on some backend treatments, and for now, they might take some hours to get processed. We're working on improving it.",
"It has finally been processed. Thanks for the patience.",
"Thanks for the info @severo !"
] | 1,645,189,605,000 | 1,645,281,028,000 | 1,645,198,893,000 | NONE | null | ## Dataset viewer issue for '*rubrix/news*'
**Link:** https://huggingface.co/datasets/rubrix/news
Cannot see the dataset preview:
```
Status code: 400
Exception: Status400Error
Message: Not found. Cache is waiting to be refreshed.
```
Am I the one who added this dataset? No
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3755/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3754/comments | https://api.github.com/repos/huggingface/datasets/issues/3754/events | https://github.com/huggingface/datasets/issues/3754 | 1,142,886,536 | I_kwDODunzps5EHxCI | 3,754 | Overflowing indices in `select` | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Fixed on master (see https://github.com/huggingface/datasets/pull/3719).",
"Awesome, I did not find that one! Thanks."
] | 1,645,183,852,000 | 1,645,184,303,000 | 1,645,184,303,000 | MEMBER | null | ## Describe the bug
The `Dataset.select` function seems to accept indices that are larger than the dataset size and effectively uses `index % len(ds)`.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"test": [1,2,3]})
ds = ds.select(range(5))
print(ds)
print()
print(ds["test"])
```
Result:
```python
Dataset({
features: ['test'],
num_rows: 5
})
[1, 2, 3, 1, 2]
```
This behaviour is not documented and can lead to unexpected results, for example when taking a sample larger than the dataset and thus creating a lot of duplicates.
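As a side note, callers can guard against this themselves for now (a minimal sketch; the clamping is a user-side workaround, not library behaviour):
```python
from datasets import Dataset

ds = Dataset.from_dict({"test": [1, 2, 3]})

# clamp the requested sample size to the dataset length before calling select()
n = 5
sample = ds.select(range(min(n, len(ds))))
print(sample["test"])  # [1, 2, 3] instead of wrapped-around duplicates
```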
## Expected results
I think this should throw an error or at least a very big warning:
```python
IndexError: Invalid key: 5 is out of bounds for size 3
```
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.0.1-x86_64-i386-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3754/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3754/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3753/comments | https://api.github.com/repos/huggingface/datasets/issues/3753/events | https://github.com/huggingface/datasets/issues/3753 | 1,142,821,144 | I_kwDODunzps5EHhEY | 3,753 | Expanding streaming capabilities | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Related to: https://github.com/huggingface/datasets/issues/3444",
"Cool ! `filter` will be very useful. There can be a filter that you can apply on a streaming dataset:\r\n```python\r\nload_dataset(..., streaming=True).filter(lambda x: x[\"lang\"] == \"sw\")\r\n```\r\n\r\nOtherwise if you want to apply a filter on the source files that are going to be used for streaming, the logic has to be impIemented directly in the dataset script, or if there's no dataset script this can be done with pattern matching\r\n```python\r\nload_dataset(..., lang=\"sw\") # if the dataset script supports this parameter\r\nload_dataset(..., data_files=\"data/lang=sw/*\") # if there's no dataset script, but only data files\r\n```\r\n\r\n--------------\r\n\r\nHere are also some additional ideas of API to convert from iterable to map-style dataset:\r\n```python\r\non_disk_dataset = streaming_dataset.to_disk()\r\non_disk_dataset = streaming_dataset.to_disk(path=\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = streaming_dataset.take(100).to_memory() # to experiment without having to write files\r\n```\r\n--------------\r\n\r\nFinally regarding `push_to_hub`, we can replace `batch_size` by `shard_size` (same API as for on-disk datasets). The default is 500MB per file\r\n\r\nLet me know what you think !",
"Regarding conversion, I'd also ask for some kind of equivalent to `save_to_disk` for an `IterableDataset`.\r\n\r\nSimilarly to the streaming to hub idea, my use case would be to define a sequence of dataset transforms via `.map()`, using an `IterableDataset` as the input (so processing could start without doing whole download up-front), but streaming the resultant processed dataset just to disk.",
"That makes sense @athewsey , thanks for the suggestion :)\r\n\r\nMaybe instead of the `to_disk` we could simply have `save_to_disk` instead:\r\n```python\r\nstreaming_dataset.save_to_disk(\"path/to/my/dataset/dir\")\r\non_disk_dataset = load_from_disk(\"path/to/my/dataset/dir\")\r\n\r\nin_memory_dataset = Dataset.from_list(list(streaming_dataset.take(100))) # to experiment without having to write files\r\n```"
] | 1,645,181,141,000 | 1,651,587,758,000 | null | MEMBER | null | Some ideas for a few features that could be useful when working with large datasets in streaming mode.
## `filter` for `IterableDataset`
Adding filtering to streaming datasets would be useful in several scenarios:
- filter a dataset with many languages for a subset of languages
- filter a dataset for specific licenses
- other custom logic to get a subset
At the moment, the only way to achieve this is, I think, to write a custom loading script and implement the filters there.
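In the meantime, a filter can be emulated on the client side with a plain generator around the streamed dataset (a rough sketch; it returns a generator rather than an `IterableDataset`, so methods like `map` and `shuffle` are lost):
```python
from datasets import load_dataset

ds = load_dataset("some_large_dataset", split="train", streaming=True)  # placeholder name, as above

def filtered(iterable_ds, predicate):
    # plain generator that yields only the examples matching the predicate
    for example in iterable_ds:
        if predicate(example):
            yield example

ds_fr = filtered(ds, lambda x: x["lang"] == "fr")
```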
## `IterableDataset` to `Dataset` conversion
In combination with the above filter, functionality to "play" the whole stream would be useful. The motivation is that one often filters the dataset to get a manageable size for experimentation. In that case streaming mode is no longer necessary, as the filtered dataset is small enough, and it would be useful to be able to play through the whole stream to create a normal `Dataset` with all its benefits.
```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(lambda x: x["lang"]="fr")
ds_filter = ds_filter.stream() # here the `IterableDataset` is converted to a `Dataset`
```
Naturally, this could be expanded with `stream(n=1000)` which creates a `Dataset` with the first `n` elements similar to `take`.
## Stream to the Hub
While streaming allows using a dataset as is without saving the whole dataset on the local machine, it is currently not possible to process a dataset and add it to the hub. The only way to do this is to download the full dataset and save the processed dataset again before pushing it to the hub. The API could look something like:
```python
ds = load_dataset("some_large_dataset", streaming=True)
ds_filter = ds.filter(some_filter_func)
ds_processed = ds_filter.map(some_processing_func)
ds_processed.push_to_hub("new_better_dataset", batch_size=100_000)
```
Under the hood this could be done by processing and aggregating `batch_size` elements and then pushing that batch as a single file to the hub. With this functionality one could process and create TB-scale datasets while only requiring enough local disk space for a single batch of `batch_size` elements.
cc @lhoestq @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3753/reactions",
"total_count": 4,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3753/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3752/comments | https://api.github.com/repos/huggingface/datasets/issues/3752/events | https://github.com/huggingface/datasets/pull/3752 | 1,142,627,889 | PR_kwDODunzps4zD1D9 | 3,752 | Update metadata JSON for cats_vs_dogs dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,173,173,000 | 1,645,196,172,000 | 1,645,196,171,000 | MEMBER | null | Note that the number of examples in the train split was already fixed in the dataset card.
Fix #3750. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3752/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3752",
"html_url": "https://github.com/huggingface/datasets/pull/3752",
"diff_url": "https://github.com/huggingface/datasets/pull/3752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3752.patch",
"merged_at": 1645196171000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3751/comments | https://api.github.com/repos/huggingface/datasets/issues/3751/events | https://github.com/huggingface/datasets/pull/3751 | 1,142,609,327 | PR_kwDODunzps4zDw9_ | 3,751 | Fix typo in train split name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,172,284,000 | 1,645,194,532,000 | 1,645,194,532,000 | MEMBER | null | In the README guide (and consequently in many datasets) there was a typo in the train split name:
```
| Tain | Valid | Test |
```
This PR:
- fixes the typo in the train split name
- fixes the column alignment of the split tables
in the README guide and in all datasets. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3751/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3751",
"html_url": "https://github.com/huggingface/datasets/pull/3751",
"diff_url": "https://github.com/huggingface/datasets/pull/3751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3751.patch",
"merged_at": 1645194532000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3750/comments | https://api.github.com/repos/huggingface/datasets/issues/3750/events | https://github.com/huggingface/datasets/issues/3750 | 1,142,408,331 | I_kwDODunzps5EF8SL | 3,750 | `NonMatchingSplitsSizesError` for cats_vs_dogs dataset | {
"login": "jaketae",
"id": 25360440,
"node_id": "MDQ6VXNlcjI1MzYwNDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaketae",
"html_url": "https://github.com/jaketae",
"followers_url": "https://api.github.com/users/jaketae/followers",
"following_url": "https://api.github.com/users/jaketae/following{/other_user}",
"gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jaketae/subscriptions",
"organizations_url": "https://api.github.com/users/jaketae/orgs",
"repos_url": "https://api.github.com/users/jaketae/repos",
"events_url": "https://api.github.com/users/jaketae/events{/privacy}",
"received_events_url": "https://api.github.com/users/jaketae/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thnaks for reporting @jaketae. We are fixing it. "
] | 1,645,163,199,000 | 1,645,196,171,000 | 1,645,196,171,000 | MEMBER | null | ## Describe the bug
Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
Loading is successful.
## Actual results
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7262410, num_examples=23410, dataset_name='cats_vs_dogs')}]
```
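In the meantime, a possible workaround is to skip the split size verification (a sketch; I believe `ignore_verifications` is the relevant `load_dataset` flag in this `datasets` version):
```python
from datasets import load_dataset

# skips the NonMatchingSplitsSizesError by not verifying the recorded split sizes
dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)
```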
## Environment info
Reproduced on a fresh [Colab notebook](https://colab.research.google.com/drive/13GTvrSJbBGvL2ybDdXCBZwATd6FOkMub?usp=sharing).
## Additional Context
Originally reported in https://github.com/huggingface/transformers/issues/15698.
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3750/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3749/comments | https://api.github.com/repos/huggingface/datasets/issues/3749/events | https://github.com/huggingface/datasets/pull/3749 | 1,142,156,678 | PR_kwDODunzps4zCKqg | 3,749 | Add tqdm arguments | {
"login": "penguinwang96825",
"id": 28087825,
"node_id": "MDQ6VXNlcjI4MDg3ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penguinwang96825",
"html_url": "https://github.com/penguinwang96825",
"followers_url": "https://api.github.com/users/penguinwang96825/followers",
"following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}",
"gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions",
"organizations_url": "https://api.github.com/users/penguinwang96825/orgs",
"repos_url": "https://api.github.com/users/penguinwang96825/repos",
"events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}",
"received_events_url": "https://api.github.com/users/penguinwang96825/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks this will be very useful :)\r\n\r\nIt looks like there are some changes in the github diff that are not related to your contribution, can you try fixing this by merging `master` into your PR, or create a new PR from an updated version of `master` ?",
"I have already solved the conflict on this latest version. This is my first time sending PR, if there's anything I need to adjust just let me know~",
"Thanks, most changes are gone :)\r\nIt still seems to include changes though - do you mind try creating a new branch from upstream/master and create a new PR please ?",
"Yeah sure, I'll try to send a new PR today!",
"Please forward to [#3850](https://github.com/huggingface/datasets/pull/3850)",
"Thanks ! Closing this one in favor of https://github.com/huggingface/datasets/pull/3850/files"
] | 1,645,148,086,000 | 1,646,732,328,000 | 1,646,732,328,000 | NONE | null | In this PR, tqdm arguments can be passed to the map() function and such, in order to be more flexible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3749/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3749/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3749",
"html_url": "https://github.com/huggingface/datasets/pull/3749",
"diff_url": "https://github.com/huggingface/datasets/pull/3749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3749.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3748/comments | https://api.github.com/repos/huggingface/datasets/issues/3748/events | https://github.com/huggingface/datasets/pull/3748 | 1,142,128,763 | PR_kwDODunzps4zCEyM | 3,748 | Add tqdm arguments | {
"login": "penguinwang96825",
"id": 28087825,
"node_id": "MDQ6VXNlcjI4MDg3ODI1",
"avatar_url": "https://avatars.githubusercontent.com/u/28087825?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/penguinwang96825",
"html_url": "https://github.com/penguinwang96825",
"followers_url": "https://api.github.com/users/penguinwang96825/followers",
"following_url": "https://api.github.com/users/penguinwang96825/following{/other_user}",
"gists_url": "https://api.github.com/users/penguinwang96825/gists{/gist_id}",
"starred_url": "https://api.github.com/users/penguinwang96825/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/penguinwang96825/subscriptions",
"organizations_url": "https://api.github.com/users/penguinwang96825/orgs",
"repos_url": "https://api.github.com/users/penguinwang96825/repos",
"events_url": "https://api.github.com/users/penguinwang96825/events{/privacy}",
"received_events_url": "https://api.github.com/users/penguinwang96825/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,145,275,000 | 1,645,145,955,000 | 1,645,145,955,000 | NONE | null | In this PR, there are two changes.
1. The progress bar can now be shown by providing the length of the iterator.
2. tqdm_kwargs can be passed in to enable more flexible control of the tqdm library.
"url": "https://api.github.com/repos/huggingface/datasets/issues/3748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3748/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3748",
"html_url": "https://github.com/huggingface/datasets/pull/3748",
"diff_url": "https://github.com/huggingface/datasets/pull/3748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3748.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3747/comments | https://api.github.com/repos/huggingface/datasets/issues/3747/events | https://github.com/huggingface/datasets/issues/3747 | 1,141,688,854 | I_kwDODunzps5EDMoW | 3,747 | Passing invalid subset should throw an error | {
"login": "jxmorris12",
"id": 13238952,
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jxmorris12",
"html_url": "https://github.com/jxmorris12",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,645,121,771,000 | 1,645,121,771,000 | null | CONTRIBUTOR | null | ## Describe the bug
Only some datasets have a subset (as in `load_dataset(name, subset)`). If you pass an invalid subset, an error should be thrown.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('rotten_tomatoes', 'asdfasdfa')
```
## Expected results
This should break, since `'asdfasdfa'` isn't a subset of the `rotten_tomatoes` dataset.
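In the meantime, valid subset names can be checked up front on the user side (a sketch; assuming `get_dataset_config_names` is available in the installed `datasets` version):
```python
import datasets

configs = datasets.get_dataset_config_names("rotten_tomatoes")
subset = "asdfasdfa"
if subset not in configs:
    raise ValueError(f"Unknown subset {subset!r}; valid subsets: {configs}")
```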
## Actual results
This API call silently succeeds. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3747/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3746/comments | https://api.github.com/repos/huggingface/datasets/issues/3746/events | https://github.com/huggingface/datasets/pull/3746 | 1,141,612,810 | PR_kwDODunzps4zAS-C | 3,746 | Use the same seed to shuffle shards and metadata in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,117,591,000 | 1,645,628,459,000 | 1,645,628,458,000 | MEMBER | null | When shuffling in streaming mode, those two entangled lists are shuffled independently. In this PR I changed this to shuffle the lists of same length with the exact same seed, in order for the files and metadata to still be aligned.
```python
gen_kwargs = {
"files": [os.path.join(data_dir, filename) for filename in all_files],
"metadata_files": [all_metadata[filename] for filename in all_files],
}
```
IMO this is important to avoid big but silent issues.
Fix https://github.com/huggingface/datasets/issues/3744 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3746/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3746",
"html_url": "https://github.com/huggingface/datasets/pull/3746",
"diff_url": "https://github.com/huggingface/datasets/pull/3746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3746.patch",
"merged_at": 1645628458000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3745/comments | https://api.github.com/repos/huggingface/datasets/issues/3745/events | https://github.com/huggingface/datasets/pull/3745 | 1,141,520,953 | PR_kwDODunzps4y__m2 | 3,745 | Add mIoU metric | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hmm the doctest failed again - maybe the full result needs to be on one single line",
"cc @lhoestq for the final review",
"Cool ! Feel free to merge if it's all good for you"
] | 1,645,113,137,000 | 1,646,745,626,000 | 1,646,745,626,000 | CONTRIBUTOR | null | This PR adds the mean Intersection-over-Union metric to the library, useful for tasks like semantic segmentation.
It is entirely based on mmseg's [implementation](https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/core/evaluation/metrics.py).
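For reference, the core of a NumPy-only mIoU computation looks roughly like this (a simplified sketch of my own; the actual metric exposes more options, e.g. for ignored labels and NaN handling):
```python
import numpy as np

def mean_iou(pred, label, num_labels):
    # pred and label are integer class maps with values in [0, num_labels)
    confusion = np.bincount(
        num_labels * label.flatten() + pred.flatten(), minlength=num_labels ** 2
    ).reshape(num_labels, num_labels)
    intersection = np.diag(confusion)
    union = confusion.sum(axis=0) + confusion.sum(axis=1) - intersection
    iou = intersection / union  # classes absent from both give NaN
    return np.nanmean(iou)
```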
I've removed any PyTorch dependency, and rely on Numpy only. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3745/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3745",
"html_url": "https://github.com/huggingface/datasets/pull/3745",
"diff_url": "https://github.com/huggingface/datasets/pull/3745.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3745.patch",
"merged_at": 1646745626000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3744/comments | https://api.github.com/repos/huggingface/datasets/issues/3744/events | https://github.com/huggingface/datasets/issues/3744 | 1,141,461,165 | I_kwDODunzps5ECVCt | 3,744 | Better shards shuffling in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | null | [] | null | [] | 1,645,110,441,000 | 1,645,628,458,000 | 1,645,628,458,000 | MEMBER | null | Sometimes a dataset script has a `_split_generators` that returns several files as well as the corresponding metadata of each file. It often happens that they end up in two separate lists in the `gen_kwargs`:
```python
gen_kwargs = {
"files": [os.path.join(data_dir, filename) for filename in all_files],
"metadata_files": [all_metadata[filename] for filename in all_files],
}
```
It happened for Multilingual Spoken Words for example in #3666
However currently **the two lists are shuffled independently** when shuffling the shards in streaming mode. This leads to `_generate_examples` not having the right metadata for each file.
To prevent this, I suggest that we always shuffle lists of the same length in the exact same way, to avoid such a big but silent issue.
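One way to implement this (a minimal sketch of the idea, not the actual patch) is to draw a single permutation from a seeded RNG and reorder every list with it:
```python
import random

def shuffle_aligned(seed, *lists):
    # shuffle several same-length lists with one permutation so they stay aligned
    indices = list(range(len(lists[0])))
    random.Random(seed).shuffle(indices)
    return [[lst[i] for i in indices] for lst in lists]

files, metadata_files = shuffle_aligned(42, gen_kwargs["files"], gen_kwargs["metadata_files"])
```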
cc @polinaeterna | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3744/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3743/comments | https://api.github.com/repos/huggingface/datasets/issues/3743/events | https://github.com/huggingface/datasets/pull/3743 | 1,141,176,011 | PR_kwDODunzps4y-2Do | 3,743 | initial monash time series forecasting repository | {
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR, merging !",
"thanks 🙇🏽 "
] | 1,645,095,091,000 | 1,647,856,481,000 | 1,647,856,216,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3743/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3743",
"html_url": "https://github.com/huggingface/datasets/pull/3743",
"diff_url": "https://github.com/huggingface/datasets/pull/3743.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3743.patch",
"merged_at": 1647856216000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3742/comments | https://api.github.com/repos/huggingface/datasets/issues/3742/events | https://github.com/huggingface/datasets/pull/3742 | 1,141,174,549 | PR_kwDODunzps4y-1v5 | 3,742 | Fix ValueError message formatting in int2str | {
"login": "akulchik",
"id": 41182803,
"node_id": "MDQ6VXNlcjQxMTgyODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/41182803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akulchik",
"html_url": "https://github.com/akulchik",
"followers_url": "https://api.github.com/users/akulchik/followers",
"following_url": "https://api.github.com/users/akulchik/following{/other_user}",
"gists_url": "https://api.github.com/users/akulchik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akulchik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akulchik/subscriptions",
"organizations_url": "https://api.github.com/users/akulchik/orgs",
"repos_url": "https://api.github.com/users/akulchik/repos",
"events_url": "https://api.github.com/users/akulchik/events{/privacy}",
"received_events_url": "https://api.github.com/users/akulchik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,095,008,000 | 1,645,111,922,000 | 1,645,111,922,000 | CONTRIBUTOR | null | Hi!
I bumped into this particular `ValueError` during my work (because an instance of `np.int64` was passed instead of regular Python `int`), and so I had to `print(type(values))` myself. Apparently, it's just the missing `f` to make the message an f-string.
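For illustration, the behavioural difference the missing `f` causes (the message text below is made up; the actual one in `int2str` differs):
```python
values = 3.5  # e.g. something that is not a regular Python int

print("Invalid values {values} of type {type(values)}")   # missing f: braces are printed literally
print(f"Invalid values {values} of type {type(values)}")  # with f: value and type are interpolated
```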
It ain't much for a contribution, but it's honest work. Hope it spares someone else a few seconds in the future 😃 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3742/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3742",
"html_url": "https://github.com/huggingface/datasets/pull/3742",
"diff_url": "https://github.com/huggingface/datasets/pull/3742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3742.patch",
"merged_at": 1645111922000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3741/comments | https://api.github.com/repos/huggingface/datasets/issues/3741/events | https://github.com/huggingface/datasets/pull/3741 | 1,141,132,649 | PR_kwDODunzps4y-syt | 3,741 | Rm sphinx doc | {
"login": "mishig25",
"id": 11827707,
"node_id": "MDQ6VXNlcjExODI3NzA3",
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mishig25",
"html_url": "https://github.com/mishig25",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"repos_url": "https://api.github.com/users/mishig25/repos",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,092,697,000 | 1,645,092,917,000 | 1,645,092,912,000 | CONTRIBUTOR | null | Checklist
- [x] Update circle ci yaml
- [x] Delete sphinx static & python files in docs dir
- [x] Update readme in docs dir
- [ ] Update docs config in setup.py | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3741/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3741",
"html_url": "https://github.com/huggingface/datasets/pull/3741",
"diff_url": "https://github.com/huggingface/datasets/pull/3741.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3741.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3740/comments | https://api.github.com/repos/huggingface/datasets/issues/3740/events | https://github.com/huggingface/datasets/pull/3740 | 1,140,720,739 | PR_kwDODunzps4y9XAP | 3,740 | Support streaming for pubmed | {
"login": "abhi-mosaic",
"id": 77638579,
"node_id": "MDQ6VXNlcjc3NjM4NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhi-mosaic",
"html_url": "https://github.com/abhi-mosaic",
"followers_url": "https://api.github.com/users/abhi-mosaic/followers",
"following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions",
"organizations_url": "https://api.github.com/users/abhi-mosaic/orgs",
"repos_url": "https://api.github.com/users/abhi-mosaic/repos",
"events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhi-mosaic/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova just FYI, since you were so helpful with the previous pubmed issue :) ",
"IIRC streaming from FTP is not fully tested yet, so I'm fine with switching to HTTPS for now, as long as the download speed/availability is great",
"@albertvillanova Thanks for pointing me to the `ET` module replacement. It should look a lot cleaner now.\r\n\r\nUnfortunately I tried keeping the `ftp://` protocol but was seeing timeout errors? in streaming mode (below). I think the `https://` performance is not an issue, when I was profiling the `open(..) -> f.read() -> etree.fromstring(xml_str)` codepath, most of the time was spent in the XML parsing rather than the data download.\r\n\r\n\r\nError when using `ftp://`:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 301, in _fetch_range\r\n self.fs.ftp.retrbinary(\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 430, in retrbinary\r\n callback(data)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 293, in callback\r\n raise TransferDone\r\nfsspec.implementations.ftp.TransferDone\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"test_pubmed_streaming.py\", line 9, in <module>\r\n print (next(iter(pubmed_train_streaming)))\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 365, in __iter__\r\n for key, example in self._iter():\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 362, in _iter\r\n yield from ex_iterable\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/iterable_dataset.py\", line 79, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/af552ed918e2841e8427203530e3cfed3a8bc3213041d7853bea1ca67eec683d/pubmed.py\", line 362, in _generate_examples\r\n tree = ET.parse(filename)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/streaming.py\", line 65, in wrapper\r\n return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 636, in xet_parse\r\n return ET.parse(f, parser=parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 1202, in parse\r\n tree.parse(source, parser)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py\", line 595, in parse\r\n self._root = parser._parse_whole(source)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/abhi-datasets/src/datasets/utils/streaming_download_manager.py\", line 293, in read_with_retries\r\n out = read(*args, **kwargs)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 292, in read\r\n return self._buffer.read(size)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/_compression.py\", line 68, in readinto\r\n data = self.read(len(byte_view))\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 479, in read\r\n if not self._read_gzip_header():\r\n File 
\"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 422, in _read_gzip_header\r\n magic = self._fp.read(2)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/gzip.py\", line 96, in read\r\n self.file.read(size-self._length+read)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/spec.py\", line 1485, in read\r\n out = self.cache._fetch(self.loc, self.loc + length)\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/caching.py\", line 153, in _fetch\r\n self.cache = self.fetcher(start, end) # new block replaces old\r\n File \"/Users/abhinav/Documents/mosaicml/hf_datasets/venv/lib/python3.8/site-packages/fsspec/implementations/ftp.py\", line 311, in _fetch_range\r\n self.fs.ftp.getmultiline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 224, in getmultiline\r\n line = self.getline()\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/ftplib.py\", line 206, in getline\r\n line = self.file.readline(self.maxline + 1)\r\n File \"/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/socket.py\", line 669, in readinto\r\n return self._sock.recv_into(b)\r\nsocket.timeout: timed out\r\n```"
] | 1,645,057,102,000 | 1,645,195,333,000 | 1,645,195,333,000 | CONTRIBUTOR | null | This PR makes some minor changes to the `pubmed` dataset to allow for `streaming=True`. Fixes #3739.
Basically, I followed the C4 dataset (which already works in streaming mode) as an example and made the following changes:
* Change URL prefix from `ftp://` to `https://`
* Explicitly `open` the filename and pass the XML contents to `etree.fromstring(xml_str)` (see the sketch below)
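For illustration only (a simplified sketch, not the exact loading-script code), the parsing step now looks roughly like this, where `open` is the streaming-aware opener that `datasets` patches into loading scripts:
```python
import xml.etree.ElementTree as etree
def parse_shard(filename):
    # read the (already decompressed) XML bytes through the patched open(),
    # then build the tree from the in-memory contents instead of letting
    # ElementTree try to open the chained URL itself
    with open(filename, "rb") as f:
        xml_str = f.read()
    return etree.fromstring(xml_str)
```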
The GitHub diff tool makes it look like the changes are larger than they are, sorry about that.
I tested locally and the `pubmed` dataset now works in both normal and streaming modes. There is some overhead at the start of each shard in streaming mode as building the XML tree online is quite slow (each pubmed .xml.gz file is ~20MB), but the overhead gets amortized over all the samples in the shard. On my laptop with a single CPU worker I am able to stream at about ~600 samples/s. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3740/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3740/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3740",
"html_url": "https://github.com/huggingface/datasets/pull/3740",
"diff_url": "https://github.com/huggingface/datasets/pull/3740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3740.patch",
"merged_at": 1645195333000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3739/comments | https://api.github.com/repos/huggingface/datasets/issues/3739/events | https://github.com/huggingface/datasets/issues/3739 | 1,140,329,189 | I_kwDODunzps5D-Arl | 3,739 | Pubmed dataset does not work in streaming mode | {
"login": "abhi-mosaic",
"id": 77638579,
"node_id": "MDQ6VXNlcjc3NjM4NTc5",
"avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhi-mosaic",
"html_url": "https://github.com/abhi-mosaic",
"followers_url": "https://api.github.com/users/abhi-mosaic/followers",
"following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions",
"organizations_url": "https://api.github.com/users/abhi-mosaic/orgs",
"repos_url": "https://api.github.com/users/abhi-mosaic/repos",
"events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhi-mosaic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting, @abhi-mosaic (related to #3655).\r\n\r\nPlease note that `xml.etree.ElementTree.parse` already supports streaming:\r\n- #3476\r\n\r\nNo need to refactor to use `open`/`xopen`. Is is enough with importing the package `as ET` (instead of `as etree`)."
] | 1,645,031,617,000 | 1,645,195,333,000 | 1,645,195,333,000 | CONTRIBUTOR | null | ## Describe the bug
Trying to use the `pubmed` dataset with `streaming=True` fails.
## Steps to reproduce the bug
```python
import datasets
pubmed_train = datasets.load_dataset('pubmed', split='train', streaming=True)
print (next(iter(pubmed_train)))
```
## Expected results
I would expect to see the first training sample from the pubmed dataset.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 367, in __iter__
for key, example in self._iter():
File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 364, in _iter
yield from ex_iterable
File "/Users/abhinav/Documents/mosaicml/mosaicml_venv/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 79, in __iter__
for key, example in self.generate_examples_fn(**self.kwargs):
File "/Users/abhinav/.cache/huggingface/modules/datasets_modules/datasets/pubmed/9715addf10c42a7877a2149ae0c5f2fddabefc775cd1bd9b03ac3f012b86ce46/pubmed.py", line 373, in _generate_examples
tree = etree.parse(filename)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 1202, in parse
tree.parse(source, parser)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/xml/etree/ElementTree.py", line 584, in parse
source = open(source, "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'gzip://pubmed21n0001.xml::ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0001.xml.gz'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.2
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.8.2
- PyArrow version: 6.0.0
## Comments
The error looks like an issue with `open` vs. `xopen` inside the `xml` package. It looks like it's trying to open the remote source URL, which has been prefixed with `gzip://...`.
Maybe there can be an explicit `xopen` before passing the raw data to `etree`, something like:
```python
# Before: ElementTree tries to open() the chained URL itself and fails
tree = etree.parse(filename)
root = tree.getroot()
# After: read the bytes through the streaming-aware xopen first
# (xopen comes from datasets' streaming utilities, e.g. datasets.utils.streaming_download_manager)
with xopen(filename) as f:
    data_str = f.read()
    root = etree.fromstring(data_str)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3739/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3738/comments | https://api.github.com/repos/huggingface/datasets/issues/3738/events | https://github.com/huggingface/datasets/issues/3738 | 1,140,164,253 | I_kwDODunzps5D9Yad | 3,738 | For data-only datasets, streaming and non-streaming don't behave the same | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Note that we might change the heuristic and create a different config per file, at least in that case.",
"Hi @severo, thanks for reporting.\r\n\r\nYes, this happens because when non-streaming, a cast of all data is done in order to \"concatenate\" it all into a single dataset (thus the error), while this casting is not done while yielding item by item in streaming mode.\r\n\r\nMaybe in streaming mode we should keep the schema (inferred from the first item) and throw an exception if a subsequent item does not conform to the inferred schema?",
"Why do we want to concatenate the files? Is it the expected behavior for most datasets that lack a script and dataset info?",
"These files are two different dataset configurations since they don't share the same schema.\r\n\r\nIMO the streaming mode should fail in this case, as @albertvillanova said.\r\n\r\nThere is one challenge though: inferring the schema from the first example is not robust enough in the general case - especially if some fields are nullable. I guess we can at least make sure that no new columns are added",
"OK. So, if we make the streaming also fail, the dataset https://huggingface.co/datasets/huggingface/transformers-metadata will never be [viewable](https://github.com/huggingface/datasets-preview-backend/issues/144) (be it using streaming or fallback to downloading the files), right?\r\n",
"Yes, until we have a way for the user to specify explicitly that those two files are different configurations.\r\n\r\nWe can maybe have some rule to detect this automatically, maybe checking the first line of each file ? That would mean that for dataset of 10,000+ files we would have to verify every single one of them just to know if there is one ore more configurations, so I'm not sure if this is a good idea",
"i think requiring the user to specify that those two files are different configurations is in that case perfectly reasonable.\r\n\r\n(Maybe at some point we could however detect this type of case and prompt them to define a config mapping etc)",
"OK, so, before closing the issue, what do you think should be done?\r\n\r\n> Maybe in streaming mode we should keep the schema (inferred from the first item) and throw an exception if a subsequent item does not conform to the inferred schema?\r\n\r\nor nothing?",
"We should at least raise an error if a new sample has column names that are missing, or if it has extra columns. No need to check for the type for now.\r\n\r\nI'm in favor of having an error especially because we want to avoid silent issues as much as possible - i.e. when something goes wrong (when schemas don't match or some data are missing) and no errors/warnings are raised.\r\n\r\nConsistency between streaming and non-streaming is also important."
] | 1,645,024,857,000 | 1,645,453,495,000 | null | CONTRIBUTOR | null | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadata", split="train", streaming=True);
rows = list(iterable_dataset.take(100))
rows[0]
# {'model_type': 'albert', 'pytorch': True, 'tensorflow': True, 'flax': True, 'processor': 'AutoTokenizer'}
rows[99]
# {'model_class': 'BartModel', 'pipeline_tag': 'feature-extraction', 'auto_class': 'AutoModel'}
```
In normal mode, an exception is thrown:
```python
import datasets as ds
dataset = ds.load_dataset("huggingface/transformers-metadata", split="train");
```
```
ValueError: Couldn't cast
model_class: string
pipeline_tag: string
auto_class: string
to
{'model_type': Value(dtype='string', id=None), 'pytorch': Value(dtype='bool', id=None), 'tensorflow': Value(dtype='bool', id=None), 'flax': Value(dtype='bool', id=None), 'processor': Value(dtype='string', id=None)}
because column names don't match
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3738/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3738/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3737/comments | https://api.github.com/repos/huggingface/datasets/issues/3737/events | https://github.com/huggingface/datasets/pull/3737 | 1,140,148,050 | PR_kwDODunzps4y7uFf | 3,737 | Make RedCaps streamable | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,024,343,000 | 1,645,025,318,000 | 1,645,025,317,000 | CONTRIBUTOR | null | Make RedCaps streamable.
@lhoestq Using `data/redcaps_v1.0_annotations.zip` as a download URL gives an error locally when running `datasets-cli test` (will investigate this another time) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3737/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3737",
"html_url": "https://github.com/huggingface/datasets/pull/3737",
"diff_url": "https://github.com/huggingface/datasets/pull/3737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3737.patch",
"merged_at": 1645025317000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3736/comments | https://api.github.com/repos/huggingface/datasets/issues/3736/events | https://github.com/huggingface/datasets/pull/3736 | 1,140,134,483 | PR_kwDODunzps4y7rMR | 3,736 | Local paths in common voice | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just changed to `dl_manager.is_streaming` rather than an additional parameter `streaming` that has to be handled by the DatasetBuilder class - this way the streaming logic doesn't interfere with the base builder's code.\r\n\r\nI think it's better this way, but let me know if you preferred the previous way and I can revert\r\n\r\n> But on the other hand, IMHO, I think this specific solution adds complexity to handling streaming/non-streaming, and moves this complexity to the loading script and thus to the contributors/users who want to create the loading script for their canonical/community datasets (instead of keeping it hidden form the end users).\r\n\r\nI'm down to discuss this more in the future !",
"@lhoestq good idea: much cleaner this way! That way each class has its own responsibilities without mixing around..."
] | 1,645,023,689,000 | 1,645,521,224,000 | 1,645,521,223,000 | MEMBER | null | Continuation of https://github.com/huggingface/datasets/pull/3664:
- pass the `streaming` parameter to _split_generator
- update @anton-l's code to use this parameter for `common_voice`
- add a comment to explain why we use `download_and_extract` in non-streaming and `iter_archive` in streaming
Now the `common_voice` dataset has a local path back in `ds["path"]`, and this field is `None` in streaming mode.
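For reference, a rough sketch of the resulting pattern in a loading script, using the `dl_manager.is_streaming` flag mentioned in the comments (the URL and kwarg names are placeholders, and in the real script this is a method of the dataset builder):
```python
import datasets
_ARCHIVE_URL = "https://example.com/common_voice_lang.tar.gz"  # placeholder, not the real URL
def _split_generators(self, dl_manager):
    if dl_manager.is_streaming:
        # streaming: iterate over the archive members directly, no local paths
        archive_path = dl_manager.download(_ARCHIVE_URL)
        audio_files = dl_manager.iter_archive(archive_path)
        local_extracted_path = None
    else:
        # non-streaming: download and extract so examples can expose a real local path
        local_extracted_path = dl_manager.download_and_extract(_ARCHIVE_URL)
        audio_files = None
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"audio_files": audio_files, "local_extracted_path": local_extracted_path},
        )
    ]
```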
cc @patrickvonplaten @anton-l @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3736/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3736/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3736",
"html_url": "https://github.com/huggingface/datasets/pull/3736",
"diff_url": "https://github.com/huggingface/datasets/pull/3736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3736.patch",
"merged_at": 1645521223000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3735/comments | https://api.github.com/repos/huggingface/datasets/issues/3735/events | https://github.com/huggingface/datasets/issues/3735 | 1,140,087,891 | I_kwDODunzps5D9FxT | 3,735 | Performance of `datasets` at scale | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"> using command line git-lfs - [...] 300MB/s!\r\n\r\nwhich server location did you upload from?",
"From GCP region `us-central1-a`.",
"The most surprising part to me is the saving time. Wondering if it could be due to compression (`ParquetWriter` uses SNAPPY compression by default; it can be turned off with `to_parquet(..., compression=None)`). ",
"+1 to what @mariosasko mentioned. Also, @lvwerra did you parallelize `to_parquet` using similar approach in #2747? (we used multiprocessing at the shard level). I'm working on a similar PR to add multi_proc in `to_parquet` which might give you further speed up. \r\nStas benchmarked his approach and mine in this [gist](https://gist.github.com/stas00/dc1597a1e245c5915cfeefa0eee6902c) for `lama` dataset when we were working on adding multi_proc support for `to_json`.",
"@mariosasko I did not turn it off but I can try the next time - I have to run the pipeline again, anyway. \r\n\r\n@bhavitvyamalik Yes, I also sharded the dataset and used multiprocessing to save each shard. I'll have a closer look at your approach, too."
] | 1,645,021,412,000 | 1,647,335,729,000 | null | MEMBER | null | # Performance of `datasets` at 1TB scale
## What is this?
While processing a large dataset, I monitored the performance of the `datasets` library to see if there were any bottlenecks. The insights from this analysis could guide decisions on how to improve the performance of the library.
## Dataset
The dataset is a 1.1TB extract from GitHub with 120M code files and is stored as 5000 `.json.gz` files. The goal of the preprocessing is to remove duplicates and filter files based on their stats. While the calculation of the hashes for deduplication and of the stats for filtering can be parallelized, the filtering itself is run with a single process. After processing, the files are pushed to the hub.
## Machine
The experiment was run on an `m1` machine on GCP with 96 CPU cores and 1.3TB of RAM.
## Performance breakdown
- Loading the data **3.5h** (_30sec_ from cache)
- **1h57min** single core loading (not sure what is going on here, corresponds to second progress bar)
- **1h10min** multi core json reading
- **20min** remaining time before and after the two main processes mentioned above
- Process the data **2h** (_20min_ from cache)
  - **20min** Getting ready for processing
- **40min** Hashing and files stats (96 workers)
- **58min** Deduplication filtering (single worker)
- Save parquet files **5h**
- Saving 1000 parquet files (16 workers)
- Push to hub **37min**
- **34min** git add
- **3min** git push (several hours with `Repository.git_push()`)
## Conclusion
It appears that loading and saving the data are the main bottlenecks at that scale (**8.5h**), whereas processing (**2h**) and pushing the data to the hub (**0.5h**) are relatively fast. To optimize performance at this scale it would make sense to work from such an end-to-end example and target the bottlenecks, which seem to be loading from and saving to disk; the processing itself already runs relatively fast.
## Notes
- map operation on a 1TB dataset with 96 workers requires >1TB RAM
- map operation does not maintain 100% CPU utilization with 96 workers
- sometimes when the script crashes all the data files have a corresponding `*.lock` file in the data folder (or multiple, e.g. `*.lock.lock` when it happened several times). This causes the cache **not** to be triggered (which is significant at that scale) - I guess because there are new data files
- parallelizing `to_parquet` decreased the saving time from 17h to 5h; however, adding more workers at this point had almost no effect (see the sketch after this list). Not sure if this is:
a) a bug in my parallelization logic,
b) i/o limit to load data from disk to memory or
c) i/o limit to write from memory to disk.
- Using `Repository.git_push()` was much slower than using command line `git-lfs` - 10-20MB/s vs. 300MB/s! The `Dataset.push_to_hub()` function is even slower as it only uploads one file at a time with only a few MB/s, whereas `Repository.git_push()` pushes files in parallel (each at a similar speed).
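For reference, the sharded parquet saving followed roughly this pattern (a simplified sketch - the shard count, paths and pool size are illustrative, not the exact script):
```python
from multiprocessing import Pool
def _save_shard(args):
    dataset, num_shards, index, out_dir = args
    shard = dataset.shard(num_shards=num_shards, index=index, contiguous=True)
    # to_parquet reportedly compresses with SNAPPY by default; compression=None is another knob to benchmark
    shard.to_parquet(f"{out_dir}/data-{index:05d}.parquet")
def save_sharded(dataset, out_dir, num_shards=1000, num_proc=16):
    with Pool(num_proc) as pool:
        pool.map(_save_shard, [(dataset, num_shards, i, out_dir) for i in range(num_shards)])
```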
cc @lhoestq @julien-c @LysandreJik @SBrandeis
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3735/reactions",
"total_count": 13,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 9,
"rocket": 0,
"eyes": 4
} | https://api.github.com/repos/huggingface/datasets/issues/3735/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3734/comments | https://api.github.com/repos/huggingface/datasets/issues/3734/events | https://github.com/huggingface/datasets/pull/3734 | 1,140,050,336 | PR_kwDODunzps4y7ZU2 | 3,734 | Fix bugs in NewsQA dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,645,019,488,000 | 1,645,084,466,000 | 1,645,084,465,000 | MEMBER | null | Fix #3733. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3734/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3734",
"html_url": "https://github.com/huggingface/datasets/pull/3734",
"diff_url": "https://github.com/huggingface/datasets/pull/3734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3734.patch",
"merged_at": 1645084465000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3733/comments | https://api.github.com/repos/huggingface/datasets/issues/3733/events | https://github.com/huggingface/datasets/issues/3733 | 1,140,011,378 | I_kwDODunzps5D8zFy | 3,733 | Bugs in NewsQA dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,645,017,457,000 | 1,645,084,465,000 | 1,645,084,465,000 | MEMBER | null | ## Describe the bug
The NewsQA dataset has the following bugs:
- the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict
- the field `badQuestion` does not appear in `answers` nor `validated_answers`
## Steps to reproduce the bug
By inspecting the dataset script we can see that:
- the parsing of `validated_answers` is a copy-paste of the one for `answers`
- the `badQuestion` field is ignored in the parsing of both `answers` and `validated_answers`
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3733/timeline | null | completed | null | null | false |