url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (int64) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | draft (bool) | pull_request (dict) | body (string) | closed_by (dict) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | is_pull_request (bool) | time_to_close (float64) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6779/comments | https://api.github.com/repos/huggingface/datasets/issues/6779/events | https://github.com/huggingface/datasets/pull/6779 | 2,226,075,551 | PR_kwDODunzps5rvSA8 | 6,779 | Install dependencies with `uv` in CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-04-04T17:02:51 | 2024-04-08T13:34:01 | 2024-04-08T13:27:44 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6779.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6779",
"merged_at": "2024-04-08T13:27:43",
"patch_url": "https://github.com/huggingface/datasets/pull/6779.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6779"
} | `diffusers` (https://github.com/huggingface/diffusers/pull/7116) and `huggingface_hub` (https://github.com/huggingface/huggingface_hub/pull/2072) also use `uv` to install their dependencies, so we can do the same here.
It seems to make the "Install dependencies" step 5-8x faster in the `ubuntu` jobs and 1.5-2x faster in the `windows` one.
Besides introducing `uv` in CI, this PR bumps the minimal `tensorflow` version requirement to align with Transformers and simplifies the SpaCy hashing tests (using blank language models instead of the pre-trained ones).
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6779/timeline | null | null | true | 92.414722 |
https://api.github.com/repos/huggingface/datasets/issues/6778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6778/comments | https://api.github.com/repos/huggingface/datasets/issues/6778/events | https://github.com/huggingface/datasets/issues/6778 | 2,226,040,636 | I_kwDODunzps6Erq88 | 6,778 | Dataset.to_csv() missing commas in columns with lists | {
"avatar_url": "https://avatars.githubusercontent.com/u/100041276?v=4",
"events_url": "https://api.github.com/users/mpickard-dataprof/events{/privacy}",
"followers_url": "https://api.github.com/users/mpickard-dataprof/followers",
"following_url": "https://api.github.com/users/mpickard-dataprof/following{/other_user}",
"gists_url": "https://api.github.com/users/mpickard-dataprof/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mpickard-dataprof",
"id": 100041276,
"login": "mpickard-dataprof",
"node_id": "U_kgDOBfaCPA",
"organizations_url": "https://api.github.com/users/mpickard-dataprof/orgs",
"received_events_url": "https://api.github.com/users/mpickard-dataprof/received_events",
"repos_url": "https://api.github.com/users/mpickard-dataprof/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mpickard-dataprof/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mpickard-dataprof/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mpickard-dataprof",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-04-04T16:46:13 | 2024-04-08T15:24:41 | null | NONE | null | null | null | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the dataset is loaded back in, the data structure of the column with a list is not correct.
Here's an example below.
Obviously, it's not as trivial as inserting commas into the list, since it's a comma-separated file. But hopefully there's a way to export the list so that it'll be imported by `load_dataset()` correctly.
### Steps to reproduce the bug
Here's some code to reproduce the bug:
```python
from datasets import Dataset
ds = Dataset.from_dict(
{
"pokemon": ["bulbasaur", "squirtle"],
"type": ["grass", "water"]
}
)
def ascii_to_hex(text):
return [ord(c) for c in text]
ds = ds.map(lambda x: {"int": ascii_to_hex(x['pokemon'])})
ds.to_csv('../output/temp.csv')
```
temp.csv then contains the output shown as "ACTUAL OUTPUT" under "Expected behavior" below.
### Expected behavior
ACTUAL OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[ 98 117 108 98 97 115 97 117 114]
squirtle,water,[115 113 117 105 114 116 108 101]
```
EXPECTED OUTPUT:
```
pokemon,type,int
bulbasaur,grass,[98, 117, 108, 98, 97, 115, 97, 117, 114]
squirtle,water,[115, 113, 117, 105, 114, 116, 108, 101]
```
or probably something more like this since it's a CSV file:
```
pokemon,type,int
bulbasaur,grass,"[98, 117, 108, 98, 97, 115, 97, 117, 114]"
squirtle,water,"[115, 113, 117, 105, 114, 116, 108, 101]"
```
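A workaround sketch from the editor, not part of the original report: serializing the list column to a JSON string before export survives the CSV round trip. It assumes the `ds` object from the snippet above and reuses the `int` column name from the example.
```python
# Editor's workaround sketch (assumes `ds` from the snippet above): store the
# list column as a JSON string so the CSV round trip preserves the commas.
import json
from datasets import load_dataset

ds = ds.map(lambda x: {"int": json.dumps(x["int"])})  # list -> "[98, 117, ...]"
ds.to_csv("temp.csv")

reloaded = load_dataset("csv", data_files="temp.csv", split="train")
reloaded = reloaded.map(lambda x: {"int": json.loads(x["int"])})  # back to lists
```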
### Environment info
### Package Version
Name: datasets
Version: 2.16.1
### Python
version: 3.10.12
### OS Info
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
...
UBUNTU_CODENAME=jammy | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6778/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6778/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6777/comments | https://api.github.com/repos/huggingface/datasets/issues/6777/events | https://github.com/huggingface/datasets/issues/6777 | 2,224,611,247 | I_kwDODunzps6EmN-v | 6,777 | .Jsonl metadata not detected | {
"avatar_url": "https://avatars.githubusercontent.com/u/81643693?v=4",
"events_url": "https://api.github.com/users/nighting0le01/events{/privacy}",
"followers_url": "https://api.github.com/users/nighting0le01/followers",
"following_url": "https://api.github.com/users/nighting0le01/following{/other_user}",
"gists_url": "https://api.github.com/users/nighting0le01/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nighting0le01",
"id": 81643693,
"login": "nighting0le01",
"node_id": "MDQ6VXNlcjgxNjQzNjkz",
"organizations_url": "https://api.github.com/users/nighting0le01/orgs",
"received_events_url": "https://api.github.com/users/nighting0le01/received_events",
"repos_url": "https://api.github.com/users/nighting0le01/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nighting0le01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nighting0le01/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nighting0le01",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 5 | 2024-04-04T06:31:53 | 2024-04-05T21:14:48 | null | NONE | null | null | null | ### Describe the bug
Hi, I have the following directory structure:
```
|--dataset
|  |-- images
|  |-- metadata1000.csv
|  |-- metadata1000.jsonl
|  |-- padded_images
```
Example of the metadata1000.jsonl file:
```
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white label on the left side of the triangle", "image": "images/212734.png", "gaussian_padded_image": "padded_images/p_212734.png"}
{"caption": "an eye-level full shot of a large elephant and a baby elephant standing in a watering hole on the left side is a small elephant with its head turned to the right of dry land, trees, and bushes", "image": "images/212735.png", "gaussian_padded_image": "padded_images/p_212735.png"}
...
```
I'm trying to use `dataset = load_dataset("imagefolder", data_dir='/dataset/', split='train')` to load the dataset, however it is not able to load according to the fields in metadata1000.jsonl.
Please assist with loading the data properly.
I am also getting:
```
File "/workspace/train_trans_vae.py", line 1089, in <module>
print(get_metadata_patterns('/dataset/'))
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 499, in get_metadata_patterns
raise FileNotFoundError(f"The directory at {base_path} doesn't contain any metadata file") from None
FileNotFoundError: The directory at /dataset/ doesn't contain any metadata file
```
when trying
```
from datasets.data_files import get_metadata_patterns
print(get_metadata_patterns('/dataset/'))
```
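A hedged editor's note: the `imagefolder` builder only recognizes metadata files named exactly `metadata.csv` or `metadata.jsonl`, and each record is expected to reference its image through a `file_name` column. Assuming the naming rule is the cause here, a minimal sketch of a fix is:
```python
# Sketch assuming the imagefolder conventions are the culprit: the metadata
# file must be named exactly "metadata.jsonl" (not "metadata1000.jsonl"), and
# each record may need a "file_name" key instead of "image".
import shutil
from datasets import load_dataset

shutil.copy("/dataset/metadata1000.jsonl", "/dataset/metadata.jsonl")
dataset = load_dataset("imagefolder", data_dir="/dataset/", split="train")
```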
### Steps to reproduce the bug
dataset Version: 2.18.0
make a similar jsonl and similar directory format
### Expected behavior
creates a dataset object with the column names, caption,image,gaussian_padded_image
### Environment info
dataset Version: 2.18.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6777/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6775/comments | https://api.github.com/repos/huggingface/datasets/issues/6775/events | https://github.com/huggingface/datasets/issues/6775 | 2,223,457,792 | I_kwDODunzps6Eh0YA | 6,775 | IndexError: Invalid key: 0 is out of bounds for size 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/38481564?v=4",
"events_url": "https://api.github.com/users/kk2491/events{/privacy}",
"followers_url": "https://api.github.com/users/kk2491/followers",
"following_url": "https://api.github.com/users/kk2491/following{/other_user}",
"gists_url": "https://api.github.com/users/kk2491/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kk2491",
"id": 38481564,
"login": "kk2491",
"node_id": "MDQ6VXNlcjM4NDgxNTY0",
"organizations_url": "https://api.github.com/users/kk2491/orgs",
"received_events_url": "https://api.github.com/users/kk2491/received_events",
"repos_url": "https://api.github.com/users/kk2491/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kk2491/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kk2491/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kk2491",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 7 | 2024-04-03T17:06:30 | 2024-04-08T01:24:35 | null | NONE | null | null | null | ### Describe the bug
I am trying to fine-tune the llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the example, the training gets successfully completed (example dataset can be found [here](https://huggingface.co/datasets/timdettmers/openassistant-guanaco)).
However, when I use my own dataset, which is in the same format as the example dataset, I get the error below (my dataset can be found [here](https://huggingface.co/datasets/kk2491/finetune_dataset_002)).
![image](https://github.com/huggingface/datasets/assets/38481564/47fa2de3-95e0-478b-a35f-58cbaf90427a)
I see the files are being read correctly from the logs:
![image](https://github.com/huggingface/datasets/assets/38481564/b0b6316c-2cc7-476c-9674-ca2222c8f4e3)
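An editor's diagnostic sketch, not from the original report: this `IndexError` typically means the training split ends up empty by the time the trainer indexes it, which a quick check can confirm.
```python
# Diagnostic sketch: "Invalid key: 0 is out of bounds for size 0" usually
# means the split is empty after loading or preprocessing.
from datasets import load_dataset

ds = load_dataset("kk2491/finetune_dataset_002", split="train")
print(len(ds), ds.column_names)  # a length of 0 here would explain the error
print(ds[0])                     # raises the same IndexError if it is empty
```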
### Steps to reproduce the bug
1. Clone the [vertex-ai-samples](https://github.com/GoogleCloudPlatform/vertex-ai-samples) repository.
2. Run the [llama2-7b peft fine-tuning](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
3. Change the dataset `kk2491/finetune_dataset_002`
### Expected behavior
The training should complete successfully, and model gets deployed to an endpoint.
### Environment info
Python version : Python 3.10.12
Dataset : https://huggingface.co/datasets/kk2491/finetune_dataset_002
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6775/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6774/comments | https://api.github.com/repos/huggingface/datasets/issues/6774/events | https://github.com/huggingface/datasets/issues/6774 | 2,222,164,316 | I_kwDODunzps6Ec4lc | 6,774 | Generating split is very slow when Image format is PNG | {
"avatar_url": "https://avatars.githubusercontent.com/u/22740819?v=4",
"events_url": "https://api.github.com/users/Tramac/events{/privacy}",
"followers_url": "https://api.github.com/users/Tramac/followers",
"following_url": "https://api.github.com/users/Tramac/following{/other_user}",
"gists_url": "https://api.github.com/users/Tramac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Tramac",
"id": 22740819,
"login": "Tramac",
"node_id": "MDQ6VXNlcjIyNzQwODE5",
"organizations_url": "https://api.github.com/users/Tramac/orgs",
"received_events_url": "https://api.github.com/users/Tramac/received_events",
"repos_url": "https://api.github.com/users/Tramac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Tramac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Tramac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Tramac",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-04-03T07:47:31 | 2024-04-10T17:28:17 | null | NONE | null | null | null | ### Describe the bug
When I create a dataset, it gets stuck while generating cached data.
The image format is PNG; it does not get stuck when the image format is JPEG.
![image](https://github.com/huggingface/datasets/assets/22740819/3b888fd8-e6d6-488f-b828-95a8f206a152)
After debugging, I know that it is because of the `pa.array` operation in [arrow_writer](https://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_writer.py#L553), but I don't know why.
### Steps to reproduce the bug
```python
from datasets import Dataset
from PIL import Image  # this import was missing in the original snippet

def generator(lines):
    for line in lines:
        # `line` is assumed to be a record holding the image path under "url"
        img = Image.open(open(line["url"], "rb"))
        # print(img.format)  # "PNG"
        yield {
            "image": img,
        }

lines = open(dataset_path, "r")  # `dataset_path` is defined elsewhere
dataset = Dataset.from_generator(
    generator,
    gen_kwargs={"lines": lines},
)
```
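One hedged way around the slow `pa.array` encoding, sketched by the editor rather than taken from the report: yield the file paths and cast the column to the `Image` feature afterwards, so the writer copies file bytes instead of re-encoding every decoded PNG.
```python
# Editor's sketch (assumes `lines` yields records with the path under "url",
# as in the snippet above): store raw paths, then cast to the Image feature.
from datasets import Dataset, Image

def path_generator(lines):
    for line in lines:
        yield {"image": line["url"]}

dataset = Dataset.from_generator(path_generator, gen_kwargs={"lines": lines})
dataset = dataset.cast_column("image", Image())
```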
### Expected behavior
Generating split done.
### Environment info
datasets 2.13.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6774/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6773/comments | https://api.github.com/repos/huggingface/datasets/issues/6773/events | https://github.com/huggingface/datasets/issues/6773 | 2,221,049,121 | I_kwDODunzps6EYoUh | 6,773 | Dataset on Hub re-downloads every time? | {
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "https://api.github.com/users/manestay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manestay",
"id": 9099139,
"login": "manestay",
"node_id": "MDQ6VXNlcjkwOTkxMzk=",
"organizations_url": "https://api.github.com/users/manestay/orgs",
"received_events_url": "https://api.github.com/users/manestay/received_events",
"repos_url": "https://api.github.com/users/manestay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manestay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manestay",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2024-04-02T17:23:22 | 2024-04-08T18:43:45 | 2024-04-08T18:43:45 | NONE | null | null | null | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I'm sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whenever I run the below function `load_borderlines_hf`, it downloads the entire dataset from the hub and then does the other logic:
https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80
Let me know what I'm doing wrong here, or if it's a bug with the `datasets` library itself. On the hub I have my data stored in CSVs, but several columns are lists, so that's why I have the code to map splitting on `;`. I looked into dataset loading scripts, but it seemed difficult to set up. I have verified that other `datasets` and `models` on my system are using the cache properly (e.g. I have a 13B parameter model and large datasets, but those are cached and don't redownload).
__EDIT:__ as pointed out in the discussion below, it may be the `map()` calls that aren't being cached properly. Supposing `load_dataset()` retrieves from the cache, it should be the case that the `map()` calls also retrieve from the cached output. But the `map()` commands sometimes re-execute.
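A hedged sketch of the usual `map()` caching checklist (an editor's addition; the column name, split, and function are hypothetical stand-ins for the real schema): `map` only reuses its cache when the mapped function hashes deterministically, so module-level functions are safer than closures that capture outer state.
```python
# Editor's sketch; "territories" as a column name and "train" as a split are
# hypothetical. `map` reuses its cache only when the function can be hashed
# deterministically, which a top-level function makes more likely.
from datasets import load_dataset

def split_semicolons(example):
    example["territories"] = example["territories"].split(";")
    return example

ds = load_dataset("manestay/borderlines", "territories", split="train")
ds = ds.map(split_semicolons, load_from_cache_file=True)
```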
### Steps to reproduce the bug
1. Copy and paste the function from [here](https://github.com/manestay/borderlines/blob/4e161f444661e2ebfe643f3fe149d9258d63a57d/run_gpt/lib.py#L80) (lines 80-100)
2. Run it in Python `load_borderlines_hf(None)`
3. It completes successfully, downloading from HF hub, then doing the mapping logic etc.
4. If you run it again after some time, it will re-download, ignoring the cache
### Expected behavior
Re-running the code, which calls `datasets.load_dataset('manestay/borderlines', 'territories')`, should use the cached version
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.21-150500.55.7-default-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9099139?v=4",
"events_url": "https://api.github.com/users/manestay/events{/privacy}",
"followers_url": "https://api.github.com/users/manestay/followers",
"following_url": "https://api.github.com/users/manestay/following{/other_user}",
"gists_url": "https://api.github.com/users/manestay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/manestay",
"id": 9099139,
"login": "manestay",
"node_id": "MDQ6VXNlcjkwOTkxMzk=",
"organizations_url": "https://api.github.com/users/manestay/orgs",
"received_events_url": "https://api.github.com/users/manestay/received_events",
"repos_url": "https://api.github.com/users/manestay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/manestay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/manestay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/manestay",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6773/timeline | null | completed | false | 145.339722 |
https://api.github.com/repos/huggingface/datasets/issues/6772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6772/comments | https://api.github.com/repos/huggingface/datasets/issues/6772/events | https://github.com/huggingface/datasets/pull/6772 | 2,220,851,533 | PR_kwDODunzps5rdKZ2 | 6,772 | `remove_columns`/`rename_columns` doc fixes | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-04-02T15:41:28 | 2024-04-02T16:28:45 | 2024-04-02T16:17:46 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6772",
"merged_at": "2024-04-02T16:17:46",
"patch_url": "https://github.com/huggingface/datasets/pull/6772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6772"
} | Use more consistent wording in `remove_columns` to explain why it's faster than `map` and update `remove_columns`/`rename_columns` docstrings to fix in-place calls.
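For readers skimming the table, a minimal sketch of the behavior the updated docstrings describe (an editor's example, not taken from the PR):
```python
# Neither method mutates in place; both return a new Dataset. remove_columns
# drops the column without rewriting the underlying data, which is why it is
# faster than an equivalent `map` call.
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4]})
ds2 = ds.remove_columns("b")
ds3 = ds.rename_column("a", "c")
print(ds.column_names, ds2.column_names, ds3.column_names)
# ['a', 'b'] ['a'] ['c', 'b']
```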
Reported in https://github.com/huggingface/datasets/issues/6700 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6772/timeline | null | null | true | 0.605 |
https://api.github.com/repos/huggingface/datasets/issues/6771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6771/comments | https://api.github.com/repos/huggingface/datasets/issues/6771/events | https://github.com/huggingface/datasets/issues/6771 | 2,220,131,457 | I_kwDODunzps6EVISB | 6,771 | Datasets FileNotFoundError when trying to generate examples. | {
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "https://api.github.com/users/RitchieP/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RitchieP",
"id": 26197115,
"login": "RitchieP",
"node_id": "MDQ6VXNlcjI2MTk3MTE1",
"organizations_url": "https://api.github.com/users/RitchieP/orgs",
"received_events_url": "https://api.github.com/users/RitchieP/received_events",
"repos_url": "https://api.github.com/users/RitchieP/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RitchieP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RitchieP/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RitchieP",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-04-02T10:24:57 | 2024-04-04T14:22:03 | 2024-04-04T14:22:03 | NONE | null | null | null | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
Originally posted by **RitchieP**, April 1, 2024:
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loading my dataset as below.
```py
from datasets import load_dataset, IterableDatasetDict
dataset = IterableDatasetDict()
dataset["train"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="train", use_auth_token=True, streaming=True)
dataset["test"] = load_dataset("RitchieP/VerbaLex_voice", "ar", split="test", use_auth_token=True, streaming=True)
```
And when I try to see the data I have loaded with
```py
list(dataset["train"].take(1))
```
And it gives me this stack trace
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Cell In[2], line 1
----> 1 list(dataset["train"].take(1))
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1388, in IterableDataset.__iter__(self)
1385 yield formatter.format_row(pa_table)
1386 return
-> 1388 for key, example in ex_iterable:
1389 if self.features:
1390 # `IterableDataset` automatically fills missing columns with None.
1391 # This is done with `_apply_feature_types_on_example`.
1392 example = _apply_feature_types_on_example(
1393 example, self.features, token_per_repo_id=self._token_per_repo_id
1394 )
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:1044, in TakeExamplesIterable.__iter__(self)
1043 def __iter__(self):
-> 1044 yield from islice(self.ex_iterable, self.n)
File /opt/conda/lib/python3.10/site-packages/datasets/iterable_dataset.py:234, in ExamplesIterable.__iter__(self)
233 def __iter__(self):
--> 234 yield from self.generate_examples_fn(**self.kwargs)
File ~/.cache/huggingface/modules/datasets_modules/datasets/RitchieP--VerbaLex_voice/9465eaee58383cf9d7c3e14111d7abaea56398185a641b646897d6df4e4732f7/VerbaLex_voice.py:127, in VerbaLexVoiceDataset._generate_examples(self, local_extracted_archive_paths, archives, meta_path)
125 for i, audio_archive in enumerate(archives):
126 print(audio_archive)
--> 127 for path, file in audio_archive:
128 _, filename = os.path.split(path)
129 if filename in metadata:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:869, in _IterableFromGenerator.__iter__(self)
868 def __iter__(self):
--> 869 yield from self.generator(*self.args, **self.kwargs)
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:919, in ArchiveIterable._iter_from_urlpath(cls, urlpath, download_config)
915 @classmethod
916 def _iter_from_urlpath(
917 cls, urlpath: str, download_config: Optional[DownloadConfig] = None
918 ) -> Generator[Tuple, None, None]:
--> 919 compression = _get_extraction_protocol(urlpath, download_config=download_config)
920 # Set block_size=0 to get faster streaming
921 # (e.g. for hf:// and https:// it uses streaming Requests file-like instances)
922 with xopen(urlpath, "rb", download_config=download_config, block_size=0) as f:
File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:400, in _get_extraction_protocol(urlpath, download_config)
398 urlpath, storage_options = _prepare_path_and_storage_options(urlpath, download_config=download_config)
399 try:
--> 400 with fsspec.open(urlpath, **(storage_options or {})) as f:
401 return _get_extraction_protocol_with_magic_number(f)
402 except FileNotFoundError:
File /opt/conda/lib/python3.10/site-packages/fsspec/core.py:100, in OpenFile.__enter__(self)
97 def __enter__(self):
98 mode = self.mode.replace("t", "").replace("b", "") + "b"
--> 100 f = self.fs.open(self.path, mode=mode)
102 self.fobjects = [f]
104 if self.compression is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:1307, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)
1305 else:
1306 ac = kwargs.pop("autocommit", not self._intrans)
-> 1307 f = self._open(
1308 path,
1309 mode=mode,
1310 block_size=block_size,
1311 autocommit=ac,
1312 cache_options=cache_options,
1313 **kwargs,
1314 )
1315 if compression is not None:
1316 from fsspec.compression import compr
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:180, in LocalFileSystem._open(self, path, mode, block_size, **kwargs)
178 if self.auto_mkdir and "w" in mode:
179 self.makedirs(self._parent(path), exist_ok=True)
--> 180 return LocalFileOpener(path, mode, fs=self, **kwargs)
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:302, in LocalFileOpener.__init__(self, path, mode, autocommit, fs, compression, **kwargs)
300 self.compression = get_compression(path, compression)
301 self.blocksize = io.DEFAULT_BUFFER_SIZE
--> 302 self._open()
File /opt/conda/lib/python3.10/site-packages/fsspec/implementations/local.py:307, in LocalFileOpener._open(self)
305 if self.f is None or self.f.closed:
306 if self.autocommit or "w" not in self.mode:
--> 307 self.f = open(self.path, mode=self.mode)
308 if self.compression:
309 compress = compr[self.compression]
FileNotFoundError: [Errno 2] No such file or directory: '/kaggle/working/h'
```
After looking into the stack trace and referring to the source code, it looks like it's trying to access a directory in the notebook's environment, and I don't understand why.
Not sure if it's a bug in the Datasets library, so I'm opening a discussion first. Feel free to ask for more information if needed. Appreciate any help in advance!
Hi, referring to the discussion title above: after further digging, I think it's an issue within the `datasets` library, but I'm not quite sure where it is.
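An editor's hedged reading of the trace, illustrated with hypothetical values: a path like `/kaggle/working/h` suggests a bare string was iterated where a list of archive URLs was expected, so single characters (the leading `h` of an `https://...` URL) get resolved as paths relative to the working directory.
```python
# Hypothetical sketch of the suspected failure mode: iterating a plain
# string yields single characters, and "h" then gets resolved relative to
# the working directory as "/kaggle/working/h".
archives = "https://example.com/audio.tar"    # a bare string by mistake
for item in archives:
    print(item)                               # "h", "t", "t", "p", ...
    break

archives = ["https://example.com/audio.tar"]  # wrapping in a list fixes it
```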
If you require any more info or actions from me, please let me know. Appreciate any help in advance! | {
"avatar_url": "https://avatars.githubusercontent.com/u/26197115?v=4",
"events_url": "https://api.github.com/users/RitchieP/events{/privacy}",
"followers_url": "https://api.github.com/users/RitchieP/followers",
"following_url": "https://api.github.com/users/RitchieP/following{/other_user}",
"gists_url": "https://api.github.com/users/RitchieP/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RitchieP",
"id": 26197115,
"login": "RitchieP",
"node_id": "MDQ6VXNlcjI2MTk3MTE1",
"organizations_url": "https://api.github.com/users/RitchieP/orgs",
"received_events_url": "https://api.github.com/users/RitchieP/received_events",
"repos_url": "https://api.github.com/users/RitchieP/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RitchieP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RitchieP/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RitchieP",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6771/timeline | null | completed | false | 51.951667 |
https://api.github.com/repos/huggingface/datasets/issues/6770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6770/comments | https://api.github.com/repos/huggingface/datasets/issues/6770/events | https://github.com/huggingface/datasets/issues/6770 | 2,218,991,883 | I_kwDODunzps6EQyEL | 6,770 | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | {
"avatar_url": "https://avatars.githubusercontent.com/u/19348888?v=4",
"events_url": "https://api.github.com/users/fshp971/events{/privacy}",
"followers_url": "https://api.github.com/users/fshp971/followers",
"following_url": "https://api.github.com/users/fshp971/following{/other_user}",
"gists_url": "https://api.github.com/users/fshp971/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fshp971",
"id": 19348888,
"login": "fshp971",
"node_id": "MDQ6VXNlcjE5MzQ4ODg4",
"organizations_url": "https://api.github.com/users/fshp971/orgs",
"received_events_url": "https://api.github.com/users/fshp971/received_events",
"repos_url": "https://api.github.com/users/fshp971/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fshp971/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fshp971/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fshp971",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-04-01T20:17:48 | 2024-04-11T17:31:44 | 2024-04-11T17:31:44 | NONE | null | null | null | ### Describe the bug
`datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `datasets==2.18.0` work properly.
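A small editor's sketch, not part of the report, for confirming which versions are active before reproducing:
```python
# Editor's sketch: confirm the active versions before and after downgrading.
import datasets
import fsspec

print(datasets.__version__)  # 2.18.0 in the report
print(fsspec.__version__)    # 2023.12.2 fails below; 2023.10.0 works
```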
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `datasets==2.18.0` and `fsspec==2023.12.2` are installed.
2. Run the following code:
```
from datasets import load_dataset
dataset = load_dataset("trec")
```
3. Then one will get the following error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/opt/conda/lib/python3.10/site-packages/datasets/load.py", line 2265, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 371, in __init__
self.config, self.config_id = self._create_builder_config(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 620, in _create_builder_config
builder_config._resolve_data_files(
File "/opt/conda/lib/python3.10/site-packages/datasets/builder.py", line 211, in _resolve_data_files
self.data_files = self.data_files.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 799, in resolve
out[key] = data_files_patterns_list.resolve(base_path, download_config)
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 752, in resolve
resolve_pattern(
File "/opt/conda/lib/python3.10/site-packages/datasets/data_files.py", line 393, in resolve_pattern
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find 'hf://datasets/trec@65752bf53af25bc935a0dce92fb5b6c930728450/default/train/0000.parquet' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.geoparquet', '.gpq', '.arrow', '.txt', '.tar', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip']
```
4. A similar issue is also found with the following code:
```
dataset = load_dataset("sst", "default")
```
### Expected behavior
If the dataset is loaded correctly, one will have:
```
>>> print(dataset)
DatasetDict({
train: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 5452
})
test: Dataset({
features: ['text', 'coarse_label', 'fine_label'],
num_rows: 500
})
})
>>>
```
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.2.0-35-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.1
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6770/timeline | null | completed | false | 237.232222 |
https://api.github.com/repos/huggingface/datasets/issues/6769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6769/comments | https://api.github.com/repos/huggingface/datasets/issues/6769/events | https://github.com/huggingface/datasets/issues/6769 | 2,218,242,015 | I_kwDODunzps6EN6_f | 6,769 | (Willing to PR) Datasets with custom python objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/5236035?v=4",
"events_url": "https://api.github.com/users/fzyzcjy/events{/privacy}",
"followers_url": "https://api.github.com/users/fzyzcjy/followers",
"following_url": "https://api.github.com/users/fzyzcjy/following{/other_user}",
"gists_url": "https://api.github.com/users/fzyzcjy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fzyzcjy",
"id": 5236035,
"login": "fzyzcjy",
"node_id": "MDQ6VXNlcjUyMzYwMzU=",
"organizations_url": "https://api.github.com/users/fzyzcjy/orgs",
"received_events_url": "https://api.github.com/users/fzyzcjy/received_events",
"repos_url": "https://api.github.com/users/fzyzcjy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fzyzcjy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fzyzcjy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fzyzcjy",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-04-01T13:18:47 | 2024-04-01T13:36:58 | null | CONTRIBUTOR | null | null | null | ### Feature request
Hi, thanks for the library! I would like to have a Hugging Face `Dataset` where one of its columns holds custom (non-serializable) Python objects. For example, a minimal code:
```python
import datasets  # this import was missing in the original snippet

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    dict(a=MyClass(), b='hello'),
])
```
It gives this error:
```
ArrowInvalid: Could not convert <__main__.MyClass object at 0x7a852830d050> with type MyClass: did not recognize Python value type when inferring an Arrow data type
```
I guess it is because `Dataset` forces everything to be converted into Arrow format. However, is there any way to make this scenario work? Thanks!
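A common workaround sketch from the editor, not from the thread: pickle the objects into a `bytes` column (a type Arrow understands) and unpickle them on access.
```python
# Editor's workaround sketch: store custom objects as pickled bytes, which
# Arrow can serialize, and unpickle them when reading rows back.
import pickle
import datasets

class MyClass:
    pass

dataset = datasets.Dataset.from_list([
    {"a": pickle.dumps(MyClass()), "b": "hello"},
])
obj = pickle.loads(dataset[0]["a"])  # a MyClass instance again
```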
### Motivation
(see above)
### Your contribution
Yes, I am happy to PR!
Cross-posted: https://discuss.huggingface.co/t/datasets-with-custom-python-objects/79050?u=fzyzcjy
EDIT: possibly related https://github.com/huggingface/datasets/issues/5766 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6769/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6767/comments | https://api.github.com/repos/huggingface/datasets/issues/6767/events | https://github.com/huggingface/datasets/pull/6767 | 2,217,065,412 | PR_kwDODunzps5rQO9J | 6,767 | fixing the issue 6755(small typo) | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-31T16:13:37 | 2024-04-02T14:14:02 | 2024-04-02T14:01:18 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6767.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6767",
"merged_at": "2024-04-02T14:01:18",
"patch_url": "https://github.com/huggingface/datasets/pull/6767.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6767"
} | Fixed the issue #6755 on the typo mistake | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6767/timeline | null | null | true | 45.794722 |
https://api.github.com/repos/huggingface/datasets/issues/6765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6765/comments | https://api.github.com/repos/huggingface/datasets/issues/6765/events | https://github.com/huggingface/datasets/issues/6765 | 2,215,933,515 | I_kwDODunzps6EFHZL | 6,765 | Compatibility issue between s3fs, fsspec, and datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/njbrake",
"id": 33383515,
"login": "njbrake",
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"repos_url": "https://api.github.com/users/njbrake/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"type": "User",
"url": "https://api.github.com/users/njbrake",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2024-03-29T19:57:24 | 2024-11-12T14:50:48 | 2024-04-03T14:33:12 | NONE | null | null | null | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you have fsspec 2024.3.1 which is incompatible.
Successfully installed aiobotocore-2.12.1 aioitertools-0.11.0 botocore-1.34.51 fsspec-2024.3.1 jmespath-1.0.1 s3fs-2024.3.1 urllib3-2.0.7 wrapt-1.16.0
```
When I install with pip, pip allows this error to exist while still installing s3fs, but this error breaks poetry, since poetry will refuse to install s3fs because of the dependency conflict.
Maybe I'm missing something so maybe it's not a bug but some mistake on my end? Any input would be helpful. Thanks!
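An editor's sketch of the constraint at play (the version bounds are quoted from the error message above, not from the package sources): recent `s3fs` releases pin `fsspec` to their own version, so the two resolve together only when `s3fs` itself satisfies the `<=2024.2.0` bound of `datasets` 2.18.0.
```python
# Editor's compatibility-check sketch; the bounds come from the error
# message above. s3fs pins fsspec to its own version, so both must fall
# inside the range datasets 2.18.0 accepts.
from packaging.version import Version
import fsspec
import s3fs

assert Version(s3fs.__version__) <= Version("2024.2.0"), "s3fs too new"
assert fsspec.__version__ == s3fs.__version__, "s3fs/fsspec out of sync"
```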
### Steps to reproduce the bug
1. conda create -n tmp python=3.10 -y
2. conda activate tmp
3. pip install datasets
4. pip install s3fs
### Expected behavior
I would expect there to be no error.
### Environment info
MacOS (ARM), Python3.10, conda 23.11.0. | {
"avatar_url": "https://avatars.githubusercontent.com/u/33383515?v=4",
"events_url": "https://api.github.com/users/njbrake/events{/privacy}",
"followers_url": "https://api.github.com/users/njbrake/followers",
"following_url": "https://api.github.com/users/njbrake/following{/other_user}",
"gists_url": "https://api.github.com/users/njbrake/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/njbrake",
"id": 33383515,
"login": "njbrake",
"node_id": "MDQ6VXNlcjMzMzgzNTE1",
"organizations_url": "https://api.github.com/users/njbrake/orgs",
"received_events_url": "https://api.github.com/users/njbrake/received_events",
"repos_url": "https://api.github.com/users/njbrake/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/njbrake/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/njbrake/subscriptions",
"type": "User",
"url": "https://api.github.com/users/njbrake",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6765/timeline | null | completed | false | 114.596667 |
https://api.github.com/repos/huggingface/datasets/issues/6764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6764/comments | https://api.github.com/repos/huggingface/datasets/issues/6764/events | https://github.com/huggingface/datasets/issues/6764 | 2,215,767,119 | I_kwDODunzps6EEexP | 6,764 | load_dataset can't work with symbolic links | {
"avatar_url": "https://avatars.githubusercontent.com/u/13640533?v=4",
"events_url": "https://api.github.com/users/VladimirVincan/events{/privacy}",
"followers_url": "https://api.github.com/users/VladimirVincan/followers",
"following_url": "https://api.github.com/users/VladimirVincan/following{/other_user}",
"gists_url": "https://api.github.com/users/VladimirVincan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VladimirVincan",
"id": 13640533,
"login": "VladimirVincan",
"node_id": "MDQ6VXNlcjEzNjQwNTMz",
"organizations_url": "https://api.github.com/users/VladimirVincan/orgs",
"received_events_url": "https://api.github.com/users/VladimirVincan/received_events",
"repos_url": "https://api.github.com/users/VladimirVincan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VladimirVincan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VladimirVincan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VladimirVincan",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-03-29T17:49:28 | 2024-03-29T17:52:27 | null | NONE | null | null | null | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g., this dataset can be loaded:
```
├── example_dataset/
│   ├── data/
│   │   ├── train/
│   │   │   ├── file0
│   │   │   ├── file1
│   │   ├── dev/
│   │   │   ├── file2
│   │   │   ├── file3
│   ├── metadata.csv
```
while this dataset can't:
```
├── example_dataset_symlink/
│   ├── data/
│   │   ├── train/
│   │   │   ├── sym0 -> file0
│   │   │   ├── sym1 -> file1
│   │   ├── dev/
│   │   │   ├── sym2 -> file2
│   │   │   ├── sym3 -> file3
│   ├── metadata.csv
```
I have created an example dataset in order to reproduce the problem:
1. Unzip `example_dataset.zip`.
2. Run `no_symlink.sh`. Training should start without issues.
3. Run `symlink.sh`. You will see that all four examples end up in the train split, instead of two examples in train and two in dev. The script won't load the correct audio files.
[example_dataset.zip](https://github.com/huggingface/datasets/files/14807053/example_dataset.zip)
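A minimal sketch of the symlink-dereferencing idea proposed under "Your contribution" below (the helper name is the editor's invention): resolve each symlink to its target before the loader inspects the files.
```python
# Hypothetical sketch: dereference symlinks so that sym0 -> file0 etc.
# behave like regular files when handed to the loader.
import os

def resolve_symlinks(paths):
    return [os.path.realpath(p) for p in paths]

print(resolve_symlinks(["example_dataset_symlink/data/train/sym0"]))
```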
### Motivation
I have a very large dataset locally. Instead of initiating training on the entire dataset, I need to start training on smaller subsets of the data. Due to the purpose of the experiments I am running, I will need to create many smaller datasets with overlapping data. Instead of copying all the files for each subset, I would prefer to copy symbolic links to the data. This way, the disk usage would not significantly increase beyond the initial dataset size.
Advantages of this approach:
- It would leave a smaller memory footprint on the hard drive
- Creating smaller datasets would be much faster
### Your contribution
I would gladly contribute, if this is something useful to the community. It seems like a simple change of code, something like `file_path = os.path.realpath(file_path)` should be added before loading the files. If anyone has insights on how to incorporate this functionality, I would greatly appreciate your knowledge and input. | null | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6764/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6763/comments | https://api.github.com/repos/huggingface/datasets/issues/6763/events | https://github.com/huggingface/datasets/pull/6763 | 2,213,440,804 | PR_kwDODunzps5rENat | 6,763 | Fix issue with case sensitivity when loading dataset from local cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/58537872?v=4",
"events_url": "https://api.github.com/users/Sumsky21/events{/privacy}",
"followers_url": "https://api.github.com/users/Sumsky21/followers",
"following_url": "https://api.github.com/users/Sumsky21/following{/other_user}",
"gists_url": "https://api.github.com/users/Sumsky21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Sumsky21",
"id": 58537872,
"login": "Sumsky21",
"node_id": "MDQ6VXNlcjU4NTM3ODcy",
"organizations_url": "https://api.github.com/users/Sumsky21/orgs",
"received_events_url": "https://api.github.com/users/Sumsky21/received_events",
"repos_url": "https://api.github.com/users/Sumsky21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Sumsky21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sumsky21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Sumsky21",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-03-28T14:52:35 | 2024-04-20T12:16:45 | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6763",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6763"
When a dataset with uppercase letters in its name is first loaded using `load_dataset()`, the local cache directory is created with all lowercase letters.
However, upon subsequent loads, the current version attempts to locate the cache directory using the dataset's original name, which includes uppercase letters. This discrepancy can lead to confusion and, particularly in offline mode, results in errors.
### Reproduce
```bash
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
>>> quit()
~$ export HF_DATASETS_OFFLINE=1
~$ python
Python 3.9.19 (main, Mar 21 2024, 17:11:28)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> dataset = load_dataset("locuslab/TOFU", "full")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "xxxxxx/anaconda3/envs/llm/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise ConnectionError(f"Couldn't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couldn't reach the Hugging Face Hub for dataset 'locuslab/TOFU': Offline mode is enabled.
>>>
```
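The fix, described below, boils down to normalizing the name to lowercase wherever the cache directory is derived from it. A paraphrased sketch (the helper below is hypothetical, not the actual `datasets` code):

```python
import os

def dataset_cache_dir(base_cache: str, dataset_name: str) -> str:
    # Hypothetical helper: "locuslab/TOFU" and "locuslab/tofu" must map to
    # the same directory, because the cache was written in lowercase.
    return os.path.join(base_cache, dataset_name.lower().replace("/", "___"))
```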
I fixed this issue by lowercasing the dataset name (`.lower()`) when generating `cache_dir`. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6763/timeline | null | null | true | null |
https://api.github.com/repos/huggingface/datasets/issues/6762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6762/comments | https://api.github.com/repos/huggingface/datasets/issues/6762/events | https://github.com/huggingface/datasets/pull/6762 | 2,213,275,468 | PR_kwDODunzps5rDpBe | 6,762 | Allow polars as valid output type | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-03-28T13:40:28 | 2024-08-16T15:54:37 | 2024-08-16T13:10:37 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6762.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6762",
"merged_at": "2024-08-16T13:10:37",
"patch_url": "https://github.com/huggingface/datasets/pull/6762.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6762"
} | I was trying out polars as an output type for a map function and found that it wasn't a valid return type in `validate_function_output`. I thought we should accommodate this by adding it to the `allowed_processed_input_types` variable. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6762/timeline | null | null | true | 3,383.5025 |
https://api.github.com/repos/huggingface/datasets/issues/6761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6761/comments | https://api.github.com/repos/huggingface/datasets/issues/6761/events | https://github.com/huggingface/datasets/pull/6761 | 2,212,805,108 | PR_kwDODunzps5rCAu8 | 6,761 | Remove deprecated code | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2024-03-28T09:57:57 | 2024-03-29T13:27:26 | 2024-03-29T13:18:13 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6761",
"merged_at": "2024-03-29T13:18:13",
"patch_url": "https://github.com/huggingface/datasets/pull/6761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6761"
} | What does this PR do?
1. Remove `list_files_info` in favor of `list_repo_tree`. As of `0.23`, `list_files_info` will be removed for good. `datasets` had a utility to support both pre-0.20 and post-0.20 versions. Since the `hfh` version is already pinned to `>=0.21.2`, I removed the legacy part.
2. `preupload_lfs_files` also had different behavior between `<0.20` and `>=0.20`. I removed it since `huggingface_hub` is now pinned to `>=0.21.2`.
3. `hf_hub_url` is overwritten to default to the dataset repo_type. I do think it is misleading to keep the same method naming for it. I renamed it to `get_dataset_url` for clarity. Let me know if you prefer to see this change reverted. | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6761/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6761/timeline | null | null | true | 27.337778 |
https://api.github.com/repos/huggingface/datasets/issues/6760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6760/comments | https://api.github.com/repos/huggingface/datasets/issues/6760/events | https://github.com/huggingface/datasets/issues/6760 | 2,212,288,122 | I_kwDODunzps6D3NZ6 | 6,760 | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17897916?v=4",
"events_url": "https://api.github.com/users/yucc-leon/events{/privacy}",
"followers_url": "https://api.github.com/users/yucc-leon/followers",
"following_url": "https://api.github.com/users/yucc-leon/following{/other_user}",
"gists_url": "https://api.github.com/users/yucc-leon/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yucc-leon",
"id": 17897916,
"login": "yucc-leon",
"node_id": "MDQ6VXNlcjE3ODk3OTE2",
"organizations_url": "https://api.github.com/users/yucc-leon/orgs",
"received_events_url": "https://api.github.com/users/yucc-leon/received_events",
"repos_url": "https://api.github.com/users/yucc-leon/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yucc-leon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yucc-leon/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yucc-leon",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 4 | 2024-03-28T03:44:26 | 2024-06-19T07:06:40 | null | NONE | null | null | null | ### Describe the bug
This happens with datasets-2.18.0; I downgraded the version to 2.14.6, which fixed this temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 1831, in dataset_module_factory
can_load_config_from_parquet_export = "DEFAULT_CONFIG_NAME" not in f.read()
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte
```
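Byte `0x8b` at position 1 is the second byte of the gzip magic number (`\x1f\x8b`), which suggests a gzip-compressed response is being decoded as UTF-8 text. A quick diagnostic sketch (the cached-file path is a placeholder):

```python
import gzip

path = "/path/to/the/cached/file"  # placeholder for the file read in load.py

with open(path, "rb") as f:
    head = f.read(2)
if head == b"\x1f\x8b":  # gzip magic number
    with gzip.open(path, "rt", encoding="utf-8") as g:
        print(g.read(200))  # decompresses cleanly -> the bytes were gzipped
```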
### Steps to reproduce the bug
1. Using Python3.10/3.11
2. Install datasets-2.18.0
3. test with
```
from datasets import load_dataset
dataset = load_dataset("codeparrot/apps")
```
### Expected behavior
Normally it should manage to download and load the dataset without such an error.
### Environment info
Ubuntu, Python3.10/3.11 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6760/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6759/comments | https://api.github.com/repos/huggingface/datasets/issues/6759/events | https://github.com/huggingface/datasets/issues/6759 | 2,208,892,891 | I_kwDODunzps6DqQfb | 6,759 | Persistent multi-process Pool | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-03-26T17:35:25 | 2024-03-26T17:35:25 | null | NONE | null | null | null | ### Feature request
Running `.map` and `.filter` with `num_proc` consecutively instantiates a new multiprocessing pool for each call.
As instantiating a Pool is very resource intensive, it can be a bottleneck when filtering iteratively.
My ideas:
1. There should be an option to declare `persistent_workers`, similar to the PyTorch DataLoader. The downside is that it would be complex to determine the correct resource allocation and deallocation of the pool, i.e. the dataset can outlive the utility of the pool.
2. Provide a pool as an argument (see the sketch below). The downside is the expertise required of the user; the upside is better resource management.
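A rough sketch of what option 2 could look like from the user's side (the `pool=` argument below is hypothetical and does not exist in the current API):

```python
from multiprocess import Pool  # `datasets` uses multiprocess internally
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # any dataset, for illustration

# Hypothetical API: one pool is created once and reused across calls,
# instead of a new pool per .map()/.filter() invocation.
with Pool(processes=8) as pool:
    ds = ds.map(lambda x: {"n_chars": len(x["text"])}, pool=pool)
    ds = ds.filter(lambda x: x["n_chars"] > 0, pool=pool)
```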
### Motivation
It is really slow to perform map and filter operations on a dataset iteratively.
### Your contribution
If approved, I could integrate it. I would need to know which of the two options above would be most suitable to implement. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6759/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6758/comments | https://api.github.com/repos/huggingface/datasets/issues/6758/events | https://github.com/huggingface/datasets/issues/6758 | 2,208,494,302 | I_kwDODunzps6DovLe | 6,758 | Passing `sample_by` to `load_dataset` when loading text data does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/823693?v=4",
"events_url": "https://api.github.com/users/ntoxeg/events{/privacy}",
"followers_url": "https://api.github.com/users/ntoxeg/followers",
"following_url": "https://api.github.com/users/ntoxeg/following{/other_user}",
"gists_url": "https://api.github.com/users/ntoxeg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ntoxeg",
"id": 823693,
"login": "ntoxeg",
"node_id": "MDQ6VXNlcjgyMzY5Mw==",
"organizations_url": "https://api.github.com/users/ntoxeg/orgs",
"received_events_url": "https://api.github.com/users/ntoxeg/received_events",
"repos_url": "https://api.github.com/users/ntoxeg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ntoxeg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ntoxeg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ntoxeg",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
}
] | null | 1 | 2024-03-26T14:55:33 | 2024-04-09T11:27:59 | 2024-04-09T11:27:59 | NONE | null | null | null | ### Describe the bug
I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines or paragraphs, or to take them whole. Passing `sample_by="document"` to `load_dataset` results in files getting split into lines regardless. I have edited `src/datasets/packaged_modules/text/text.py` for myself to switch the default and it works fine.
As a side note, the `if-else` for `sample_by` will silently load an empty dataset if someone makes a typo in the argument, which is not ideal.
### Steps to reproduce the bug
1. Prepare data as a bunch of files in a directory.
2. Load that data via `load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")` (spelled out below).
3. Inspect the resultant dataset: every item should have the form of `{"text": <a line from a file>}`.
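Step 2 spelled out as a runnable call (the file glob is illustrative):

```python
from datasets import load_dataset

ds = load_dataset(
    "text",
    data_files="data/*.txt",   # illustrative glob
    sample_by="document",      # currently ignored, per this report
)
print(ds["train"][0])  # expected: {"text": "<entire file contents>"}
```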
### Expected behavior
`load_dataset("text", data_files=<data_dir>/<files_glob>, …, sample_by="document")` should result in a dataset with items of the form `{"text": <one document>}`.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.0-1046-nvidia-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.21.4
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6758/timeline | null | completed | false | 332.540556 |
https://api.github.com/repos/huggingface/datasets/issues/6757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6757/comments | https://api.github.com/repos/huggingface/datasets/issues/6757/events | https://github.com/huggingface/datasets/pull/6757 | 2,206,280,340 | PR_kwDODunzps5qr7Li | 6,757 | Test disabling transformers containers in docs CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-03-25T17:16:11 | 2024-03-27T16:26:35 | null | CONTRIBUTOR | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/6757.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6757",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6757.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6757"
Related to https://github.com/huggingface/doc-builder/pull/487 and [internal slack thread](https://huggingface.slack.com/archives/C04F8N7FQNL/p1711384899462349?thread_ts=1711041424.720769&cid=C04F8N7FQNL). There is now a `custom_container` option when building docs in CI. When set to `""` (instead of `"huggingface/transformers-doc-builder"` by default), we don't run the CI inside a container, saving ~2 min of download time. The plan is to test disabling the transformers container on a few "big" repos, and if everything works correctly, we will stop making it the default container. More details in https://github.com/huggingface/doc-builder/pull/487.
cc @mishig25 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6757/timeline | null | null | true | null |
https://api.github.com/repos/huggingface/datasets/issues/6756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6756/comments | https://api.github.com/repos/huggingface/datasets/issues/6756/events | https://github.com/huggingface/datasets/issues/6756 | 2,205,557,725 | I_kwDODunzps6DdiPd | 6,756 | Support SQLite files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 3 | 2024-03-25T11:48:05 | 2024-03-26T16:09:32 | 2024-03-26T16:09:32 | COLLABORATOR | null | null | null | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In particular: a SQLite file can contain multiple tables, which might be matched to multiple configs. Maybe the details of splits and configs should be defined in the README YAML, or we could use the same format as for ZIP files: `Iris.sqlite::Iris`.
See dataset here: https://huggingface.co/datasets/severo/test_iris_sqlite
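For reference, a single table can already be read eagerly with `Dataset.from_sql`; the request here is first-class `load_dataset` support for `.sqlite` files. A minimal sketch, assuming the file contains a table named `Iris` as in the dataset linked above (reading from a URI connection string presumably requires `sqlalchemy`):

```python
from datasets import Dataset

# Existing workaround: read one table from a local SQLite file
ds = Dataset.from_sql("Iris", "sqlite:///Iris.sqlite")
print(ds)
```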
Note: should we also support DuckDB files? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6756/timeline | null | completed | false | 28.3575 |
https://api.github.com/repos/huggingface/datasets/issues/6755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6755/comments | https://api.github.com/repos/huggingface/datasets/issues/6755/events | https://github.com/huggingface/datasets/issues/6755 | 2,204,573,289 | I_kwDODunzps6DZx5p | 6,755 | Small typo on the documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/4337024?v=4",
"events_url": "https://api.github.com/users/fostiropoulos/events{/privacy}",
"followers_url": "https://api.github.com/users/fostiropoulos/followers",
"following_url": "https://api.github.com/users/fostiropoulos/following{/other_user}",
"gists_url": "https://api.github.com/users/fostiropoulos/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/fostiropoulos",
"id": 4337024,
"login": "fostiropoulos",
"node_id": "MDQ6VXNlcjQzMzcwMjQ=",
"organizations_url": "https://api.github.com/users/fostiropoulos/orgs",
"received_events_url": "https://api.github.com/users/fostiropoulos/received_events",
"repos_url": "https://api.github.com/users/fostiropoulos/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/fostiropoulos/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fostiropoulos/subscriptions",
"type": "User",
"url": "https://api.github.com/users/fostiropoulos",
"user_view_type": "public"
} | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/63234112?v=4",
"events_url": "https://api.github.com/users/JINO-ROHIT/events{/privacy}",
"followers_url": "https://api.github.com/users/JINO-ROHIT/followers",
"following_url": "https://api.github.com/users/JINO-ROHIT/following{/other_user}",
"gists_url": "https://api.github.com/users/JINO-ROHIT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JINO-ROHIT",
"id": 63234112,
"login": "JINO-ROHIT",
"node_id": "MDQ6VXNlcjYzMjM0MTEy",
"organizations_url": "https://api.github.com/users/JINO-ROHIT/orgs",
"received_events_url": "https://api.github.com/users/JINO-ROHIT/received_events",
"repos_url": "https://api.github.com/users/JINO-ROHIT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JINO-ROHIT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JINO-ROHIT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JINO-ROHIT",
"user_view_type": "public"
}
] | null | 3 | 2024-03-24T21:47:52 | 2024-04-02T14:01:19 | 2024-04-02T14:01:19 | NONE | null | null | null | ### Describe the bug
There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
It should be `caching is enabled`.
### Steps to reproduce the bug
Please visit
https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
### Expected behavior
`caching is enabled`
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.0-101-generic-x86_64-with-glibc2.35
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6755/timeline | null | completed | false | 208.224167 |
https://api.github.com/repos/huggingface/datasets/issues/6754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6754/comments | https://api.github.com/repos/huggingface/datasets/issues/6754/events | https://github.com/huggingface/datasets/pull/6754 | 2,204,214,595 | PR_kwDODunzps5qk-nr | 6,754 | Fix cache path to snakecase for `CachedDatasetModuleFactory` and `Cache` | {
"avatar_url": "https://avatars.githubusercontent.com/u/26690193?v=4",
"events_url": "https://api.github.com/users/izhx/events{/privacy}",
"followers_url": "https://api.github.com/users/izhx/followers",
"following_url": "https://api.github.com/users/izhx/following{/other_user}",
"gists_url": "https://api.github.com/users/izhx/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/izhx",
"id": 26690193,
"login": "izhx",
"node_id": "MDQ6VXNlcjI2NjkwMTkz",
"organizations_url": "https://api.github.com/users/izhx/orgs",
"received_events_url": "https://api.github.com/users/izhx/received_events",
"repos_url": "https://api.github.com/users/izhx/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/izhx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/izhx/subscriptions",
"type": "User",
"url": "https://api.github.com/users/izhx",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2024-03-24T06:59:15 | 2024-04-15T15:45:44 | 2024-04-15T15:38:51 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6754",
"merged_at": "2024-04-15T15:38:51",
"patch_url": "https://github.com/huggingface/datasets/pull/6754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6754"
} | Fix https://github.com/huggingface/datasets/issues/6750#issuecomment-2016678729
I didn't find a guideline on how to run the tests, so i just run the following steps to make sure that this bug is fixed.
1. `python test.py`,
2. then `HF_DATASETS_OFFLINE=1 python test.py`
The `test.py` is
```
import datasets
datasets.utils.logging.set_verbosity_info()
ds = datasets.load_dataset('izhx/STS17-debug')
print(ds)
ds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c13823db83bb193913c4')
print(ds)
```
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6754/timeline | null | null | true | 536.66 |
https://api.github.com/repos/huggingface/datasets/issues/6753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6753/comments | https://api.github.com/repos/huggingface/datasets/issues/6753/events | https://github.com/huggingface/datasets/issues/6753 | 2,204,155,091 | I_kwDODunzps6DYLzT | 6,753 | Type error when importing datasets on Kaggle | {
"avatar_url": "https://avatars.githubusercontent.com/u/18300717?v=4",
"events_url": "https://api.github.com/users/jtv199/events{/privacy}",
"followers_url": "https://api.github.com/users/jtv199/followers",
"following_url": "https://api.github.com/users/jtv199/following{/other_user}",
"gists_url": "https://api.github.com/users/jtv199/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jtv199",
"id": 18300717,
"login": "jtv199",
"node_id": "MDQ6VXNlcjE4MzAwNzE3",
"organizations_url": "https://api.github.com/users/jtv199/orgs",
"received_events_url": "https://api.github.com/users/jtv199/received_events",
"repos_url": "https://api.github.com/users/jtv199/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jtv199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtv199/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jtv199",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 8 | 2024-03-24T03:01:30 | 2024-10-02T11:49:35 | 2024-03-30T00:23:49 | NONE | null | null | null | ### Describe the bug
When trying to run
```
import datasets
print(datasets.__version__)
```
It generates the following error
```
TypeError: expected string or bytes-like object
```
It looks like it cannot find a valid version for `fsspec`,
though the `fsspec` version is fine when I checked via the command:
```
import fsspec
print(fsspec.__version__)
# output: 2024.3.1
```
Detailed crash report
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import datasets
2 print(datasets.__version__)
File /opt/conda/lib/python3.10/site-packages/datasets/__init__.py:18
1 # ruff: noqa
2 # Copyright 2020 The HuggingFace Datasets Authors and the TensorFlow Datasets Authors.
3 #
(...)
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
16 __version__ = "2.18.0"
---> 18 from .arrow_dataset import Dataset
19 from .arrow_reader import ReadInstruction
20 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder
File /opt/conda/lib/python3.10/site-packages/datasets/arrow_dataset.py:66
63 from multiprocess import Pool
64 from tqdm.contrib.concurrent import thread_map
---> 66 from . import config
67 from .arrow_reader import ArrowReader
68 from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File /opt/conda/lib/python3.10/site-packages/datasets/config.py:41
39 # Imports
40 DILL_VERSION = version.parse(importlib.metadata.version("dill"))
---> 41 FSSPEC_VERSION = version.parse(importlib.metadata.version("fsspec"))
42 PANDAS_VERSION = version.parse(importlib.metadata.version("pandas"))
43 PYARROW_VERSION = version.parse(importlib.metadata.version("pyarrow"))
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:49, in parse(version)
43 """
44 Parse the given version string and return either a :class:`Version` object
45 or a :class:`LegacyVersion` object depending on if the given version is
46 a valid PEP 440 version or a legacy version.
47 """
48 try:
---> 49 return Version(version)
50 except InvalidVersion:
51 return LegacyVersion(version)
File /opt/conda/lib/python3.10/site-packages/packaging/version.py:264, in Version.__init__(self, version)
261 def __init__(self, version: str) -> None:
262
263 # Validate the version and parse it into pieces
--> 264 match = self._regex.search(version)
265 if not match:
266 raise InvalidVersion(f"Invalid version: '{version}'")
TypeError: expected string or bytes-like object
```
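The failing call is `importlib.metadata.version("fsspec")`, whose result is fed to `version.parse`; a `None` version would produce exactly this error. Stale or duplicated package metadata in the preinstalled Kaggle environment is one plausible cause (an assumption). A quick diagnostic:

```python
import importlib.metadata

print(importlib.metadata.version("fsspec"))  # None here would explain the crash

# List every fsspec distribution visible to importlib
for dist in importlib.metadata.distributions():
    name = dist.metadata["Name"]
    if name and name.lower() == "fsspec":
        print(name, dist.version)
```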
### Steps to reproduce the bug
1. Run `!pip install -U datasets` on Kaggle.
2. Check that datasets is installed via:
```
import datasets
print(datasets.__version__)
```
### Expected behavior
Expected to print the `datasets` version, like `2.18.0`.
### Environment info
Running on Kaggle, latest environment; here is the notebook: https://www.kaggle.com/code/jtv199/mistrial-7b-part2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/18300717?v=4",
"events_url": "https://api.github.com/users/jtv199/events{/privacy}",
"followers_url": "https://api.github.com/users/jtv199/followers",
"following_url": "https://api.github.com/users/jtv199/following{/other_user}",
"gists_url": "https://api.github.com/users/jtv199/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jtv199",
"id": 18300717,
"login": "jtv199",
"node_id": "MDQ6VXNlcjE4MzAwNzE3",
"organizations_url": "https://api.github.com/users/jtv199/orgs",
"received_events_url": "https://api.github.com/users/jtv199/received_events",
"repos_url": "https://api.github.com/users/jtv199/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jtv199/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jtv199/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jtv199",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6753/timeline | null | completed | false | 141.371944 |
https://api.github.com/repos/huggingface/datasets/issues/6752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6752/comments | https://api.github.com/repos/huggingface/datasets/issues/6752/events | https://github.com/huggingface/datasets/issues/6752 | 2,204,043,839 | I_kwDODunzps6DXwo_ | 6,752 | Precision being changed from float16 to float32 unexpectedly | {
"avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4",
"events_url": "https://api.github.com/users/gcervantes8/events{/privacy}",
"followers_url": "https://api.github.com/users/gcervantes8/followers",
"following_url": "https://api.github.com/users/gcervantes8/following{/other_user}",
"gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gcervantes8",
"id": 21228908,
"login": "gcervantes8",
"node_id": "MDQ6VXNlcjIxMjI4OTA4",
"organizations_url": "https://api.github.com/users/gcervantes8/orgs",
"received_events_url": "https://api.github.com/users/gcervantes8/received_events",
"repos_url": "https://api.github.com/users/gcervantes8/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gcervantes8",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-03-23T20:53:56 | 2024-04-10T15:21:33 | null | NONE | null | null | null | ### Describe the bug
I'm loading a HuggingFace Dataset for images.
I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance, the type turns out to be float32.
### Steps to reproduce the bug
```python
import torch
import torchvision.transforms.v2 as transforms
from datasets import load_dataset
dataset = load_dataset('cifar10', split='test')
dataset = dataset.with_format("torch")
data_transform = transforms.Compose([transforms.Resize((32, 32)),
transforms.ToDtype(torch.float16, scale=True),
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])
def _preprocess(examples):
    # Permutes from (BS x H x W x C) to (BS x C x H x W)
    images = torch.permute(examples['img'], (0, 3, 1, 2))
    examples['img'] = data_transform(images)
    return examples
dataset = dataset.map(_preprocess, batched=True, batch_size=8)
```
Now at this point, `dataset.features` shows float16, which is great because that's what I want.
```python
print(dataset.features['img'])
Sequence(feature=Sequence(feature=Sequence(feature=Value(dtype='float16', id=None), length=-1, id=None), length=-1, id=None), length=-1, id=None)
```
But when I try to sample an image from this dataset, I'm getting a float32 image when I'm expecting float16:
```python
print(next(iter(dataset))['img'].dtype)
torch.float32
```
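A possible workaround to try (unverified here): `with_format` forwards extra keyword arguments to tensor creation, so the dtype can be requested at format time:

```python
import torch

dataset = dataset.with_format("torch", dtype=torch.float16)
print(next(iter(dataset))["img"].dtype)  # ideally torch.float16
```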
### Expected behavior
I'm expecting the images loaded after the transformation to stay in float16.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.9
- `huggingface_hub` version: 0.21.4
- PyArrow version: 14.0.2
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6752/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6751/comments | https://api.github.com/repos/huggingface/datasets/issues/6751/events | https://github.com/huggingface/datasets/pull/6751 | 2,203,951,501 | PR_kwDODunzps5qkKLH | 6,751 | Use 'with' operator for some download functions | {
"avatar_url": "https://avatars.githubusercontent.com/u/31669?v=4",
"events_url": "https://api.github.com/users/Moisan/events{/privacy}",
"followers_url": "https://api.github.com/users/Moisan/followers",
"following_url": "https://api.github.com/users/Moisan/following{/other_user}",
"gists_url": "https://api.github.com/users/Moisan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moisan",
"id": 31669,
"login": "Moisan",
"node_id": "MDQ6VXNlcjMxNjY5",
"organizations_url": "https://api.github.com/users/Moisan/orgs",
"received_events_url": "https://api.github.com/users/Moisan/received_events",
"repos_url": "https://api.github.com/users/Moisan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moisan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moisan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-23T16:32:08 | 2024-03-26T00:40:57 | 2024-03-26T00:40:57 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6751",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6751"
} | Some functions in `streaming_download_manager.py` are not closing the files they open, which leads to `Unclosed file` warnings in our code. This fixes a few of them. | {
"avatar_url": "https://avatars.githubusercontent.com/u/31669?v=4",
"events_url": "https://api.github.com/users/Moisan/events{/privacy}",
"followers_url": "https://api.github.com/users/Moisan/followers",
"following_url": "https://api.github.com/users/Moisan/following{/other_user}",
"gists_url": "https://api.github.com/users/Moisan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Moisan",
"id": 31669,
"login": "Moisan",
"node_id": "MDQ6VXNlcjMxNjY5",
"organizations_url": "https://api.github.com/users/Moisan/orgs",
"received_events_url": "https://api.github.com/users/Moisan/received_events",
"repos_url": "https://api.github.com/users/Moisan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Moisan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Moisan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Moisan",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6751/timeline | null | null | true | 56.146944 |
https://api.github.com/repos/huggingface/datasets/issues/6750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6750/comments | https://api.github.com/repos/huggingface/datasets/issues/6750/events | https://github.com/huggingface/datasets/issues/6750 | 2,203,590,658 | I_kwDODunzps6DWCAC | 6,750 | `load_dataset` requires a network connection for local download? | {
"avatar_url": "https://avatars.githubusercontent.com/u/6306695?v=4",
"events_url": "https://api.github.com/users/MiroFurtado/events{/privacy}",
"followers_url": "https://api.github.com/users/MiroFurtado/followers",
"following_url": "https://api.github.com/users/MiroFurtado/following{/other_user}",
"gists_url": "https://api.github.com/users/MiroFurtado/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MiroFurtado",
"id": 6306695,
"login": "MiroFurtado",
"node_id": "MDQ6VXNlcjYzMDY2OTU=",
"organizations_url": "https://api.github.com/users/MiroFurtado/orgs",
"received_events_url": "https://api.github.com/users/MiroFurtado/received_events",
"repos_url": "https://api.github.com/users/MiroFurtado/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MiroFurtado/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MiroFurtado/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MiroFurtado",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-03-23T01:06:32 | 2024-04-15T15:38:52 | 2024-04-15T15:38:52 | NONE | null | null | null | ### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not found. Setting CardData to empty.
*hangs bc i'm firewalled*
```
stack trace from ctrl-c:
```
^CTraceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/load.py", line 2582, in load_dataset
builder_instance.download_and_prepare(
output_path = get_from_cache(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 532, in get_from_cache
response = http_head(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 419, in http_head
response = _request_with_retry(
File "/home/jobuser/.local/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 304, in _request_with_retry
response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/requests/adapters.py", line 487, in send
resp = conn.urlopen(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 363, in connect
self.sock = conn = self._new_conn()
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "/home/jobuser/build/lipy-flytekit-image/environments/satellites/python/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
sock.connect(sa)
KeyboardInterrupt
```
### Expected behavior
loads the dataset
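In the meantime, forcing offline mode should at least make the call fail fast instead of hanging. A minimal sketch, assuming the dataset is already in the local cache:
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

import datasets

ds = datasets.load_dataset("hh-rlhf")  # fails fast instead of waiting on the network
```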
### Environment info
```
> pip show datasets
Name: datasets
Version: 2.18.0
```
Python 3.10.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6750/timeline | null | completed | false | 566.538889 |
https://api.github.com/repos/huggingface/datasets/issues/6749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6749/comments | https://api.github.com/repos/huggingface/datasets/issues/6749/events | https://github.com/huggingface/datasets/pull/6749 | 2,202,310,116 | PR_kwDODunzps5qeoSk | 6,749 | Fix fsspec tqdm callback | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-22T11:44:11 | 2024-03-22T14:51:45 | 2024-03-22T14:45:39 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6749.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6749",
"merged_at": "2024-03-22T14:45:39",
"patch_url": "https://github.com/huggingface/datasets/pull/6749.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6749"
} | Following changes at https://github.com/fsspec/filesystem_spec/pull/1497 for `fsspec>=2024.2.0` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6749/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6749/timeline | null | null | true | 3.024444 |
https://api.github.com/repos/huggingface/datasets/issues/6748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6748/comments | https://api.github.com/repos/huggingface/datasets/issues/6748/events | https://github.com/huggingface/datasets/issues/6748 | 2,201,517,348 | I_kwDODunzps6DOH0k | 6,748 | Strange slicing behavior | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-03-22T01:49:13 | 2024-03-22T16:43:57 | null | NONE | null | null | null | ### Describe the bug
I loaded a dataset and then sliced the first 300 samples using the `:` operator; however, the resulting dataset is not what I expected, as the output below shows:
```text
len(dataset)=1050324
len(dataset[:300])=2
len(dataset[0:300])=2
len(dataset.select(range(300)))=300
```
### Steps to reproduce the bug
Load a dataset, then:
```python
dataset = load_from_disk(args.train_data_dir)
print(f"{len(dataset)=}", flush=True)
print(f"{len(dataset[:300])=}", flush=True)
print(f"{len(dataset[0:300])=}", flush=True)
print(f"{len(dataset.select(range(300)))=}", flush=True)
```
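Note that slicing a `Dataset` returns a plain dict mapping column names to lists rather than a new `Dataset`, so `len()` counts columns; that would explain the value 2 above if the dataset has two columns. A quick check (a sketch; `some_column` stands for any real column name):
```python
batch = dataset[:300]
print(type(batch))                 # <class 'dict'>
print(list(batch.keys()))          # the column names
print(len(batch["some_column"]))   # 300, i.e. rows per column
```
`dataset.select(range(300))` is the call that actually returns a 300-row `Dataset`.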
### Expected behavior
```text
len(dataset)=1050324
len(dataset[:300])=300
len(dataset[0:300])=300
len(dataset.select(range(300)))=300
```
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- `huggingface_hub` version: 0.20.2
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6748/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6747/comments | https://api.github.com/repos/huggingface/datasets/issues/6747/events | https://github.com/huggingface/datasets/pull/6747 | 2,201,219,384 | PR_kwDODunzps5qa5L- | 6,747 | chore(deps): bump fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/3659196?v=4",
"events_url": "https://api.github.com/users/shcheklein/events{/privacy}",
"followers_url": "https://api.github.com/users/shcheklein/followers",
"following_url": "https://api.github.com/users/shcheklein/following{/other_user}",
"gists_url": "https://api.github.com/users/shcheklein/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shcheklein",
"id": 3659196,
"login": "shcheklein",
"node_id": "MDQ6VXNlcjM2NTkxOTY=",
"organizations_url": "https://api.github.com/users/shcheklein/orgs",
"received_events_url": "https://api.github.com/users/shcheklein/received_events",
"repos_url": "https://api.github.com/users/shcheklein/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shcheklein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shcheklein/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shcheklein",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-21T21:25:49 | 2024-03-22T16:40:15 | 2024-03-22T16:28:40 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6747.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6747",
"merged_at": "2024-03-22T16:28:40",
"patch_url": "https://github.com/huggingface/datasets/pull/6747.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6747"
} | There were a few fixes released recently, some DVC ecosystem packages require newer version of `fsspec`. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6747/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6747/timeline | null | null | true | 19.0475 |
https://api.github.com/repos/huggingface/datasets/issues/6746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6746/comments | https://api.github.com/repos/huggingface/datasets/issues/6746/events | https://github.com/huggingface/datasets/issues/6746 | 2,198,993,949 | I_kwDODunzps6DEfwd | 6,746 | ExpectedMoreSplits error when loading C4 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/65165345?v=4",
"events_url": "https://api.github.com/users/billwang485/events{/privacy}",
"followers_url": "https://api.github.com/users/billwang485/followers",
"following_url": "https://api.github.com/users/billwang485/following{/other_user}",
"gists_url": "https://api.github.com/users/billwang485/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/billwang485",
"id": 65165345,
"login": "billwang485",
"node_id": "MDQ6VXNlcjY1MTY1MzQ1",
"organizations_url": "https://api.github.com/users/billwang485/orgs",
"received_events_url": "https://api.github.com/users/billwang485/received_events",
"repos_url": "https://api.github.com/users/billwang485/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/billwang485/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/billwang485/subscriptions",
"type": "User",
"url": "https://api.github.com/users/billwang485",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 8 | 2024-03-21T02:53:04 | 2024-09-18T19:57:14 | 2024-07-29T07:21:08 | NONE | null | null | null | ### Describe the bug
I encounter a bug when running the example command line:
```bash
python main.py \
--model decapoda-research/llama-7b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save out/llama_7b/unstructured/wanda/
```
The bug occurred at these lines of code (when loading the C4 dataset):
```python
traindata = load_dataset('allenai/c4', 'allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train')
valdata = load_dataset('allenai/c4', 'allenai--c4', data_files={'validation': 'en/c4-validation.00000-of-00008.json.gz'}, split='validation')
```
The error message states:
```
raise ExpectedMoreSplits(str(set(expected_splits) - set(recorded_splits)))
datasets.utils.info_utils.ExpectedMoreSplits: {'validation'}
```
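A possible workaround (a sketch, not verified against this exact setup) is to skip split verification so that the recorded `validation` split is not enforced when loading only `train`:
```python
from datasets import load_dataset

traindata = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
    verification_mode="no_checks",  # skip the ExpectedMoreSplits check
)
```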
### Steps to reproduce the bug
1. I encounter the bug when running the example command line above.
### Expected behavior
The dataset loads successfully, without raising `ExpectedMoreSplits`.
### Environment info
I'm using CUDA 12.4, so I use `pip install pytorch` instead of the conda command provided in install.md.
Also, I've tried another environment using the same commands in install.md, but the same bug occurred. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6746/timeline | null | completed | false | 3,124.467778 |
https://api.github.com/repos/huggingface/datasets/issues/6745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6745/comments | https://api.github.com/repos/huggingface/datasets/issues/6745/events | https://github.com/huggingface/datasets/issues/6745 | 2,198,541,732 | I_kwDODunzps6DCxWk | 6,745 | Scraping the whole of github including private repos is bad; kindly stop | {
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghost",
"id": 10137,
"login": "ghost",
"node_id": "MDQ6VXNlcjEwMTM3",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"repos_url": "https://api.github.com/users/ghost/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghost",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | 2024-03-20T20:54:06 | 2024-03-21T12:28:04 | 2024-03-21T10:24:56 | NONE | null | null | null | ### Feature request
https://github.com/bigcode-project/opt-out-v2 - Opt-out is not consent. Kindly quit this ridiculous nonsense.
### Motivation
[EDITED: insults not tolerated]
### Your contribution
[EDITED: insults not tolerated] | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6745/timeline | null | completed | false | 13.513889 |
https://api.github.com/repos/huggingface/datasets/issues/6744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6744/comments | https://api.github.com/repos/huggingface/datasets/issues/6744/events | https://github.com/huggingface/datasets/issues/6744 | 2,197,910,168 | I_kwDODunzps6DAXKY | 6,744 | Option to disable file locking | {
"avatar_url": "https://avatars.githubusercontent.com/u/35767167?v=4",
"events_url": "https://api.github.com/users/VRehnberg/events{/privacy}",
"followers_url": "https://api.github.com/users/VRehnberg/followers",
"following_url": "https://api.github.com/users/VRehnberg/following{/other_user}",
"gists_url": "https://api.github.com/users/VRehnberg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/VRehnberg",
"id": 35767167,
"login": "VRehnberg",
"node_id": "MDQ6VXNlcjM1NzY3MTY3",
"organizations_url": "https://api.github.com/users/VRehnberg/orgs",
"received_events_url": "https://api.github.com/users/VRehnberg/received_events",
"repos_url": "https://api.github.com/users/VRehnberg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/VRehnberg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VRehnberg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/VRehnberg",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-03-20T15:59:45 | 2024-03-20T15:59:45 | null | NONE | null | null | null | ### Feature request
Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.
### Motivation
File locking doesn't work on all file systems (in my case, NFS-mounted Weka). If `cache_dir` only held small files, it would be possible to point it at local disk and the problem would be solved. However, since `cache_dir` is where both the small info files and the processed datasets are written, this isn't a feasible solution.
Considering https://github.com/huggingface/datasets/issues/6395, I still think this is something that belongs in HuggingFace. The ability to control packages separately is valuable: a user might have their dataset on a file system that doesn't support file locking while using file locking on local disk to control some other type of access.
### Your contribution
My suggested solution:
```diff
diff --git a/src/datasets/utils/_filelock.py b/src/datasets/utils/_filelock.py
index 19620e6e..58f41a02 100644
--- a/src/datasets/utils/_filelock.py
+++ b/src/datasets/utils/_filelock.py
@@ -18,11 +18,15 @@
 import os
 from filelock import FileLock as FileLock_
-from filelock import UnixFileLock
+from filelock import SoftFileLock, UnixFileLock
 from filelock import __version__ as _filelock_version
 from packaging import version
+if os.getenv('HF_USE_SOFTFILELOCK', 'false').lower() in ('true', '1'):
+    FileLock_ = SoftFileLock
+
+
 class FileLock(FileLock_):
     """
     A `filelock.FileLock` initializer that handles long paths.
```
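With such a patch applied, opting into soft file locks would be a one-line environment change. A usage sketch (`HF_USE_SOFTFILELOCK` is the variable proposed in the diff above; `train.py` is a placeholder):
```bash
HF_USE_SOFTFILELOCK=true python train.py
```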
| null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6744/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6743/comments | https://api.github.com/repos/huggingface/datasets/issues/6743/events | https://github.com/huggingface/datasets/pull/6743 | 2,195,481,697 | PR_kwDODunzps5qHeMZ | 6,743 | Allow null values in dict columns | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-03-19T16:54:22 | 2024-04-08T13:08:42 | 2024-03-19T20:05:19 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6743",
"merged_at": "2024-03-19T20:05:19",
"patch_url": "https://github.com/huggingface/datasets/pull/6743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6743"
} | Fix #6738 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6743/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6743/timeline | null | null | true | 3.1825 |
https://api.github.com/repos/huggingface/datasets/issues/6742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6742/comments | https://api.github.com/repos/huggingface/datasets/issues/6742/events | https://github.com/huggingface/datasets/pull/6742 | 2,195,134,854 | PR_kwDODunzps5qGSfG | 6,742 | Fix missing download_config in get_data_patterns | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-19T14:29:25 | 2024-03-19T18:24:39 | 2024-03-19T18:15:13 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6742",
"merged_at": "2024-03-19T18:15:13",
"patch_url": "https://github.com/huggingface/datasets/pull/6742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6742"
} | Reported in https://github.com/huggingface/datasets-server/issues/2607 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6742/timeline | null | null | true | 3.763333 |
https://api.github.com/repos/huggingface/datasets/issues/6741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6741/comments | https://api.github.com/repos/huggingface/datasets/issues/6741/events | https://github.com/huggingface/datasets/pull/6741 | 2,194,626,108 | PR_kwDODunzps5qEiu3 | 6,741 | Fix offline mode with single config | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-19T10:48:32 | 2024-03-25T16:35:21 | 2024-03-25T16:23:59 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6741",
"merged_at": "2024-03-25T16:23:59",
"patch_url": "https://github.com/huggingface/datasets/pull/6741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6741"
} | Reported in https://github.com/huggingface/datasets/issues/4760
The cache was not able to reload a dataset with a single config from the cache if the config name is not specified.
For example
```python
from datasets import load_dataset, config
config.HF_DATASETS_OFFLINE = True
load_dataset("openai_humaneval")
```
This was due to a regression in https://github.com/huggingface/datasets/pull/6632 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6741/timeline | null | null | true | 149.590833 |
https://api.github.com/repos/huggingface/datasets/issues/6740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6740/comments | https://api.github.com/repos/huggingface/datasets/issues/6740/events | https://github.com/huggingface/datasets/issues/6740 | 2,193,172,074 | I_kwDODunzps6CuSZq | 6,740 | Support for loading geotiff files as a part of the ImageFolder | {
"avatar_url": "https://avatars.githubusercontent.com/u/31362090?v=4",
"events_url": "https://api.github.com/users/sunny1401/events{/privacy}",
"followers_url": "https://api.github.com/users/sunny1401/followers",
"following_url": "https://api.github.com/users/sunny1401/following{/other_user}",
"gists_url": "https://api.github.com/users/sunny1401/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sunny1401",
"id": 31362090,
"login": "sunny1401",
"node_id": "MDQ6VXNlcjMxMzYyMDkw",
"organizations_url": "https://api.github.com/users/sunny1401/orgs",
"received_events_url": "https://api.github.com/users/sunny1401/received_events",
"repos_url": "https://api.github.com/users/sunny1401/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sunny1401/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunny1401/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sunny1401",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 0 | 2024-03-18T20:00:39 | 2024-03-27T18:19:48 | 2024-03-27T18:19:20 | NONE | null | null | null | ### Feature request
Request to add rasterio support for loading GeoTIFF files as part of ImageFolder, instead of using PIL.
### Motivation
As of now, there are many datasets on the HuggingFace Hub which are predominantly focused on remote sensing or come from remote sensing. The current ImageFolder (if I have understood correctly) uses PIL. This is not well suited here, because these datasets mostly contain images with many channels and additional metadata, and loading them with PIL loses that information unless we provide a custom script. Hence, maybe a common API could be added for this?
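For illustration, this is roughly the kind of custom loading code every such dataset currently has to ship (a sketch; the file path is a placeholder):
```python
import rasterio

with rasterio.open("path/to/tile.tif") as src:
    pixels = src.read()  # numpy array of shape (channels, height, width)
    geo_metadata = {"crs": src.crs, "transform": src.transform, "count": src.count}
```
Built-in support in ImageFolder would make such scripts unnecessary.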
### Your contribution
If the issue is accepted, I can contribute the code, because I would like to have it automated and generalised.
"avatar_url": "https://avatars.githubusercontent.com/u/31362090?v=4",
"events_url": "https://api.github.com/users/sunny1401/events{/privacy}",
"followers_url": "https://api.github.com/users/sunny1401/followers",
"following_url": "https://api.github.com/users/sunny1401/following{/other_user}",
"gists_url": "https://api.github.com/users/sunny1401/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sunny1401",
"id": 31362090,
"login": "sunny1401",
"node_id": "MDQ6VXNlcjMxMzYyMDkw",
"organizations_url": "https://api.github.com/users/sunny1401/orgs",
"received_events_url": "https://api.github.com/users/sunny1401/received_events",
"repos_url": "https://api.github.com/users/sunny1401/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sunny1401/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunny1401/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sunny1401",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6740/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6740/timeline | null | not_planned | false | 214.311389 |
https://api.github.com/repos/huggingface/datasets/issues/6739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6739/comments | https://api.github.com/repos/huggingface/datasets/issues/6739/events | https://github.com/huggingface/datasets/pull/6739 | 2,192,730,134 | PR_kwDODunzps5p-Bwe | 6,739 | Transpose images with EXIF Orientation tag | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-18T16:43:06 | 2024-03-19T15:35:57 | 2024-03-19T15:29:42 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6739",
"merged_at": "2024-03-19T15:29:41",
"patch_url": "https://github.com/huggingface/datasets/pull/6739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6739"
} | Closes https://github.com/huggingface/datasets/issues/6252 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6739/timeline | null | null | true | 22.776667 |
https://api.github.com/repos/huggingface/datasets/issues/6738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6738/comments | https://api.github.com/repos/huggingface/datasets/issues/6738/events | https://github.com/huggingface/datasets/issues/6738 | 2,192,386,536 | I_kwDODunzps6CrSno | 6,738 | Dict feature is non-nullable while nested dict feature is | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 3 | 2024-03-18T14:31:47 | 2024-03-20T10:24:15 | 2024-03-19T20:05:20 | CONTRIBUTOR | null | null | null | When i try to create a `Dataset` object with None values inside a dict column, like this:
```python
from datasets import Dataset, Features, Value
Dataset.from_dict(
    {
        "dict": [{"a": 0, "b": 0}, None],
    },
    features=Features({"dict": {"a": Value("int16"), "b": Value("int16")}}),
)
```
I get `ValueError: Got None but expected a dictionary instead`.
At the same time, having None in a _nested_ dict feature works; for example, this doesn't throw any errors:
```python
from datasets import Dataset, Features, Value, Sequence
dataset = Dataset.from_dict(
    {
        "list_dict": [[{"a": 0, "b": 0}], None],
        "sequence_dict": [[{"a": 0, "b": 0}], None],
    },
    features=Features({
        "list_dict": [{"a": Value("int16"), "b": Value("int16")}],
        "sequence_dict": Sequence({"a": Value("int16"), "b": Value("int16")}),
    }),
)
```
Other types of features also seem to be nullable (but I haven't checked all of them).
The version of `datasets` is the latest at the moment (2.18.0).
Is this expected behavior or a bug? | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6738/timeline | null | completed | false | 29.559167 |
https://api.github.com/repos/huggingface/datasets/issues/6737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6737/comments | https://api.github.com/repos/huggingface/datasets/issues/6737/events | https://github.com/huggingface/datasets/issues/6737 | 2,190,198,425 | I_kwDODunzps6Ci8aZ | 6,737 | Invalid pattern: '**' can only be an entire path component | {
"avatar_url": "https://avatars.githubusercontent.com/u/28976175?v=4",
"events_url": "https://api.github.com/users/JPonsa/events{/privacy}",
"followers_url": "https://api.github.com/users/JPonsa/followers",
"following_url": "https://api.github.com/users/JPonsa/following{/other_user}",
"gists_url": "https://api.github.com/users/JPonsa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JPonsa",
"id": 28976175,
"login": "JPonsa",
"node_id": "MDQ6VXNlcjI4OTc2MTc1",
"organizations_url": "https://api.github.com/users/JPonsa/orgs",
"received_events_url": "https://api.github.com/users/JPonsa/received_events",
"repos_url": "https://api.github.com/users/JPonsa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JPonsa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JPonsa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JPonsa",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2024-03-16T19:28:46 | 2024-07-23T14:23:28 | 2024-05-13T11:32:57 | NONE | null | null | null | ### Describe the bug
`ValueError: Invalid pattern: '**' can only be an entire path component` is raised when loading any dataset.
### Steps to reproduce the bug
```python
import datasets

ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style")
```
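A commonly reported workaround for this error (an assumption based on similar reports, not verified here) is pinning `fsspec` to an older release, since newer `fsspec` glob handling triggers it in combination with some `datasets`/`huggingface_hub` versions:
```bash
pip install "fsspec<=2023.9.2"
```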
### Expected behavior
loading the dataset successfully
### Environment info
- `datasets` version: 2.18.0
- Platform: Windows-10-10.0.22631-SP0
- Python version: 3.11.7
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.12.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6737/timeline | null | completed | false | 1,384.069722 |
https://api.github.com/repos/huggingface/datasets/issues/6736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6736/comments | https://api.github.com/repos/huggingface/datasets/issues/6736/events | https://github.com/huggingface/datasets/issues/6736 | 2,190,181,422 | I_kwDODunzps6Ci4Qu | 6,736 | Mosaic Streaming (MDS) Support | {
"avatar_url": "https://avatars.githubusercontent.com/u/2498509?v=4",
"events_url": "https://api.github.com/users/siddk/events{/privacy}",
"followers_url": "https://api.github.com/users/siddk/followers",
"following_url": "https://api.github.com/users/siddk/following{/other_user}",
"gists_url": "https://api.github.com/users/siddk/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/siddk",
"id": 2498509,
"login": "siddk",
"node_id": "MDQ6VXNlcjI0OTg1MDk=",
"organizations_url": "https://api.github.com/users/siddk/orgs",
"received_events_url": "https://api.github.com/users/siddk/received_events",
"repos_url": "https://api.github.com/users/siddk/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/siddk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/siddk/subscriptions",
"type": "User",
"url": "https://api.github.com/users/siddk",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 1 | 2024-03-16T18:42:04 | 2024-03-18T15:13:34 | null | NONE | null | null | null | ### Feature request
I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically their [MDS Format](https://docs.mosaicml.com/projects/streaming/en/stable/fundamentals/dataset_format.html#mds).
Because the shard files have similar semantics to WebDataset, I'm hoping that adding such support won't be too much trouble?
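For a sense of how close the semantics are, here is a minimal MDS write-side sketch based on Mosaic's documented API (the column layout and sample values are made up):
```python
from streaming import MDSWriter

columns = {"text": "str", "label": "int"}
samples = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]

with MDSWriter(out="./mds-out", columns=columns) as writer:
    for sample in samples:
        writer.write(sample)
```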
### Motivation
One of the downsides of WebDataset is the lack of out-of-the-box determinism (especially for large-scale training and reproducibility), easy job resumption, and the ability to quickly debug / visualize individual examples.
Mosaic Streaming provides a [great interface for this out of the box](https://docs.mosaicml.com/projects/streaming/en/stable/#key-features), so I'd love to see it supported in HF Datasets.
### Your contribution
Happy to help test things / provide example data. Can potentially submit a PR if maintainers could point me to the necessary WebDataset logic / steps for adding a new streaming format! | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6736/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6735/comments | https://api.github.com/repos/huggingface/datasets/issues/6735/events | https://github.com/huggingface/datasets/pull/6735 | 2,189,132,932 | PR_kwDODunzps5px84g | 6,735 | Add `mode` parameter to `Image` feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-15T17:21:12 | 2024-03-18T15:47:48 | 2024-03-18T15:41:33 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6735.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6735",
"merged_at": "2024-03-18T15:41:33",
"patch_url": "https://github.com/huggingface/datasets/pull/6735.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6735"
} | Fix https://github.com/huggingface/datasets/issues/6675 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6735/timeline | null | null | true | 70.339167 |
https://api.github.com/repos/huggingface/datasets/issues/6734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6734/comments | https://api.github.com/repos/huggingface/datasets/issues/6734/events | https://github.com/huggingface/datasets/issues/6734 | 2,187,646,694 | I_kwDODunzps6CZNbm | 6,734 | Tokenization slows towards end of dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/98723285?v=4",
"events_url": "https://api.github.com/users/ethansmith2000/events{/privacy}",
"followers_url": "https://api.github.com/users/ethansmith2000/followers",
"following_url": "https://api.github.com/users/ethansmith2000/following{/other_user}",
"gists_url": "https://api.github.com/users/ethansmith2000/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ethansmith2000",
"id": 98723285,
"login": "ethansmith2000",
"node_id": "U_kgDOBeJl1Q",
"organizations_url": "https://api.github.com/users/ethansmith2000/orgs",
"received_events_url": "https://api.github.com/users/ethansmith2000/received_events",
"repos_url": "https://api.github.com/users/ethansmith2000/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ethansmith2000/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ethansmith2000/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ethansmith2000",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-03-15T03:27:36 | 2024-04-11T10:48:07 | null | NONE | null | null | null | ### Describe the bug
Mapped tokenization slows down substantially towards the end of the dataset.
The train set started off very slow, caught up around the 20k mark, then tapered off until the end.
What's particularly strange is that the tokenization crashed a few times before, due to errors with invalid tokens somewhere or corrupted downloads, and the speed-ups/slow-downs consistently happened at the same points each time.
```text
Running tokenizer on dataset (num_proc=48): 0%| | 847000/881416735 [12:18<252:45:45, 967.72 examples/s]
Running tokenizer on dataset (num_proc=48): 0%| | 848000/881416735 [12:19<224:16:10, 1090.66 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84964000/881416735 [3:48:00<11:21:34, 19476.01 examples/s]
Running tokenizer on dataset (num_proc=48): 10%|▉ | 84967000/881416735 [3:48:00<12:04:01, 18333.79 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538631977/881416735 [13:46:40<27:50:04, 3420.84 examples/s]
Running tokenizer on dataset (num_proc=48): 61%|██████ | 538632977/881416735 [13:46:40<23:48:20, 3999.77 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881365886/881416735 [38:30:19<04:34, 185.10 examples/s]
Running tokenizer on dataset (num_proc=48): 100%|█████████▉| 881366886/881416735 [38:30:25<04:36, 180.57 examples/s]
```
and the validation set as well:
```bash
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41544000/46390354 [28:44<02:37, 30798.76 examples/s]
Running tokenizer on dataset (num_proc=48): 90%|████████▉ | 41550000/46390354 [28:44<02:08, 37698.08 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:15:48<12:22:44, 36.87 examples/s]
Running tokenizer on dataset (num_proc=48): 96%|█████████▋| 44747422/46390354 [2:16:00<12:22:44, 36.87 examples/s]
```
### Steps to reproduce the bug
Using the following kwargs:
```python
with accelerator.main_process_first():
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
        num_proc=48,
load_from_cache_file=True,
desc=f"Grouping texts in chunks of {block_size}",
)
```
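For context, a minimal sketch of the `group_texts` function assumed above, modeled on the standard chunking helper from the Transformers language-modeling examples (`block_size` is assumed to be defined elsewhere):
```python
def group_texts(examples):
    # Concatenate every column of the batch into one long list of tokens.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # Drop the tail so every chunk is exactly block_size tokens long.
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```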
Running through a Slurm script:
```bash
#SBATCH --partition=gpu-nvidia-a100
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --gpus-per-task=8
#SBATCH --cpus-per-task=96
```
using this dataset https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T
### Expected behavior
Constant speed throughout
### Environment info
- `datasets` version: 2.15.0
- Platform: Linux-5.15.0-1049-aws-x86_64-with-glibc2.10
- Python version: 3.8.18
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6734/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6733/comments | https://api.github.com/repos/huggingface/datasets/issues/6733/events | https://github.com/huggingface/datasets/issues/6733 | 2,186,811,724 | I_kwDODunzps6CWBlM | 6,733 | EmptyDatasetError when loading dataset downloaded with HuggingFace cli | {
"avatar_url": "https://avatars.githubusercontent.com/u/77196999?v=4",
"events_url": "https://api.github.com/users/StwayneXG/events{/privacy}",
"followers_url": "https://api.github.com/users/StwayneXG/followers",
"following_url": "https://api.github.com/users/StwayneXG/following{/other_user}",
"gists_url": "https://api.github.com/users/StwayneXG/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/StwayneXG",
"id": 77196999,
"login": "StwayneXG",
"node_id": "MDQ6VXNlcjc3MTk2OTk5",
"organizations_url": "https://api.github.com/users/StwayneXG/orgs",
"received_events_url": "https://api.github.com/users/StwayneXG/received_events",
"repos_url": "https://api.github.com/users/StwayneXG/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/StwayneXG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StwayneXG/subscriptions",
"type": "User",
"url": "https://api.github.com/users/StwayneXG",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-03-14T16:41:27 | 2024-03-15T18:09:02 | null | NONE | null | null | null | ### Describe the bug
I am using a cluster that does not have internet access while a job is running. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset, but I get an error:
```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files") from None```
The dataset I'm using is "lmsys/chatbot_arena_conversations". The folder structure is
- README.md
- data
- train-00000-of-00001-cced8514c7ed782a.parquet
### Steps to reproduce the bug
1. Download dataset using HuggingFace CLI: ```huggingface-cli download lmsys/chatbot_arena_conversations --local-dir ./lmsys/chatbot_arena_conversations```
2. In Python
```
from datasets import load_dataset
load_dataset("lmsys/chatbot_arena_conversations")
```
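As a possible workaround sketch (the path assumes the folder structure above), the already-downloaded parquet file can be loaded directly with the parquet builder:
```python
from datasets import load_dataset

# Point the parquet builder at the file downloaded by huggingface-cli.
ds = load_dataset(
    "parquet",
    data_files={"train": "lmsys/chatbot_arena_conversations/data/train-*.parquet"},
)
```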
### Expected behavior
Should return a Dataset Dict in the form of
```
DatasetDict({
train: Dataset({
features: [...],
num_rows: 33,000
})
})
```
### Environment info
Python 3.11.5
Datasets 2.18.0
Transformers 4.38.2
Pytorch 2.2.0
Pyarrow 15.0.1
Rocky Linux release 8.9 (Green Obsidian)
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6733/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6731/comments | https://api.github.com/repos/huggingface/datasets/issues/6731/events | https://github.com/huggingface/datasets/issues/6731 | 2,182,844,673 | I_kwDODunzps6CG5EB | 6,731 | Unexpected behavior when using load_dataset with streaming=True in a for loop | {
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"events_url": "https://api.github.com/users/uApiv/events{/privacy}",
"followers_url": "https://api.github.com/users/uApiv/followers",
"following_url": "https://api.github.com/users/uApiv/following{/other_user}",
"gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uApiv",
"id": 42908296,
"login": "uApiv",
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"organizations_url": "https://api.github.com/users/uApiv/orgs",
"received_events_url": "https://api.github.com/users/uApiv/received_events",
"repos_url": "https://api.github.com/users/uApiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uApiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uApiv",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-12T23:26:43 | 2024-04-16T00:00:00 | 2024-04-16T00:00:00 | NONE | null | null | null | ### Describe the bug
### My Code
```
from datasets import load_dataset
res = []
for i in [0, 1]:
    di = load_dataset(
        "json",
        data_files='path_to.json',
        split='train',
        streaming=True,
    ).map(lambda x: {"source": i})
    res.append(di)

for e in res[0]:
    print(e)
```
### Unexpected Behavior
Data in `res[0]` has `source=1`; however, the expected value is 0.
### FYI
I further switched `streaming` to `False`, and the output value was as expected (0). So there may be a bug in setting `streaming=True` in a for loop.
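For reference, this looks like Python's late-binding closure behavior combined with lazy evaluation: with `streaming=True`, the `map` lambda only runs at iteration time, after the loop has finished and `i` holds its final value. A minimal sketch of the usual fix, binding `i` by value as a default argument:
```python
from datasets import load_dataset

res = []
for i in [0, 1]:
    di = load_dataset(
        "json",
        data_files='path_to.json',
        split='train',
        streaming=True,
    ).map(lambda x, i=i: {"source": i})  # i=i freezes the current value
    res.append(di)
```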
### Environment
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
### Steps to reproduce the bug
1. Create a JSON file with any content.
2. Run the provided code.
3. Switch `streaming` to `False` and run again to see the expected behavior.
### Expected behavior
The expected behavior is that the data are mapped with their corresponding value from the for loop.
### Environment info
Python 3.8.0
datasets==2.18.0
transformers==4.28.1
Ubuntu 20.04 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42908296?v=4",
"events_url": "https://api.github.com/users/uApiv/events{/privacy}",
"followers_url": "https://api.github.com/users/uApiv/followers",
"following_url": "https://api.github.com/users/uApiv/following{/other_user}",
"gists_url": "https://api.github.com/users/uApiv/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/uApiv",
"id": 42908296,
"login": "uApiv",
"node_id": "MDQ6VXNlcjQyOTA4Mjk2",
"organizations_url": "https://api.github.com/users/uApiv/orgs",
"received_events_url": "https://api.github.com/users/uApiv/received_events",
"repos_url": "https://api.github.com/users/uApiv/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/uApiv/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/uApiv/subscriptions",
"type": "User",
"url": "https://api.github.com/users/uApiv",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6731/timeline | null | completed | false | 816.554722 |
https://api.github.com/repos/huggingface/datasets/issues/6730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6730/comments | https://api.github.com/repos/huggingface/datasets/issues/6730/events | https://github.com/huggingface/datasets/pull/6730 | 2,181,881,499 | PR_kwDODunzps5pZDsB | 6,730 | Deprecate Pandas builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-12T15:12:13 | 2024-03-12T17:42:33 | 2024-03-12T17:36:24 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6730.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6730",
"merged_at": "2024-03-12T17:36:24",
"patch_url": "https://github.com/huggingface/datasets/pull/6730.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6730"
} | The Pandas packaged builder is undocumented and relies on `pickle` to read the data, making it **unsafe**. Moreover, I haven't seen a single instance of this builder being used (not even using the GH/Hub search), so we should deprecate it. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6730/timeline | null | null | true | 2.403056 |
https://api.github.com/repos/huggingface/datasets/issues/6729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6729/comments | https://api.github.com/repos/huggingface/datasets/issues/6729/events | https://github.com/huggingface/datasets/issues/6729 | 2,180,237,159 | I_kwDODunzps6B88dn | 6,729 | Support zipfiles that span multiple disks? | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | [] | null | 6 | 2024-03-11T21:07:41 | 2024-06-26T05:08:59 | 2024-06-26T05:05:28 | COLLABORATOR | null | null | null | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 67, in compute_config_names_response
get_dataset_config_names(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 347, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1871, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1846, in dataset_module_factory
return HubDatasetModuleFactoryWithoutScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1240, in get_module
module_name, default_builder_kwargs = infer_module_for_data_files(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 584, in infer_module_for_data_files
split_modules = {
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 585, in <dictcomp>
split: infer_module_for_data_files_list(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 526, in infer_module_for_data_files_list
return infer_module_for_data_files_list_in_archives(data_files_list, download_config=download_config)
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 554, in infer_module_for_data_files_list_in_archives
for f in xglob(extracted, recursive=True, download_config=download_config)[
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 576, in xglob
fs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/core.py", line 622, in get_fs_token_paths
fs = filesystem(protocol, **inkwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/registry.py", line 290, in filesystem
return cls(**storage_options)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/spec.py", line 79, in __call__
obj = super().__call__(*args, **kwargs)
File "/src/services/worker/.venv/lib/python3.9/site-packages/fsspec/implementations/zip.py", line 57, in __init__
self.zip = zipfile.ZipFile(
File "/usr/local/lib/python3.9/zipfile.py", line 1266, in __init__
self._RealGetContents()
File "/usr/local/lib/python3.9/zipfile.py", line 1329, in _RealGetContents
endrec = _EndRecData(fp)
File "/usr/local/lib/python3.9/zipfile.py", line 286, in _EndRecData
return _EndRecData64(fpin, -sizeEndCentDir, endrec)
File "/usr/local/lib/python3.9/zipfile.py", line 232, in _EndRecData64
raise BadZipFile("zipfiles that span multiple disks are not supported")
zipfile.BadZipFile: zipfiles that span multiple disks are not supported
```
The files (https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream/tree/main/data) are:
<img width="629" alt="Screenshot 2024-03-11 at 22 07 30" src="https://github.com/huggingface/datasets/assets/1676121/0bb15a51-d54f-4d73-8572-e427ea644b36">
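For context, a minimal sketch of the underlying stdlib limitation (the archive name is hypothetical; any split archive whose end-of-central-directory record reports more than one disk triggers it):
```python
import zipfile

# Split archive segments: archive.z01, archive.z02, ..., archive.zip.
# CPython's zipfile only reads the final segment and rejects multi-disk
# archives outright in _EndRecData64, as in the traceback above.
try:
    zipfile.ZipFile("archive.zip")
except zipfile.BadZipFile as err:
    print(err)  # zipfiles that span multiple disks are not supported
```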
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6729/timeline | null | not_planned | false | 2,551.963056 |
https://api.github.com/repos/huggingface/datasets/issues/6728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6728/comments | https://api.github.com/repos/huggingface/datasets/issues/6728/events | https://github.com/huggingface/datasets/issues/6728 | 2,178,607,012 | I_kwDODunzps6B2uek | 6,728 | Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT` | {
"avatar_url": "https://avatars.githubusercontent.com/u/10057041?v=4",
"events_url": "https://api.github.com/users/padeoe/events{/privacy}",
"followers_url": "https://api.github.com/users/padeoe/followers",
"following_url": "https://api.github.com/users/padeoe/following{/other_user}",
"gists_url": "https://api.github.com/users/padeoe/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/padeoe",
"id": 10057041,
"login": "padeoe",
"node_id": "MDQ6VXNlcjEwMDU3MDQx",
"organizations_url": "https://api.github.com/users/padeoe/orgs",
"received_events_url": "https://api.github.com/users/padeoe/received_events",
"repos_url": "https://api.github.com/users/padeoe/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/padeoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/padeoe/subscriptions",
"type": "User",
"url": "https://api.github.com/users/padeoe",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-03-11T09:06:38 | 2024-03-15T14:52:07 | 2024-03-15T14:52:07 | NONE | null | null | null | ### Describe the bug
This bug is triggered under the following conditions:
- datasets repo ids without organization names trigger errors, such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than in the form of `A/B`.
- If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`.
- This issue occurs with `datasets>2.15.0` or `huggingface-hub>0.19.4`, for example with the latest versions: `datasets==2.18.0` and `huggingface-hub==0.21.4`.
### Steps to reproduce the bug
The issue can be reproduced with the following code:
1. Install specific versions of datasets and huggingface_hub.
```bash
pip install datasets==2.18.0
pip install huggingface_hub==0.21.4
```
2. execute python code.
```Python
import os
os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
from datasets import load_dataset
bookcorpus = load_dataset('bookcorpus', split='train')
```
console output:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 2228, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1879, in dataset_module_factory
raise e1 from None
File "/home/padeoe/.local/lib/python3.10/site-packages/datasets/load.py", line 1830, in dataset_module_factory
with fs.open(f"datasets/{path}/{filename}", "r", encoding="utf-8") as f:
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1295, in open
self.open(
File "/home/padeoe/.local/lib/python3.10/site-packages/fsspec/spec.py", line 1307, in open
f = self._open(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 228, in _open
return HfFileSystemFile(self, path, mode=mode, revision=revision, block_size=block_size, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 615, in __init__
self.resolved_path = fs.resolve_path(path, revision=revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 180, in resolve_path
repo_and_revision_exist, err = self._repo_and_revision_exist(repo_type, repo_id, revision)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_file_system.py", line 117, in _repo_and_revision_exist
self._api.repo_info(repo_id, revision=revision, repo_type=repo_type)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2413, in repo_info
return method(
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn
return fn(*args, **kwargs)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2286, in dataset_info
hf_raise_for_status(r)
File "/home/padeoe/.local/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 362, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 401 Client Error: Unauthorized for url: https://hf-mirror.com/api/datasets/bookcorpus/bookcorpus.py (Request ID: Root=1-65ee8659-5ab10eec5960c63e71f2bb58;b00bdbea-fd6e-4a74-8fe0-bc4682ae090e)
```
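Until the endpoint handling is fixed upstream, a possible workaround sketch based on the version conditions noted above (assuming the older releases are acceptable for your use case):
```bash
# Pin the last releases that, per the conditions above, do not hit the bug.
pip install "datasets==2.15.0" "huggingface_hub==0.19.4"
```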
### Expected behavior
The dataset was downloaded correctly without any errors.
### Environment info
datasets==2.18.0
huggingface-hub==0.21.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6728/timeline | null | completed | false | 101.758056 |
https://api.github.com/repos/huggingface/datasets/issues/6727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6727/comments | https://api.github.com/repos/huggingface/datasets/issues/6727/events | https://github.com/huggingface/datasets/pull/6727 | 2,177,826,110 | PR_kwDODunzps5pLJyE | 6,727 | Using a registry instead of calling globals for fetching feature types | {
"avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4",
"events_url": "https://api.github.com/users/psmyth94/events{/privacy}",
"followers_url": "https://api.github.com/users/psmyth94/followers",
"following_url": "https://api.github.com/users/psmyth94/following{/other_user}",
"gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/psmyth94",
"id": 11325244,
"login": "psmyth94",
"node_id": "MDQ6VXNlcjExMzI1MjQ0",
"organizations_url": "https://api.github.com/users/psmyth94/orgs",
"received_events_url": "https://api.github.com/users/psmyth94/received_events",
"repos_url": "https://api.github.com/users/psmyth94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/psmyth94",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2024-03-10T17:47:51 | 2024-03-13T12:08:49 | 2024-03-13T10:46:02 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6727.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6727",
"merged_at": "2024-03-13T10:46:02",
"patch_url": "https://github.com/huggingface/datasets/pull/6727.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6727"
} | Hello,
When working with bio-data, each feature often has metadata associated with it (e.g. species, lineage, snp position, etc). To store this, I like to use the feature classes with the added `metadata` attribute. However, when saving or loading with custom features, you get an error since that class doesn't exist in the global namespace in `datasets.features.features`. Take for example,
```python
from dataclasses import dataclass, field
from datasets import Dataset
from datasets.features.features import Value, Features
@dataclass
class FeatureA(Value):
    metadata: dict = field(default_factory=dict)
_type: str = field(default="FeatureA", init=False, repr=False)
@dataclass
class FeatureB(Value):
metadata: dict = field(default_factory=dict)
_type: str = field(default="FeatureB", init=False, repr=False)
test_data = {
"a": [1, 2, 3],
"b": [4, 5, 6],
}
test_data = Dataset.from_dict(
test_data,
features=Features({
"a": FeatureA("int32", metadata={"species": "lactobacillus acetotolerans"}),
"b": FeatureB("int32", metadata={"species": "lactobacillus iners"}),
})
)
# returns an error since FeatureA and FeatureB are not in the global namespace
test_data.save_to_disk('./test_data')
```
```
Saving the dataset (0/1 shards): 0%| | 0/3 [00:00<?, ? examples/s]
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Cell In[2], line 28
19 test_data = Dataset.from_dict(
20 test_data,
21 features=Features({
(...)
24 })
25 )
27 # returns an error since FeatureA and FeatureB are not in the global namespace
---> 28 test_data.save_to_disk('./test_data')
...
File ~\Documents\datasets\src\datasets\features\features.py:1361, in generate_from_dict(obj)
1359 return {key: generate_from_dict(value) for key, value in obj.items()}
1360 obj = dict(obj)
-> 1361 class_type = globals()[obj.pop("_type")]
1363 if class_type == Sequence:
1364 return Sequence(feature=generate_from_dict(obj["feature"]), length=obj.get("length", -1))
KeyError: 'FeatureA'
```
We can avoid this by having a registry (like formatters) and doing
```python
from datasets.features.features import register_feature
register_feature(FeatureA, "FeatureA")
register_feature(FeatureB, "FeatureB")
test_data.save_to_disk('./test_data')
```
```
Saving the dataset (1/1 shards): 100%|------| 3/3 [00:00<00:00, 211.13 examples/s]
```
and loading from disk returns with all metadata information
```python
from datasets import load_from_disk
test_data = load_from_disk('./test_data')
test_data.features
```
```
{'a': FeatureA(dtype='int32', id=None, metadata={'species': 'lactobacillus acetotolerans'}),
 'b': FeatureB(dtype='int32', id=None, metadata={'species': 'lactobacillus iners'})}
```
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6727/timeline | null | null | true | 64.969722 |
https://api.github.com/repos/huggingface/datasets/issues/6726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6726/comments | https://api.github.com/repos/huggingface/datasets/issues/6726/events | https://github.com/huggingface/datasets/issues/6726 | 2,177,097,232 | I_kwDODunzps6Bw94Q | 6,726 | Profiling for HF Filesystem shows there are easy performance gains to be made | {
"avatar_url": "https://avatars.githubusercontent.com/u/159512661?v=4",
"events_url": "https://api.github.com/users/awgr/events{/privacy}",
"followers_url": "https://api.github.com/users/awgr/followers",
"following_url": "https://api.github.com/users/awgr/following{/other_user}",
"gists_url": "https://api.github.com/users/awgr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/awgr",
"id": 159512661,
"login": "awgr",
"node_id": "U_kgDOCYH4VQ",
"organizations_url": "https://api.github.com/users/awgr/orgs",
"received_events_url": "https://api.github.com/users/awgr/received_events",
"repos_url": "https://api.github.com/users/awgr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/awgr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/awgr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/awgr",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 2 | 2024-03-09T07:08:45 | 2024-03-09T07:11:08 | null | NONE | null | null | null | ### Describe the bug
# Let's make it faster
First, an evidence...
![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965)
Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long.
See? It's pretty slow.
What is resolve pattern doing?
```
resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543
resolve_pattern took 20.815081119537354 seconds
```
Makes sense. How to improve it?
## Bigger project, biggest payoff
Databricks (and consequently, Spark) stores a compressed manifest file of the files contained in the remote filesystem.
Then, you download one tiny file, decompress it, and all the operations are local instead of these shenanigans.
It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data.
This would make resolution time so fast that nobody would ever think about it again.
It also means you either need to have the uploader compute it _every time_, or have a hook that computes it.
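A rough sketch of the idea, assuming `HfApi.list_repo_files` from huggingface_hub as the listing source (the function names here are illustrative, not an existing `datasets` API):
```python
import fnmatch
import gzip
import json

from huggingface_hub import HfApi

def build_manifest(repo_id, path="manifest.json.gz"):
    # One remote listing per dataset revision, stored compressed.
    files = HfApi().list_repo_files(repo_id, repo_type="dataset")
    with gzip.open(path, "wt") as f:
        json.dump(files, f)

def resolve(pattern, path="manifest.json.gz"):
    # Pattern resolution becomes a purely local operation.
    with gzip.open(path, "rt") as f:
        files = json.load(f)
    return fnmatch.filter(files, pattern)
```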
## Smaller project, immediate payoff: Be diligent in avoiding deepcopy
Revise the _ls_tree method to avoid deepcopy:
```
def _ls_tree(
self,
path: str,
recursive: bool = False,
refresh: bool = False,
revision: Optional[str] = None,
expand_info: bool = True,
):
..... omitted .....
for path_info in tree:
if isinstance(path_info, RepoFile):
cache_path_info = {
"name": root_path + "/" + path_info.path,
"size": path_info.size,
"type": "file",
"blob_id": path_info.blob_id,
"lfs": path_info.lfs,
"last_commit": path_info.last_commit,
"security": path_info.security,
}
else:
cache_path_info = {
"name": root_path + "/" + path_info.path,
"size": 0,
"type": "directory",
"tree_id": path_info.tree_id,
"last_commit": path_info.last_commit,
}
parent_path = self._parent(cache_path_info["name"])
self.dircache.setdefault(parent_path, []).append(cache_path_info)
out.append(cache_path_info)
return copy.deepcopy(out) # copy to not let users modify the dircache
```
Observe this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster.
```
def _ls_tree(
self,
path: str,
recursive: bool = False,
refresh: bool = False,
revision: Optional[str] = None,
expand_info: bool = True,
):
..... omitted .....
def make_cache_path_info(path_info):
if isinstance(path_info, RepoFile):
return {
"name": root_path + "/" + path_info.path,
"size": path_info.size,
"type": "file",
"blob_id": path_info.blob_id,
"lfs": path_info.lfs,
"last_commit": path_info.last_commit,
"security": path_info.security,
}
else:
return {
"name": root_path + "/" + path_info.path,
"size": 0,
"type": "directory",
"tree_id": path_info.tree_id,
"last_commit": path_info.last_commit,
}
for path_info in tree:
cache_path_info = make_cache_path_info(path_info)
out_cache_path_info = make_cache_path_info(path_info) # copy to not let users modify the dircache
parent_path = self._parent(cache_path_info["name"])
self.dircache.setdefault(parent_path, []).append(cache_path_info)
out.append(out_cache_path_info)
return out
```
Note there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s.
## Medium project, medium payoff
After the above change, we have this profile:
![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c)
Figure 2: x-axis is 355 seconds. Note that the globbing and the `_ls_tree` deepcopy are gone. No surprise there. It's much faster now, but we still spend ~187 seconds in get_fs_token_paths.
Well, get_fs_token_paths is part of fsspec. We don't need to fix that, because we can trust their developers to write high-performance code. Probably the caller has misconfigured something. Let's take a look at the storage_options being provided to the filesystem that is constructed during this call.
Ah yes, streaming_download_manager::_prepare_single_hop_path_and_storage_options. We know the streaming download manager is not compatible with async right now, but we really need this specific part of the code to be async. We're spending so much time checking isDir on the remote filesystem; it's a huge waste.
We can easily make the call 20-30x faster by using async, removing this performance bottleneck almost entirely (and reducing the total time of this part of the code to <30s). There is no reason to block on isDir calls for streaming.
I'm not going to mess w/ this one myself; I didn't write the streaming impl, and I don't know how it works, but I know the isDir check can be async.
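To illustrate the shape of the win, a minimal sketch that overlaps the blocking `isdir` calls in a thread pool (this is not the streaming manager's actual code, just an assumption about where concurrency could go):
```python
import asyncio

from huggingface_hub import HfFileSystem

async def isdir_many(fs, paths):
    loop = asyncio.get_running_loop()
    # Each fs.isdir() is a blocking HTTP round-trip; overlap them instead of
    # paying for them one after another.
    return await asyncio.gather(
        *(loop.run_in_executor(None, fs.isdir, p) for p in paths)
    )

# Example: flags = asyncio.run(isdir_many(HfFileSystem(), ["datasets/a/x", "datasets/a/y"]))
```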
### Steps to reproduce the bug
```
with cProfile.Profile() as pr:
pr.enable()
# Begin Data
if not os.path.exists(data_cache_dir):
os.makedirs(data_cache_dir, exist_ok=True)
training_dataset = load_dataset(training_dataset_name, split=training_split, cache_dir=data_cache_dir, streaming=True).take(training_slice)
eval_dataset = load_dataset(eval_dataset_name, split=eval_split, cache_dir=data_cache_dir, streaming=True).take(eval_slice)
# End Data
pr.disable()
pr.create_stats()
if not os.path.exists(profiling_path):
os.makedirs(profiling_path, exist_ok=True)
pr.dump_stats(os.path.join(profiling_path, "cprofile.prof"))
```
Run this code with "cerebras/SlimPajama-627B" and whatever other params you need.
### Expected behavior
Something better.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6726/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6726/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6725/comments | https://api.github.com/repos/huggingface/datasets/issues/6725/events | https://github.com/huggingface/datasets/issues/6725 | 2,175,527,530 | I_kwDODunzps6Bq-pq | 6,725 | Request for a comparison of huggingface datasets compared with other data format especially webdataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2024-03-08T08:23:01 | 2024-03-08T08:23:01 | null | NONE | null | null | null | ### Feature request
Request for a comparison of Hugging Face `datasets` with other data formats, especially WebDataset.
### Motivation
I see that Hugging Face `datasets` uses Apache Arrow as its backend, which seems great, but I'm curious how it stacks up against other dataset formats like WebDataset: what are the pros and cons of each?
### Your contribution
More information | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6725/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6725/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6724/comments | https://api.github.com/repos/huggingface/datasets/issues/6724/events | https://github.com/huggingface/datasets/issues/6724 | 2,174,398,227 | I_kwDODunzps6Bmq8T | 6,724 | Dataset with loading script does not work in renamed repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-03-07T17:38:38 | 2024-03-07T20:06:25 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
My data repository was first called `BramVanroy/hplt-mono-v1-2`, but I then renamed it to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains a data loading script, in this line.
https://github.com/huggingface/datasets/blob/6fb6c834f008996c994b0a86c3808d0a33d44525/src/datasets/load.py#L1845
When I print `filename` it returns `hplt-mono-v1-2.py`, but the files in the repo are of course `['.gitattributes', 'README.md', 'hplt_mono_v1_2.py']`. So `filename` is derived from the original repo name instead of the renamed one.
I am not sure whether this is a caching issue, or how I can resolve it.
### Steps to reproduce the bug
```
from datasets import load_dataset
ds = load_dataset(
"BramVanroy/hplt-mono-v1-2",
"ky",
trust_remote_code=True
)
```
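For what it's worth, a possible workaround sketch grounded in the observation above: loading via the renamed id (with underscores) makes the derived script filename match the `hplt_mono_v1_2.py` that actually exists in the repo:
```python
from datasets import load_dataset

# Use the post-rename repo id so the derived script name matches.
ds = load_dataset(
    "BramVanroy/hplt_mono_v1_2",
    "ky",
    trust_remote_code=True,
)
```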
### Expected behavior
That the most recent repo name is used when `filename` is generated.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6724/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6723/comments | https://api.github.com/repos/huggingface/datasets/issues/6723/events | https://github.com/huggingface/datasets/pull/6723 | 2,174,344,456 | PR_kwDODunzps5o_fPU | 6,723 | get_dataset_default_config_name docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-07T17:09:29 | 2024-03-07T17:27:29 | 2024-03-07T17:21:20 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6723",
"merged_at": "2024-03-07T17:21:20",
"patch_url": "https://github.com/huggingface/datasets/pull/6723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6723"
} | fix https://github.com/huggingface/datasets/pull/6722 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6723/timeline | null | null | true | 0.1975 |
https://api.github.com/repos/huggingface/datasets/issues/6722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6722/comments | https://api.github.com/repos/huggingface/datasets/issues/6722/events | https://github.com/huggingface/datasets/pull/6722 | 2,174,332,127 | PR_kwDODunzps5o_ch0 | 6,722 | Add details in docstring | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-03-07T17:02:07 | 2024-03-07T17:21:10 | 2024-03-07T17:21:08 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6722",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6722"
} | see https://github.com/huggingface/datasets-server/pull/2554#discussion_r1516516867 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6722/timeline | null | null | true | 0.316944 |
https://api.github.com/repos/huggingface/datasets/issues/6721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6721/comments | https://api.github.com/repos/huggingface/datasets/issues/6721/events | https://github.com/huggingface/datasets/issues/6721 | 2,173,931,714 | I_kwDODunzps6Bk5DC | 6,721 | Hi, do you know how to load the dataset from local file now? | {
"avatar_url": "https://avatars.githubusercontent.com/u/50232044?v=4",
"events_url": "https://api.github.com/users/Gera001/events{/privacy}",
"followers_url": "https://api.github.com/users/Gera001/followers",
"following_url": "https://api.github.com/users/Gera001/following{/other_user}",
"gists_url": "https://api.github.com/users/Gera001/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Gera001",
"id": 50232044,
"login": "Gera001",
"node_id": "MDQ6VXNlcjUwMjMyMDQ0",
"organizations_url": "https://api.github.com/users/Gera001/orgs",
"received_events_url": "https://api.github.com/users/Gera001/received_events",
"repos_url": "https://api.github.com/users/Gera001/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Gera001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gera001/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Gera001",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-03-07T13:58:40 | 2024-03-31T08:09:25 | null | NONE | null | null | null | Hi, if I want to load a dataset from a local file, how do I specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
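For reference, a minimal sketch (the path and config name are hypothetical): with a local dataset directory or loading script, the configuration name is passed as the second positional argument to `load_dataset`:
```python
from datasets import load_dataset

# "./my_dataset" is a local directory containing a loading script;
# "my_config" is the builder configuration name it defines.
ds = load_dataset("./my_dataset", "my_config")
```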
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6721/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6720/comments | https://api.github.com/repos/huggingface/datasets/issues/6720/events | https://github.com/huggingface/datasets/issues/6720 | 2,173,603,459 | I_kwDODunzps6Bjo6D | 6,720 | TypeError: 'str' object is not callable | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-07T11:07:09 | 2024-03-08T07:34:53 | 2024-03-07T15:13:58 | CONTRIBUTOR | null | null | null | ### Describe the bug
I am trying to get the HPLT datasets on the Hub. Downloading/re-uploading would be too time- and resource-consuming, so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close, but for some reason I always get the error below. It happens during the clean-up phase, where the directory cannot be removed because it is not empty.
My only guess would be that this may have to do with zstandard.
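For reference, a minimal sketch of one way this exact error can arise (an assumption about the cause, not a confirmed diagnosis): `get_nested_type` ends up calling `schema()` on a plain string when a feature is declared as a bare `"string"` instead of `Value("string")` inside a `Sequence`:
```python
from datasets import Dataset, Features, Sequence, Value

bad = Features({"tokens": Sequence("string")})         # bare str, not a feature
good = Features({"tokens": Sequence(Value("string"))})

# Writing with the bad schema would raise: TypeError: 'str' object is not callable
# Dataset.from_dict({"tokens": [["a", "b"]]}, features=bad)
```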
```
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1744, in _prepare_split_single
writer.write(example, key)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 492, in write
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1753, in _prepare_split_single
num_examples, num_bytes = writer.finalize()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 588, in finalize
self.write_examples_on_file()
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 434, in write_examples_on_file
if self.schema
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 409, in schema
else (pa.schema(self._features.type) if self._features is not None else None)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1643, in type
return get_nested_type(self)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in get_nested_type
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1209, in <dictcomp>
{key: get_nested_type(schema[key]) for key in schema}
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1221, in get_nested_type
value_type = get_nested_type(schema.feature)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/features/features.py", line 1228, in get_nested_type
return schema()
TypeError: 'str' object is not callable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 959, in incomplete_dir
yield tmp_dir
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1005, in download_and_prepare
self._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _download_and_prepare
super()._download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1100, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1605, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 1762, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/pricie/vanroy/.config/JetBrains/PyCharm2023.3/scratches/scratch_5.py", line 4, in <module>
ds = load_dataset(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/load.py", line 2549, in load_dataset
builder_instance.download_and_prepare(
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 985, in download_and_prepare
with incomplete_dir(self._output_dir) as tmp_output_dir:
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/local/vanroy/dutch-instruction-datasets/.venv/lib/python3.10/site-packages/datasets/builder.py", line 966, in incomplete_dir
shutil.rmtree(tmp_dir)
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 731, in rmtree
onerror(os.rmdir, path, sys.exc_info())
File "/home/pricie/vanroy/.pyenv/versions/3.10.13/lib/python3.10/shutil.py", line 729, in rmtree
os.rmdir(path)
OSError: [Errno 39] Directory not empty: '/home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete'
```
Interestingly, though, this directory _does_ appear to be empty:
```shell
> cd /home/pricie/vanroy/.cache/huggingface/datasets/BramVanroy___hplt_mono_v1_2/ky/1.2.0/7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
> ls -lah
total 0
drwxr-xr-x. 1 vanroy vanroy 0 Mar 7 12:01 .
drwxr-xr-x. 1 vanroy vanroy 304 Mar 7 11:52 ..
> cd ..
> ls
7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47_builder.lock 7ab138629fe7e9e29fe93ce63d809d5ef9d963273b829f61ab538e012dc9cc47.incomplete
```
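For what it's worth, this `TypeError: 'str' object is not callable` inside `get_nested_type` is the classic symptom of a `Features` definition that uses a bare dtype string where a feature object is expected — a hypothetical sketch (the feature name is invented, not taken from the actual loader script):
```python
from datasets import Features, Sequence, Value

# Buggy pattern: a bare "string" inside Sequence ends up being *called*
# by get_nested_type() (line 1228 in the traceback above), raising
# TypeError: 'str' object is not callable.
# features = Features({"tokens": Sequence("string")})

# Correct pattern: wrap the dtype in Value so every leaf is a FeatureType.
features = Features({"tokens": Sequence(Value("string"))})
print(features)
```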
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset(
    "BramVanroy/hplt_mono_v1_2",
    "ky",
    trust_remote_code=True
)
```
### Expected behavior
No error.
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.34
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6720/timeline | null | completed | false | 4.113611 |
https://api.github.com/repos/huggingface/datasets/issues/6719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6719/comments | https://api.github.com/repos/huggingface/datasets/issues/6719/events | https://github.com/huggingface/datasets/issues/6719 | 2,169,585,727 | I_kwDODunzps6BUUA_ | 6,719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | {
"avatar_url": "https://avatars.githubusercontent.com/u/8136905?v=4",
"events_url": "https://api.github.com/users/ssharpe42/events{/privacy}",
"followers_url": "https://api.github.com/users/ssharpe42/followers",
"following_url": "https://api.github.com/users/ssharpe42/following{/other_user}",
"gists_url": "https://api.github.com/users/ssharpe42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ssharpe42",
"id": 8136905,
"login": "ssharpe42",
"node_id": "MDQ6VXNlcjgxMzY5MDU=",
"organizations_url": "https://api.github.com/users/ssharpe42/orgs",
"received_events_url": "https://api.github.com/users/ssharpe42/received_events",
"repos_url": "https://api.github.com/users/ssharpe42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ssharpe42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ssharpe42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ssharpe42",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 0 | 2024-03-05T15:55:13 | 2024-03-05T15:55:13 | null | NONE | null | null | null | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node`, but the alternative, `IterableDatasetShard` in `accelerate` and `transformers`, is very slow. When I filter after applying `split_dataset_by_node`, the resulting shards are not of equal size, because an unequal number of samples is filtered from each one.
The distributed process hangs when trying to accomplish this. Is there any way to resolve this, or is it impossible to implement? (One possible mitigation is sketched after the toy example below.)
### Steps to reproduce the bug
Here is a toy example of what I am trying to do that reproduces the behavior:
```python
# torchrun --nproc-per-node 2 file.py
import os
import pandas as pd
import torch
from accelerate import Accelerator
from datasets import Features, Value, load_dataset
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
accelerator = Accelerator(device_placement=True, dispatch_batches=False)
if accelerator.is_main_process:
    if not os.path.exists("scratch_data"):
        os.mkdir("scratch_data")
    n_shards = 4
    for i in range(n_shards):
        df = pd.DataFrame({"id": list(range(10 * i, 10 * (i + 1)))})
        df.to_parquet(f"scratch_data/shard_{i}.parquet")

world_size = accelerator.num_processes
local_rank = accelerator.process_index

def collate_fn(examples):
    input_ids = []
    for example in examples:
        input_ids.append(example["id"])
    return torch.LongTensor(input_ids)

dataset = load_dataset(
    "parquet", data_dir="scratch_data", split="train", streaming=True
)
dataset = (
    split_dataset_by_node(dataset, rank=local_rank, world_size=world_size)
    .filter(lambda x: x["id"] < 35)
    .shuffle(seed=42, buffer_size=100)
)

batch_size = 2
train_dataloader = DataLoader(
    dataset,
    batch_size=batch_size,
    collate_fn=collate_fn,
    num_workers=2
)

for x in train_dataloader:
    x = x.to(accelerator.device)
    print({"rank": local_rank, "id": x})
    y = accelerator.gather_for_metrics(x)
    if accelerator.is_main_process:
        print("gathered", y)
```
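For completeness, one mitigation I can think of (a sketch, not from the original report; it assumes a `torch.distributed` process group is initialized and simply drops the tail batches of the longer ranks):
```python
import torch
import torch.distributed as dist

def synced_iter(dataloader, device):
    """Yield batches only while *all* ranks still have data, so that
    collective ops never wait on an exhausted rank."""
    it = iter(dataloader)
    while True:
        try:
            batch = next(it)
            has_data = torch.tensor(1, device=device)
        except StopIteration:
            batch = None
            has_data = torch.tensor(0, device=device)
        dist.all_reduce(has_data, op=dist.ReduceOp.MIN)
        if has_data.item() == 0:
            break  # some rank ran out -> every rank stops together
        yield batch
```
Accelerate's `join_uneven_inputs` context manager might also help for DDP training, though I have not verified it for this case.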
### Expected behavior
Is there any way to continue training/inference on the GPUs that still have data left without waiting for the others? Or is it impossible to filter at all when using `split_dataset_by_node`?
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-5.10.209-198.812.amzn2.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.6.0 | null | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6719/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6718/comments | https://api.github.com/repos/huggingface/datasets/issues/6718/events | https://github.com/huggingface/datasets/pull/6718 | 2,169,468,488 | PR_kwDODunzps5ouwwE | 6,718 | Fix concurrent script loading with force_redownload | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-05T15:04:20 | 2024-03-07T14:05:53 | 2024-03-07T13:58:04 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6718",
"merged_at": "2024-03-07T13:58:04",
"patch_url": "https://github.com/huggingface/datasets/pull/6718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6718"
} | I added `lock_importable_file` in `get_dataset_builder_class` and `extend_dataset_builder_for_streaming` to fix the issue, and I also added a test.
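Presumably something along these lines (a `filelock`-based sketch, not the literal patch):
```python
from contextlib import contextmanager
from filelock import FileLock

@contextmanager
def lock_importable_file(importable_file: str):
    # Serialize concurrent imports/rewrites of the same dataset script,
    # e.g. when several processes use download_mode="force_redownload".
    with FileLock(importable_file + ".lock"):
        yield
```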
cc @clefourrier | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6718/timeline | null | null | true | 46.895556 |
https://api.github.com/repos/huggingface/datasets/issues/6717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6717/comments | https://api.github.com/repos/huggingface/datasets/issues/6717/events | https://github.com/huggingface/datasets/issues/6717 | 2,168,726,432 | I_kwDODunzps6BRCOg | 6,717 | `remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio | {
"avatar_url": "https://avatars.githubusercontent.com/u/53187038?v=4",
"events_url": "https://api.github.com/users/jhauret/events{/privacy}",
"followers_url": "https://api.github.com/users/jhauret/followers",
"following_url": "https://api.github.com/users/jhauret/following{/other_user}",
"gists_url": "https://api.github.com/users/jhauret/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jhauret",
"id": 53187038,
"login": "jhauret",
"node_id": "MDQ6VXNlcjUzMTg3MDM4",
"organizations_url": "https://api.github.com/users/jhauret/orgs",
"received_events_url": "https://api.github.com/users/jhauret/received_events",
"repos_url": "https://api.github.com/users/jhauret/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jhauret/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jhauret/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jhauret",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 2 | 2024-03-05T09:33:26 | 2024-08-14T17:54:20 | null | NONE | null | null | null | ### Describe the bug
When loading an HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and the channels are swapped or concatenated. (A possible workaround is sketched after the traceback below.)
### Steps to reproduce the bug
Minimal error code:
```python
from datasets import load_dataset
dataset_name = "zinc75/Vibravox_dummy"
config_name = "BWE_Larynx_microphone"
# If we use the "ASR_Larynx_microphone" subset, which contains mono-channel audio, no error is thrown.
dataset = load_dataset(
    path=dataset_name, name=config_name, split="train", streaming=True
)
dataset = dataset.remove_columns(["sensor_id"])
# dataset = dataset.map(lambda x:x, remove_columns=["sensor_id"])
# The commented version does not produce an error, but loses the dataset features.
sample = next(iter(dataset))
```
Error:
```
Traceback (most recent call last):
File "/home/julien/Bureau/github/vibravox/tmp.py", line 15, in <module>
sample = next(iter(dataset))
^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1392, in __iter__
example = _apply_feature_types_on_example(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/iterable_dataset.py", line 1080, in _apply_feature_types_on_example
encoded_example = features.encode_example(example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1889, in encode_example
return encode_nested_example(self, example)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in encode_nested_example
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1244, in <dictcomp>
{k: encode_nested_example(schema[k], obj.get(k), level=level + 1) for k in schema}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/features.py", line 1300, in encode_nested_example
return schema.encode_example(obj) if obj is not None else None
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/datasets/features/audio.py", line 98, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 343, in write
with SoundFile(file, 'w', samplerate, channels,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 658, in __init__
self._file = self._open(file, mode_int, closefd)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/julien/.pyenv/versions/vibravox/lib/python3.11/site-packages/soundfile.py", line 1216, in _open
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7fd795d24680>: Format not recognised.
Process finished with exit code 1
```
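Until this is fixed, one possible workaround (a sketch; it assumes the installed `datasets` version supports the `features` argument of `IterableDataset.map` and that `dataset.features` is populated for this script-based dataset) is to re-attach the typed features after the `map`-based column removal:
```python
from datasets import Features

# Keep every feature except the removed column, then hand the schema
# back to map() so the stream does not lose its typing.
kept = Features(
    {name: feat for name, feat in dataset.features.items() if name != "sensor_id"}
)
dataset = dataset.map(lambda x: x, remove_columns=["sensor_id"], features=kept)
sample = next(iter(dataset))
```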
### Expected behavior
I would expect this code to run without error.
### Environment info
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
- Python version: 3.11.0
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.1
- `fsspec` version: 2023.10.0 | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6717/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6717/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6716/comments | https://api.github.com/repos/huggingface/datasets/issues/6716/events | https://github.com/huggingface/datasets/issues/6716 | 2,168,706,558 | I_kwDODunzps6BQ9X- | 6,716 | Non-deterministic `Dataset.builder_name` value | {
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harupy",
"id": 17039389,
"login": "harupy",
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"repos_url": "https://api.github.com/users/harupy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harupy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2024-03-05T09:23:21 | 2024-03-19T07:58:14 | 2024-03-19T07:58:14 | NONE | null | null | null | ### Describe the bug
I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`:
```python
import datasets
for _ in range(100):
    ds = datasets.load_dataset("rotten_tomatoes", split="train")
    print(ds.builder_name)  # prints out "rotten_tomatoes" sometimes instead of "parquet"
```
Output:
```
...
parquet
parquet
parquet
rotten_tomatoes
parquet
parquet
parquet
...
```
Here's a reproduction using GitHub Actions:
https://github.com/mlflow/mlflow/actions/runs/8153247984/job/22284263613?pr=11329#step:12:241
One of our tests is flaky because `builder_name` is not deterministic.
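Until the root cause is settled, the flaky test can be de-flaked by accepting either observed value — a minimal sketch (the assertion shape is hypothetical, not the actual MLflow test):
```python
import datasets

ds = datasets.load_dataset("rotten_tomatoes", split="train")
# Accept both observed values until builder_name is deterministic.
assert ds.builder_name in {"parquet", "rotten_tomatoes"}
```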
### Steps to reproduce the bug
1. Run the code above.
### Expected behavior
Always prints out `parquet`?
### Environment info
```
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.18.0
- Platform: Linux-6.5.0-1015-azure-x86_64-with-glibc2.34
- Python version: 3.8.18
- `huggingface_hub` version: 0.21.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2024.2.0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/17039389?v=4",
"events_url": "https://api.github.com/users/harupy/events{/privacy}",
"followers_url": "https://api.github.com/users/harupy/followers",
"following_url": "https://api.github.com/users/harupy/following{/other_user}",
"gists_url": "https://api.github.com/users/harupy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harupy",
"id": 17039389,
"login": "harupy",
"node_id": "MDQ6VXNlcjE3MDM5Mzg5",
"organizations_url": "https://api.github.com/users/harupy/orgs",
"received_events_url": "https://api.github.com/users/harupy/received_events",
"repos_url": "https://api.github.com/users/harupy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harupy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harupy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harupy",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6716/timeline | null | completed | false | 334.581389 |
https://api.github.com/repos/huggingface/datasets/issues/6715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6715/comments | https://api.github.com/repos/huggingface/datasets/issues/6715/events | https://github.com/huggingface/datasets/pull/6715 | 2,167,747,095 | PR_kwDODunzps5oo36i | 6,715 | Fix sliced ConcatenationTable pickling with mixed schemas vertically | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-04T21:02:07 | 2024-03-05T11:23:05 | 2024-03-05T11:17:04 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6715",
"merged_at": "2024-03-05T11:17:04",
"patch_url": "https://github.com/huggingface/datasets/pull/6715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6715"
} | A sliced + pickled ConcatenationTable could end up with a different schema than the original one if the slice contains only blocks with a subset of the columns.
This can lead to issues when saving datasets built from a concatenation of datasets with mixed schemas.
Reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6715/timeline | null | null | true | 14.249167 |
https://api.github.com/repos/huggingface/datasets/issues/6714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6714/comments | https://api.github.com/repos/huggingface/datasets/issues/6714/events | https://github.com/huggingface/datasets/pull/6714 | 2,167,569,080 | PR_kwDODunzps5ooQd2 | 6,714 | Expand no-code dataset info with datasets-server info | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-04T19:18:10 | 2024-03-04T20:28:30 | 2024-03-04T20:22:15 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6714",
"merged_at": "2024-03-04T20:22:15",
"patch_url": "https://github.com/huggingface/datasets/pull/6714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6714"
} | E.g., to have info about a dataset's number of examples for more informative TQDM bars. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6714/timeline | null | null | true | 1.068056 |
https://api.github.com/repos/huggingface/datasets/issues/6713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6713/comments | https://api.github.com/repos/huggingface/datasets/issues/6713/events | https://github.com/huggingface/datasets/pull/6713 | 2,166,797,560 | PR_kwDODunzps5olmqh | 6,713 | Bump huggingface-hub lower version to 0.21.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2024-03-04T13:00:52 | 2024-03-04T18:14:03 | 2024-03-04T18:06:05 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6713",
"merged_at": "2024-03-04T18:06:05",
"patch_url": "https://github.com/huggingface/datasets/pull/6713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6713"
} | This should fix the version compatibility issue when using `huggingface_hub` < 0.21.2 and latest fsspec (>=2023.12.0).
See my comment: https://github.com/huggingface/datasets/pull/6687#issuecomment-1976493336
>> EDIT: the fix has been released in `huggingface_hub` 0.21.2 - I removed my commits that were using `huggingface_hub@main`
>
> Please note that people using `huggingface_hub` < 0.21.2 and latest `fsspec` will have issues when using `datasets`:
> - https://github.com/huggingface/lighteval/actions/runs/8139147047/job/22241658122?pr=86
> - https://github.com/huggingface/lighteval/pull/84
CC: @clefourrier
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6713/timeline | null | null | true | 5.086944 |
https://api.github.com/repos/huggingface/datasets/issues/6712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6712/comments | https://api.github.com/repos/huggingface/datasets/issues/6712/events | https://github.com/huggingface/datasets/pull/6712 | 2,166,588,373 | PR_kwDODunzps5ok4VF | 6,712 | fix CastError pickling | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-04T11:14:18 | 2024-03-04T20:23:47 | 2024-03-04T20:17:17 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6712.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6712",
"merged_at": "2024-03-04T20:17:17",
"patch_url": "https://github.com/huggingface/datasets/pull/6712.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6712"
} | reported in https://discuss.huggingface.co/t/datasetdict-save-to-disk-with-num-proc-1-seems-to-hang-with-error/75595 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6712/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6712/timeline | null | null | true | 9.049722 |
https://api.github.com/repos/huggingface/datasets/issues/6711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6711/comments | https://api.github.com/repos/huggingface/datasets/issues/6711/events | https://github.com/huggingface/datasets/pull/6711 | 2,165,507,817 | PR_kwDODunzps5ohM1a | 6,711 | 3x Faster Text Preprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/1983160?v=4",
"events_url": "https://api.github.com/users/ashvardanian/events{/privacy}",
"followers_url": "https://api.github.com/users/ashvardanian/followers",
"following_url": "https://api.github.com/users/ashvardanian/following{/other_user}",
"gists_url": "https://api.github.com/users/ashvardanian/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ashvardanian",
"id": 1983160,
"login": "ashvardanian",
"node_id": "MDQ6VXNlcjE5ODMxNjA=",
"organizations_url": "https://api.github.com/users/ashvardanian/orgs",
"received_events_url": "https://api.github.com/users/ashvardanian/received_events",
"repos_url": "https://api.github.com/users/ashvardanian/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ashvardanian/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ashvardanian/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ashvardanian",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-03-03T19:03:04 | 2024-06-26T06:28:14 | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6711",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6711"
} | I was preparing some datasets for AI training and noticed that `datasets` by HuggingFace uses the conventional `open` mechanism to read the file and split it into chunks. I thought it could be significantly accelerated, and [started with a benchmark](https://gist.github.com/ashvardanian/55c2052e9f78b05b8d614aa90cb12347):
```sh
$ pip install --upgrade --force-reinstall datasets
$ python benchmark_huggingface_datasets.py xlsum.csv
Generating train split: 1004598 examples [00:47, 21116.16 examples/s]
Time taken to load the dataset: 48.66838526725769 seconds
Time taken to chunk the dataset into parts of size 10000: 0.11466407775878906 seconds
Total time taken: 48.78304934501648 seconds
```
For the benchmarks I've used a [large CSV file with mixed UTF-8 content](https://github.com/ashvardanian/StringZilla/blob/main/CONTRIBUTING.md#benchmarking-datasets), most common in modern large-scale pre-training pipelines. I later patched the `datasets` library to use `stringzilla`, which resulted in significantly lower memory consumption and a 2.9x throughput improvement on the AWS `r7iz` instances. That's using slow SSDs mounted over the network. Performance on local SSDs on something like a DGX-H100 should be even higher:
```sh
$ pip install -e .
$ python benchmark_huggingface_datasets.py xlsum.csv
Generating train split: 1004598 examples [00:15, 64529.90 examples/s]
Time taken to load the dataset: 16.45028805732727 seconds
Time taken to chunk the dataset into parts of size 10000: 0.1291060447692871 seconds
Total time taken: 16.579394102096558 seconds
```
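For a sense of what the patched reader does, here is a rough sketch of memory-mapped splitting with StringZilla (the API names are recalled from the library's docs, so treat them as assumptions):
```python
from stringzilla import File, Str

# Memory-map the file instead of reading it into a Python string,
# then split it into zero-copy views.
text = Str(File("xlsum.csv"))
lines = text.splitlines()        # SIMD-accelerated newline search
paragraphs = text.split("\n\n")  # same idea for paragraph chunks
print(len(lines), len(paragraphs))
```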
I've already [pushed the patches to my fork](https://github.com/ashvardanian/datasets/tree/faster-text-parsers), and would love to contribute them to the upstream repository.
---
All the tests pass, but they leave a couple of important questions open. The default Python `open(..., newline=None)` uses universal newlines, where `\n`, `\r`, and `\r\n` are all converted to `\n` on the fly. I am not sure whether that is a good idea for a general-purpose dataset-preparation pipeline.
I can simulate the same behavior (which I don't do yet) for the `"line"` splitter. Adjusting it for the `"paragraph"` splitter would be harder. Should we stick exactly to the old Pythonic behavior, or stay closer to how C and other programming languages do it? | null | {
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6711/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6711/timeline | null | null | true | null |
https://api.github.com/repos/huggingface/datasets/issues/6710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6710/comments | https://api.github.com/repos/huggingface/datasets/issues/6710/events | https://github.com/huggingface/datasets/pull/6710 | 2,164,781,564 | PR_kwDODunzps5oe4ov | 6,710 | Persist IterableDataset epoch in workers | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-02T12:08:50 | 2024-07-01T17:51:25 | 2024-07-01T17:45:30 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6710",
"merged_at": "2024-07-01T17:45:30",
"patch_url": "https://github.com/huggingface/datasets/pull/6710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6710"
} | Use shared memory for the IterableDataset epoch.
This way calling `ds.set_epoch()` in the main process will update the epoch in the DataLoader workers as well.
This is especially useful because the epoch is used to compute the `effective_seed` used for shuffling.
I used torch's shared memory in case users want to send dataset copies without shared memory using pickle. I also find it easier to use than `multiprocessing.shared_memory`, which requires unlinking only in the main process, or `mp.Value`, which is not picklable.
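A minimal sketch of the mechanism (my reconstruction, not the exact patch):
```python
import torch

class SharedEpoch:
    def __init__(self):
        # One int64 in shared memory: DataLoader workers that receive a
        # copy of this object see updates made by the main process.
        self._epoch = torch.zeros(1, dtype=torch.int64).share_memory_()

    def set_epoch(self, epoch: int):
        self._epoch[0] = epoch

    @property
    def value(self) -> int:
        return int(self._epoch[0])
```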
close https://github.com/huggingface/datasets/issues/6673
cc @rwightman | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6710/timeline | null | null | true | 2,909.611111 |
https://api.github.com/repos/huggingface/datasets/issues/6709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6709/comments | https://api.github.com/repos/huggingface/datasets/issues/6709/events | https://github.com/huggingface/datasets/pull/6709 | 2,164,169,913 | PR_kwDODunzps5oc2Fg | 6,709 | set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-01T21:01:14 | 2024-03-01T21:07:35 | 2024-03-01T21:01:23 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6709.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6709",
"merged_at": "2024-03-01T21:01:23",
"patch_url": "https://github.com/huggingface/datasets/pull/6709.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6709"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6709/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6709/timeline | null | null | true | 0.0025 |
https://api.github.com/repos/huggingface/datasets/issues/6708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6708/comments | https://api.github.com/repos/huggingface/datasets/issues/6708/events | https://github.com/huggingface/datasets/pull/6708 | 2,164,158,579 | PR_kwDODunzps5oczmi | 6,708 | Release: 2.18.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-01T20:52:17 | 2024-03-01T21:03:01 | 2024-03-01T20:56:50 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6708.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6708",
"merged_at": "2024-03-01T20:56:50",
"patch_url": "https://github.com/huggingface/datasets/pull/6708.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6708"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6708/timeline | null | null | true | 0.075833 |
https://api.github.com/repos/huggingface/datasets/issues/6707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6707/comments | https://api.github.com/repos/huggingface/datasets/issues/6707/events | https://github.com/huggingface/datasets/pull/6707 | 2,163,799,868 | PR_kwDODunzps5obkhA | 6,707 | Silence ruff deprecation messages | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-01T16:52:29 | 2024-03-01T17:32:14 | 2024-03-01T17:25:46 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6707.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6707",
"merged_at": "2024-03-01T17:25:46",
"patch_url": "https://github.com/huggingface/datasets/pull/6707.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6707"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6707/timeline | null | null | true | 0.554722 |
https://api.github.com/repos/huggingface/datasets/issues/6706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6706/comments | https://api.github.com/repos/huggingface/datasets/issues/6706/events | https://github.com/huggingface/datasets/pull/6706 | 2,163,783,123 | PR_kwDODunzps5obgt- | 6,706 | Update ruff | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-01T16:44:58 | 2024-03-01T17:02:13 | 2024-03-01T16:52:17 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6706",
"merged_at": "2024-03-01T16:52:17",
"patch_url": "https://github.com/huggingface/datasets/pull/6706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6706"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6706/timeline | null | null | true | 0.121944 |
https://api.github.com/repos/huggingface/datasets/issues/6705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6705/comments | https://api.github.com/repos/huggingface/datasets/issues/6705/events | https://github.com/huggingface/datasets/pull/6705 | 2,163,768,640 | PR_kwDODunzps5obdbY | 6,705 | Fix data_files when passing data_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-03-01T16:38:53 | 2024-03-01T18:59:06 | 2024-03-01T18:52:49 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6705.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6705",
"merged_at": "2024-03-01T18:52:49",
"patch_url": "https://github.com/huggingface/datasets/pull/6705.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6705"
} | This code should not return empty data files
```python
from datasets import load_dataset_builder
revision = "3d406e70bc21c3ca92a9a229b4c6fc3ed88279fd"
b = load_dataset_builder("bigcode/the-stack-v2-dedup", data_dir="data/Dockerfile", revision=revision)
print(b.config.data_files)
```
Previously it would return no data files because it applied the repo-level YAML `data_files: data/**/train-*` pattern inside this directory.
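A rough illustration of the mismatch (using `fnmatch` as a stand-in for the real resolver; the shard name is hypothetical):
```python
from fnmatch import fnmatch

shard = "train-00000-of-00001.parquet"  # a path relative to data_dir="data/Dockerfile"
print(fnmatch(shard, "data/**/train-*"))  # False: the repo-anchored pattern cannot match
print(fnmatch(shard, "train-*"))          # True: the pattern re-scoped to data_dir matches
```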
cc @anton-l | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6705/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6705/timeline | null | null | true | 2.232222 |
https://api.github.com/repos/huggingface/datasets/issues/6704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6704/comments | https://api.github.com/repos/huggingface/datasets/issues/6704/events | https://github.com/huggingface/datasets/pull/6704 | 2,163,752,391 | PR_kwDODunzps5obZyj | 6,704 | Improve default patterns resolution | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 11 | 2024-03-01T16:31:25 | 2024-04-23T09:43:09 | 2024-03-15T15:22:03 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6704",
"merged_at": "2024-03-15T15:22:03",
"patch_url": "https://github.com/huggingface/datasets/pull/6704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6704"
} | Separate the default patterns that match directories from the ones matching files and ensure directories are checked first (reverts the change from https://github.com/huggingface/datasets/pull/6244, which merged these patterns). Also, ensure that the glob patterns do not overlap to avoid duplicates in the result.
Additionally, replace `get_fs_token_paths` with `url_to_fs` to avoid [unnecessary glob calls](https://github.com/fsspec/filesystem_spec/blob/14dce8ca78f7aa509a20edb263bff83a7760c24d/fsspec/core.py#L655-L656).
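Conceptually, the new resolution order could be sketched like this (pattern strings and function shape are illustrative assumptions, not the library's actual constants):
```python
DIR_PATTERNS = ["{split}/**"]          # e.g. files under a "train/" directory, checked first
FILE_PATTERNS = ["**/{split}[-.]*"]    # e.g. "data/train-00000-of-00001.parquet"

def resolve_split(split: str, resolver) -> list:
    """`resolver` maps a glob pattern to the list of matching repo files."""
    for pattern in DIR_PATTERNS + FILE_PATTERNS:
        files = resolver(pattern.format(split=split))
        if files:
            return files  # patterns are kept non-overlapping, so no deduplication is needed
    return []
```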
fix https://github.com/huggingface/datasets/issues/6259
fix https://github.com/huggingface/datasets/issues/6272 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6704/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6704/timeline | null | null | true | 334.843889 |
https://api.github.com/repos/huggingface/datasets/issues/6703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6703/comments | https://api.github.com/repos/huggingface/datasets/issues/6703/events | https://github.com/huggingface/datasets/issues/6703 | 2,163,250,590 | I_kwDODunzps6A8JWe | 6,703 | Unable to load dataset that was saved with `save_to_disk` | {
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casper-hansen",
"id": 27340033,
"login": "casper-hansen",
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casper-hansen",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 8 | 2024-03-01T11:59:56 | 2024-03-04T13:46:20 | 2024-03-04T13:46:20 | NONE | null | null | null | ### Describe the bug
I get the following error message: "You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead."
### Steps to reproduce the bug
1. Save a dataset with `save_to_disk`
2. Try to load it with `load_dataset`
### Expected behavior
I am able to load the dataset again with `load_dataset`, which most packages use rather than `load_from_disk`. I want a workaround that lets me create the same indexing that `push_to_hub` creates for you before using `save_to_disk` - how can that be achieved?
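One possible workaround today (a sketch assuming a single-split `Dataset` and placeholder paths): load with `load_from_disk`, then re-export to Parquet so `load_dataset` can read it.
```python
import os
from datasets import load_dataset, load_from_disk

ds = load_from_disk("path/to/saved_dataset")   # the loader that matches save_to_disk
os.makedirs("exported", exist_ok=True)
ds.to_parquet("exported/train.parquet")        # re-export in a format load_dataset reads
reloaded = load_dataset("parquet", data_files="exported/train.parquet")
```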
### Environment info
datasets 2.17.1, python 3.10 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27340033?v=4",
"events_url": "https://api.github.com/users/casper-hansen/events{/privacy}",
"followers_url": "https://api.github.com/users/casper-hansen/followers",
"following_url": "https://api.github.com/users/casper-hansen/following{/other_user}",
"gists_url": "https://api.github.com/users/casper-hansen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/casper-hansen",
"id": 27340033,
"login": "casper-hansen",
"node_id": "MDQ6VXNlcjI3MzQwMDMz",
"organizations_url": "https://api.github.com/users/casper-hansen/orgs",
"received_events_url": "https://api.github.com/users/casper-hansen/received_events",
"repos_url": "https://api.github.com/users/casper-hansen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/casper-hansen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/casper-hansen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/casper-hansen",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6703/timeline | null | completed | false | 73.773333 |
https://api.github.com/repos/huggingface/datasets/issues/6702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6702/comments | https://api.github.com/repos/huggingface/datasets/issues/6702/events | https://github.com/huggingface/datasets/issues/6702 | 2,161,938,484 | I_kwDODunzps6A3JA0 | 6,702 | Push samples to dataset on hub without having the dataset locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"events_url": "https://api.github.com/users/jbdel/events{/privacy}",
"followers_url": "https://api.github.com/users/jbdel/followers",
"following_url": "https://api.github.com/users/jbdel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbdel",
"id": 17854096,
"login": "jbdel",
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"organizations_url": "https://api.github.com/users/jbdel/orgs",
"received_events_url": "https://api.github.com/users/jbdel/received_events",
"repos_url": "https://api.github.com/users/jbdel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbdel",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 2 | 2024-02-29T19:17:12 | 2024-03-08T21:08:38 | 2024-03-08T21:08:38 | NONE | null | null | null | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote dataset
```
It would be great to have a way to push dataset_new to a remote dataset that respects the same schema. This way one would not have to do the following:
```
from datasets import load_dataset, concatenate_datasets
dataset = load_dataset('username/dataset_name', use_auth_token='your_hf_token_here')
# `Dataset` has no `concatenate` method; merge the splits with `concatenate_datasets`
updated_dataset = concatenate_datasets([dataset['train'], dataset_new])
updated_dataset.push_to_hub('username/dataset_name', use_auth_token='your_hf_token_here')
```
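For reference, something close to this can be approximated today by uploading the new samples as an extra shard via `huggingface_hub`. This sketch continues from the snippet above and assumes the remote dataset already stores Parquet shards with the same schema; the shard name is hypothetical:
```python
from huggingface_hub import HfApi

dataset_new.to_parquet("new_shard.parquet")    # serialize only the new rows
api = HfApi(token="your_hf_token_here")
api.upload_file(
    path_or_fileobj="new_shard.parquet",
    path_in_repo="data/train-00001.parquet",   # hypothetical shard name
    repo_id="username/dataset_name",
    repo_type="dataset",
)
```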
### Motivation
No need to download the dataset.
### Your contribution
Maybe this feature already exists; I didn't see it, though. I do not have the expertise to do this. | {
"avatar_url": "https://avatars.githubusercontent.com/u/17854096?v=4",
"events_url": "https://api.github.com/users/jbdel/events{/privacy}",
"followers_url": "https://api.github.com/users/jbdel/followers",
"following_url": "https://api.github.com/users/jbdel/following{/other_user}",
"gists_url": "https://api.github.com/users/jbdel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbdel",
"id": 17854096,
"login": "jbdel",
"node_id": "MDQ6VXNlcjE3ODU0MDk2",
"organizations_url": "https://api.github.com/users/jbdel/orgs",
"received_events_url": "https://api.github.com/users/jbdel/received_events",
"repos_url": "https://api.github.com/users/jbdel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbdel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbdel",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6702/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6702/timeline | null | completed | false | 193.857222 |
https://api.github.com/repos/huggingface/datasets/issues/6701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6701/comments | https://api.github.com/repos/huggingface/datasets/issues/6701/events | https://github.com/huggingface/datasets/pull/6701 | 2,161,448,017 | PR_kwDODunzps5oTfO_ | 6,701 | Base parquet batch_size on parquet row group size | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-29T14:53:01 | 2024-02-29T15:15:18 | 2024-02-29T15:08:55 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6701.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6701",
"merged_at": "2024-02-29T15:08:55",
"patch_url": "https://github.com/huggingface/datasets/pull/6701.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6701"
} | This allows streaming datasets like [Major-TOM/Core-S2L2A](https://huggingface.co/datasets/Major-TOM/Core-S2L2A), which have row groups with few rows (one row is ~10MB). Previously the cold start would take a long time and could OOM because many row groups were downloaded before the first example was yielded.
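Roughly, the idea looks like this (a sketch with a hypothetical file name, not the exact implementation):
```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("shard.parquet")
batch_size = max(1, pf.metadata.row_group(0).num_rows)   # base batches on the row group size
for record_batch in pf.iter_batches(batch_size=batch_size):
    ...  # yield examples from `record_batch` as soon as a single row group is downloaded
```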
I tried it on OpenOrca and imagenet-hard, and it doesn't affect overall throughput.
Even if the overall throughput doesn't change for datasets like imagenet-hard with big rows, note that this does create shorter and more frequent pauses to download the next row group. I find this acceptable, because previously the pauses were less frequent but very long (multiple row groups were downloaded at a time). | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6701/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6701/timeline | null | null | true | 0.265 |
https://api.github.com/repos/huggingface/datasets/issues/6700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6700/comments | https://api.github.com/repos/huggingface/datasets/issues/6700/events | https://github.com/huggingface/datasets/issues/6700 | 2,158,871,038 | I_kwDODunzps6ArcH- | 6,700 | remove_columns is not in-place but the doc shows it is in-place | {
"avatar_url": "https://avatars.githubusercontent.com/u/32047804?v=4",
"events_url": "https://api.github.com/users/shelfofclub/events{/privacy}",
"followers_url": "https://api.github.com/users/shelfofclub/followers",
"following_url": "https://api.github.com/users/shelfofclub/following{/other_user}",
"gists_url": "https://api.github.com/users/shelfofclub/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shelfofclub",
"id": 32047804,
"login": "shelfofclub",
"node_id": "MDQ6VXNlcjMyMDQ3ODA0",
"organizations_url": "https://api.github.com/users/shelfofclub/orgs",
"received_events_url": "https://api.github.com/users/shelfofclub/received_events",
"repos_url": "https://api.github.com/users/shelfofclub/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shelfofclub/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shelfofclub/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shelfofclub",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-02-28T12:36:22 | 2024-04-02T17:15:28 | 2024-04-02T17:15:28 | NONE | null | null | null | ### Describe the bug
The docs for `datasets` v2.17.0/v2.17.1 state that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Steps to reproduce the bug
https://github.com/huggingface/transformers/blob/a0857740c0e6127485c11476650314df3accc2b6/examples/pytorch/text-classification/run_classification.py#L421
### Expected behavior
Actually remove the columns.
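A minimal, self-contained example of the actual behavior: `remove_columns` returns a new dataset, so the result must be reassigned.
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds.remove_columns(["label"])       # returns a new dataset; `ds` itself is unchanged
ds = ds.remove_columns(["label"])  # reassignment is needed to actually drop the column
print(ds.column_names)             # ['text']
```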
### Environment info
1. datasets v2.17.0
2. transformers v4.38.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ArthurZucker",
"id": 48595927,
"login": "ArthurZucker",
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ArthurZucker",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6700/timeline | null | completed | false | 820.651667 |
https://api.github.com/repos/huggingface/datasets/issues/6699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6699/comments | https://api.github.com/repos/huggingface/datasets/issues/6699/events | https://github.com/huggingface/datasets/issues/6699 | 2,158,152,341 | I_kwDODunzps6AosqV | 6,699 | `Dataset` unexpected changed dict data and may cause error | {
"avatar_url": "https://avatars.githubusercontent.com/u/16933298?v=4",
"events_url": "https://api.github.com/users/scruel/events{/privacy}",
"followers_url": "https://api.github.com/users/scruel/followers",
"following_url": "https://api.github.com/users/scruel/following{/other_user}",
"gists_url": "https://api.github.com/users/scruel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/scruel",
"id": 16933298,
"login": "scruel",
"node_id": "MDQ6VXNlcjE2OTMzMjk4",
"organizations_url": "https://api.github.com/users/scruel/orgs",
"received_events_url": "https://api.github.com/users/scruel/received_events",
"repos_url": "https://api.github.com/users/scruel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/scruel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/scruel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/scruel",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 2 | 2024-02-28T05:30:10 | 2024-02-28T19:14:36 | null | NONE | null | null | null | ### Describe the bug
Keys with `None` values unexpectedly appear in the parsed JSON dict.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
from datasets import Dataset

dataset = Dataset.from_json('test.jsonl')
print(dataset[0])
```
Result:
```
{'id': 0, 'indexs': {'-1': [...], '-2': None, '-3': None, '-4': None, '-5': None, '-6': None, '-7': None, '-8': None, '-9': None, ...}}
```
Those keys with `None` values unexpectedly appear in the dict.
### Expected behavior
Result should be
```
{'id': 0, 'indexs': {'-1': [0, 10]}}
```
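A possible workaround, assuming the cause is schema unification (Arrow struct columns share one schema, so keys seen anywhere in the column show up in every row, with `None` where absent): store variable-key dicts as JSON strings instead.
```python
import json
from datasets import Dataset

ds = Dataset.from_list([{"id": 0, "indexs": json.dumps({"-1": [0, 10]})}])
print(json.loads(ds[0]["indexs"]))  # {'-1': [0, 10]}
```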
### Environment info
- `datasets` version: 2.16.1
- Platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
- Python version: 3.11.6
- `huggingface_hub` version: 0.20.2
- PyArrow version: 14.0.2
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6699/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6698/comments | https://api.github.com/repos/huggingface/datasets/issues/6698/events | https://github.com/huggingface/datasets/pull/6698 | 2,157,752,392 | PR_kwDODunzps5oG6Xt | 6,698 | Faster `xlistdir` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2024-02-27T22:55:08 | 2024-02-27T23:44:49 | 2024-02-27T23:38:14 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6698",
"merged_at": "2024-02-27T23:38:14",
"patch_url": "https://github.com/huggingface/datasets/pull/6698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6698"
} | Pass `detail=False` to the `fsspec` `listdir` to avoid unnecessarily fetching expensive metadata about the paths. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6698/timeline | null | null | true | 0.718333 |
https://api.github.com/repos/huggingface/datasets/issues/6697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6697/comments | https://api.github.com/repos/huggingface/datasets/issues/6697/events | https://github.com/huggingface/datasets/issues/6697 | 2,157,322,224 | I_kwDODunzps6Alh_w | 6,697 | Unable to Load Dataset in Kaggle | {
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"events_url": "https://api.github.com/users/vrunm/events{/privacy}",
"followers_url": "https://api.github.com/users/vrunm/followers",
"following_url": "https://api.github.com/users/vrunm/following{/other_user}",
"gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrunm",
"id": 97465624,
"login": "vrunm",
"node_id": "U_kgDOBc81GA",
"organizations_url": "https://api.github.com/users/vrunm/orgs",
"received_events_url": "https://api.github.com/users/vrunm/received_events",
"repos_url": "https://api.github.com/users/vrunm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrunm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrunm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2024-02-27T18:19:34 | 2024-02-29T17:32:42 | 2024-02-29T17:32:41 | NONE | null | null | null | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook.
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 3
1 from datasets import load_dataset
----> 3 dataset = load_dataset("llm-blender/mix-instruct")
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1664, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1661 ignore_verifications = ignore_verifications or save_infos
1663 # Create a dataset builder
-> 1664 builder_instance = load_dataset_builder(
1665 path=path,
1666 name=name,
1667 data_dir=data_dir,
1668 data_files=data_files,
1669 cache_dir=cache_dir,
1670 features=features,
1671 download_config=download_config,
1672 download_mode=download_mode,
1673 revision=revision,
1674 use_auth_token=use_auth_token,
1675 **config_kwargs,
1676 )
1678 # Return iterable dataset in case of streaming
1679 if streaming:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1490, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1488 download_config = download_config.copy() if download_config else DownloadConfig()
1489 download_config.use_auth_token = use_auth_token
-> 1490 dataset_module = dataset_module_factory(
1491 path,
1492 revision=revision,
1493 download_config=download_config,
1494 download_mode=download_mode,
1495 data_dir=data_dir,
1496 data_files=data_files,
1497 )
1499 # Get dataset builder class from the processing script
1500 builder_cls = import_main_class(dataset_module.module_path)
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1242, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1237 if isinstance(e1, FileNotFoundError):
1238 raise FileNotFoundError(
1239 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1240 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1241 ) from None
-> 1242 raise e1 from None
1243 else:
1244 raise FileNotFoundError(
1245 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory."
1246 )
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:1230, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1215 return HubDatasetModuleFactoryWithScript(
1216 path,
1217 revision=revision,
(...)
1220 dynamic_modules_path=dynamic_modules_path,
1221 ).get_module()
1222 else:
1223 return HubDatasetModuleFactoryWithoutScript(
1224 path,
1225 revision=revision,
1226 data_dir=data_dir,
1227 data_files=data_files,
1228 download_config=download_config,
1229 download_mode=download_mode,
-> 1230 ).get_module()
1231 except Exception as e1: # noqa: all the attempts failed, before raising the error we should check if the module is already cached.
1232 try:
File /opt/conda/lib/python3.10/site-packages/datasets/load.py:846, in HubDatasetModuleFactoryWithoutScript.get_module(self)
836 token = self.download_config.use_auth_token
837 hfh_dataset_info = HfApi(config.HF_ENDPOINT).dataset_info(
838 self.name,
839 revision=self.revision,
840 token=token,
841 timeout=100.0,
842 )
843 patterns = (
844 sanitize_patterns(self.data_files)
845 if self.data_files is not None
--> 846 else get_patterns_in_dataset_repository(hfh_dataset_info)
847 )
848 data_files = DataFilesDict.from_hf_repo(
849 patterns,
850 dataset_info=hfh_dataset_info,
851 allowed_extensions=ALL_ALLOWED_EXTENSIONS,
852 )
853 infered_module_names = {
854 key: infer_module_for_data_files(data_files_list, use_auth_token=self.download_config.use_auth_token)
855 for key, data_files_list in data_files.items()
856 }
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:471, in get_patterns_in_dataset_repository(dataset_info)
469 resolver = partial(_resolve_single_pattern_in_dataset_repository, dataset_info)
470 try:
--> 471 return _get_data_files_patterns(resolver)
472 except FileNotFoundError:
473 raise FileNotFoundError(
474 f"The dataset repository at '{dataset_info.id}' doesn't contain any data file."
475 ) from None
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:99, in _get_data_files_patterns(pattern_resolver)
97 try:
98 for pattern in patterns:
---> 99 data_files = pattern_resolver(pattern)
100 if len(data_files) > 0:
101 non_empty_splits.append(split)
File /opt/conda/lib/python3.10/site-packages/datasets/data_files.py:303, in _resolve_single_pattern_in_dataset_repository(dataset_info, pattern, allowed_extensions)
301 data_files_ignore = FILES_TO_IGNORE
302 fs = HfFileSystem(repo_info=dataset_info)
--> 303 glob_iter = [PurePath(filepath) for filepath in fs.glob(PurePath(pattern).as_posix()) if fs.isfile(filepath)]
304 matched_paths = [
305 filepath
306 for filepath in glob_iter
307 if filepath.name not in data_files_ignore and not filepath.name.startswith(".")
308 ]
309 if allowed_extensions is not None:
File /opt/conda/lib/python3.10/site-packages/fsspec/spec.py:606, in AbstractFileSystem.glob(self, path, maxdepth, **kwargs)
602 depth = None
604 allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs)
--> 606 pattern = glob_translate(path + ("/" if ends_with_sep else ""))
607 pattern = re.compile(pattern)
609 out = {
610 p: info
611 for p, info in sorted(allpaths.items())
(...)
618 )
619 }
File /opt/conda/lib/python3.10/site-packages/fsspec/utils.py:734, in glob_translate(pat)
732 continue
733 elif "**" in part:
--> 734 raise ValueError(
735 "Invalid pattern: '**' can only be an entire path component"
736 )
737 if part:
738 results.extend(_translate(part, f"{not_sep}*", not_sep))
ValueError: Invalid pattern: '**' can only be an entire path component
```
This error appears when loading the dataset as shown below.
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("llm-blender/mix-instruct")
```
### Expected behavior
The dataset should load with the desired split.
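A possible workaround (the cause is inferred from the traceback, not confirmed): the notebook may be importing an older preinstalled `datasets` together with a newer `fsspec` whose glob validation changed; upgrading `datasets` and restarting the kernel usually helps.
```python
import importlib.metadata as importlib_metadata

# check which versions are actually being imported in the running kernel
print(importlib_metadata.version("datasets"), importlib_metadata.version("fsspec"))
# in a notebook cell:  %pip install -U datasets   (then restart the kernel)
```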
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-5.15.133+-x86_64-with-glibc2.31
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| {
"avatar_url": "https://avatars.githubusercontent.com/u/97465624?v=4",
"events_url": "https://api.github.com/users/vrunm/events{/privacy}",
"followers_url": "https://api.github.com/users/vrunm/followers",
"following_url": "https://api.github.com/users/vrunm/following{/other_user}",
"gists_url": "https://api.github.com/users/vrunm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrunm",
"id": 97465624,
"login": "vrunm",
"node_id": "U_kgDOBc81GA",
"organizations_url": "https://api.github.com/users/vrunm/orgs",
"received_events_url": "https://api.github.com/users/vrunm/received_events",
"repos_url": "https://api.github.com/users/vrunm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrunm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrunm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrunm",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6697/timeline | null | completed | false | 47.218611 |
https://api.github.com/repos/huggingface/datasets/issues/6696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6696/comments | https://api.github.com/repos/huggingface/datasets/issues/6696/events | https://github.com/huggingface/datasets/pull/6696 | 2,154,161,357 | PR_kwDODunzps5n6ipH | 6,696 | Make JSON builder support an array of strings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-26T13:18:31 | 2024-02-28T06:45:23 | 2024-02-28T06:39:12 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6696",
"merged_at": "2024-02-28T06:39:12",
"patch_url": "https://github.com/huggingface/datasets/pull/6696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6696"
} | Support loading JSON files that contain an array of strings.
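For illustration (file name and resulting column name are assumptions, based on the linked issue):
```python
from datasets import load_dataset

# contents of data.json (hypothetical): ["first text", "second text"]
ds = load_dataset("json", data_files="data.json", split="train")
print(ds[0])  # each array element becomes one example, e.g. {'text': 'first text'}
```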
Fix #6695. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6696/timeline | null | null | true | 41.344722 |
https://api.github.com/repos/huggingface/datasets/issues/6695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6695/comments | https://api.github.com/repos/huggingface/datasets/issues/6695/events | https://github.com/huggingface/datasets/issues/6695 | 2,154,075,509 | I_kwDODunzps6AZJV1 | 6,695 | Support JSON file with an array of strings | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 1 | 2024-02-26T12:35:11 | 2024-03-08T14:16:25 | 2024-02-28T06:39:13 | MEMBER | null | null | null | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6695/timeline | null | completed | false | 42.067222 |
https://api.github.com/repos/huggingface/datasets/issues/6694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6694/comments | https://api.github.com/repos/huggingface/datasets/issues/6694/events | https://github.com/huggingface/datasets/pull/6694 | 2,153,086,984 | PR_kwDODunzps5n23Jz | 6,694 | __add__ for Dataset, IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/79557937?v=4",
"events_url": "https://api.github.com/users/oh-gnues-iohc/events{/privacy}",
"followers_url": "https://api.github.com/users/oh-gnues-iohc/followers",
"following_url": "https://api.github.com/users/oh-gnues-iohc/following{/other_user}",
"gists_url": "https://api.github.com/users/oh-gnues-iohc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/oh-gnues-iohc",
"id": 79557937,
"login": "oh-gnues-iohc",
"node_id": "MDQ6VXNlcjc5NTU3OTM3",
"organizations_url": "https://api.github.com/users/oh-gnues-iohc/orgs",
"received_events_url": "https://api.github.com/users/oh-gnues-iohc/received_events",
"repos_url": "https://api.github.com/users/oh-gnues-iohc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/oh-gnues-iohc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oh-gnues-iohc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/oh-gnues-iohc",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-02-26T01:46:55 | 2024-02-29T16:52:58 | null | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6694.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6694",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6694.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6694"
} | It's too cumbersome to write `from datasets import concatenate_datasets` every time we perform a dataset merging operation, so we have added a simple `__add__` magic method to each class that calls `concatenate_datasets`.
```python
from datasets import load_dataset
bookcorpus = load_dataset("bookcorpus", split="train")
wiki = load_dataset("wikimedia/wikipedia", "20231101.ab", split="train")
wiki = wiki.remove_columns([col for col in wiki.column_names if col != "text"]) # only keep the 'text' column
bookcorpus + wiki
#bookcorpus:
#Dataset({
#    features: ['text'],
#    num_rows: 74004228
#})
#wiki:
#Dataset({
#    features: ['text'],
#    num_rows: 6152
#})
#bookcorpus + wiki:
#Dataset({
#    features: ['text'],
#    num_rows: 74010380
#})
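
# For reference, roughly equivalent behavior can already be emulated with a
# monkey-patch (a sketch, not necessarily this PR's implementation):
from datasets import Dataset, concatenate_datasets
Dataset.__add__ = lambda self, other: concatenate_datasets([self, other])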
``` | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6694/timeline | null | null | true | null |
https://api.github.com/repos/huggingface/datasets/issues/6693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6693/comments | https://api.github.com/repos/huggingface/datasets/issues/6693/events | https://github.com/huggingface/datasets/pull/6693 | 2,152,887,712 | PR_kwDODunzps5n2ObO | 6,693 | Update the print message for chunked_dataset in process.mdx | {
"avatar_url": "https://avatars.githubusercontent.com/u/142939562?v=4",
"events_url": "https://api.github.com/users/gzbfgjf2/events{/privacy}",
"followers_url": "https://api.github.com/users/gzbfgjf2/followers",
"following_url": "https://api.github.com/users/gzbfgjf2/following{/other_user}",
"gists_url": "https://api.github.com/users/gzbfgjf2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gzbfgjf2",
"id": 142939562,
"login": "gzbfgjf2",
"node_id": "U_kgDOCIUVqg",
"organizations_url": "https://api.github.com/users/gzbfgjf2/orgs",
"received_events_url": "https://api.github.com/users/gzbfgjf2/received_events",
"repos_url": "https://api.github.com/users/gzbfgjf2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gzbfgjf2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gzbfgjf2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gzbfgjf2",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-25T18:37:07 | 2024-02-25T19:57:12 | 2024-02-25T19:51:02 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6693",
"merged_at": "2024-02-25T19:51:02",
"patch_url": "https://github.com/huggingface/datasets/pull/6693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6693"
} | Update documentation to align with `Dataset.__repr__` change after #423 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6693/timeline | null | null | true | 1.231944 |
https://api.github.com/repos/huggingface/datasets/issues/6692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6692/comments | https://api.github.com/repos/huggingface/datasets/issues/6692/events | https://github.com/huggingface/datasets/pull/6692 | 2,152,270,987 | PR_kwDODunzps5n0XN1 | 6,692 | Enhancement: Enable loading TSV files in load_dataset() | {
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2024-02-24T11:38:59 | 2024-02-26T15:33:50 | 2024-02-26T07:14:03 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6692",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6692"
} | Fix #6691 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6692/timeline | null | null | true | 43.584444 |
https://api.github.com/repos/huggingface/datasets/issues/6691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6691/comments | https://api.github.com/repos/huggingface/datasets/issues/6691/events | https://github.com/huggingface/datasets/issues/6691 | 2,152,134,041 | I_kwDODunzps6ARvWZ | 6,691 | load_dataset() does not support tsv | {
"avatar_url": "https://avatars.githubusercontent.com/u/26873178?v=4",
"events_url": "https://api.github.com/users/dipsivenkatesh/events{/privacy}",
"followers_url": "https://api.github.com/users/dipsivenkatesh/followers",
"following_url": "https://api.github.com/users/dipsivenkatesh/following{/other_user}",
"gists_url": "https://api.github.com/users/dipsivenkatesh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dipsivenkatesh",
"id": 26873178,
"login": "dipsivenkatesh",
"node_id": "MDQ6VXNlcjI2ODczMTc4",
"organizations_url": "https://api.github.com/users/dipsivenkatesh/orgs",
"received_events_url": "https://api.github.com/users/dipsivenkatesh/received_events",
"repos_url": "https://api.github.com/users/dipsivenkatesh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dipsivenkatesh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dipsivenkatesh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dipsivenkatesh",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/77767961?v=4",
"events_url": "https://api.github.com/users/harsh1504660/events{/privacy}",
"followers_url": "https://api.github.com/users/harsh1504660/followers",
"following_url": "https://api.github.com/users/harsh1504660/following{/other_user}",
"gists_url": "https://api.github.com/users/harsh1504660/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/harsh1504660",
"id": 77767961,
"login": "harsh1504660",
"node_id": "MDQ6VXNlcjc3NzY3OTYx",
"organizations_url": "https://api.github.com/users/harsh1504660/orgs",
"received_events_url": "https://api.github.com/users/harsh1504660/received_events",
"repos_url": "https://api.github.com/users/harsh1504660/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/harsh1504660/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/harsh1504660/subscriptions",
"type": "User",
"url": "https://api.github.com/users/harsh1504660",
"user_view_type": "public"
}
] | null | 2 | 2024-02-24T05:56:04 | 2024-02-26T07:15:07 | 2024-02-26T07:09:35 | NONE | null | null | null | ### Feature request
The load_dataset() function for local files supports file types like csv, json, etc., but not tsv (tab-separated values).
### Motivation
Files of type tsv cannot easily be loaded; they have to be converted to another type like csv and then loaded.
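As a stopgap, the existing csv builder can already parse tab-separated files when told about the delimiter, since it forwards parsing options such as `sep` to pandas. A minimal sketch (the file path is hypothetical):
```python
from datasets import load_dataset

# Read a TSV file with the csv builder by overriding the separator.
ds = load_dataset("csv", data_files="data/train.tsv", sep="\t")
```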
### Your contribution
I can try raising a PR with a little help; I went through the code but didn't fully understand it. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6691/timeline | null | completed | false | 49.225278 |
https://api.github.com/repos/huggingface/datasets/issues/6690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6690/comments | https://api.github.com/repos/huggingface/datasets/issues/6690/events | https://github.com/huggingface/datasets/issues/6690 | 2,150,800,065 | I_kwDODunzps6AMprB | 6,690 | Add function to convert a script-dataset to Parquet | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 0 | 2024-02-23T10:28:20 | 2024-04-12T15:27:05 | 2024-04-12T15:27:05 | MEMBER | null | null | null | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6690/timeline | null | completed | false | 1,180.979167 |
https://api.github.com/repos/huggingface/datasets/issues/6689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6689/comments | https://api.github.com/repos/huggingface/datasets/issues/6689/events | https://github.com/huggingface/datasets/issues/6689 | 2,149,581,147 | I_kwDODunzps6AIAFb | 6,689 | .load_dataset() method defaults to zstandard | {
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ElleLeonne",
"id": 87243032,
"login": "ElleLeonne",
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ElleLeonne",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2024-02-22T17:39:27 | 2024-03-07T14:54:16 | 2024-03-07T14:54:15 | NONE | null | null | null | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior: not only is zstandard not a dependency of the huggingface package (so dataset loading is interrupted while it asks you to install it), but it also happens on datasets uploaded in json format, meaning the dataset loader attempts to convert the data to a zstandard-compatible format and THEN tries to unpack it.
My 4 TB drive runs out of room when using zstandard on SlimPajama, whereas it loads fine in 1.5 TB when using json. However, I lack an understanding of the "magic numbers" system used to select the unpacking algorithm, so I can't push a change myself.
Commenting out the line below in `datasets/utils/extract.py` fixes the issue and lets SlimPajama extract using a reasonable amount of storage; however, it completely disables zstandard, which is probably undesirable. Someone with an understanding of the "magic numbers" system should take a pass over this issue.
```
class Extractor:
# Put zip file to the last, b/c it is possible wrongly detected as zip (I guess it means: as tar or gzip)
extractors: Dict[str, Type[BaseExtractor]] = {
"tar": TarExtractor,
"gzip": GzipExtractor,
"zip": ZipExtractor,
"xz": XzExtractor,
#"zstd": ZstdExtractor, # This line needs to go, in order for datasets to work w/o non-dependent packages
"rar": RarExtractor,
"bz2": Bzip2Extractor,
"7z": SevenZipExtractor, # <Added version="2.4.0"/>
"lz4": Lz4Extractor, # <Added version="2.4.0"/>
}
```
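A less drastic, hedged sketch of the same idea, written as a user-side patch rather than the library's actual behavior (the `extractors` attribute name comes from the snippet above):
```python
import importlib.util

from datasets.utils.extract import Extractor  # module quoted above

# If the optional 'zstandard' package is not installed, drop the zstd
# entry so magic-number detection can never select an extractor that
# would fail on (or prompt for) the missing dependency.
if importlib.util.find_spec("zstandard") is None:
    Extractor.extractors.pop("zstd", None)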
### Steps to reproduce the bug
```python
from datasets import load_dataset

load_dataset(path="cerebras/SlimPajama-627B")
```
This alone should trigger the error on any system that does not have zstandard installed via pip.
### Expected behavior
The loader should check whether zstandard is installed before defaulting to it for this repository (which is encoded in json format, not zstandard). Additionally, extracting with zstandard should not use more than 3x the space that other extraction mechanisms require.
### Environment info
- `datasets` version: 2.17.1
- Platform: Linux-6.5.0-18-generic-x86_64-with-glibc2.35
- Python version: 3.12.0
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/87243032?v=4",
"events_url": "https://api.github.com/users/ElleLeonne/events{/privacy}",
"followers_url": "https://api.github.com/users/ElleLeonne/followers",
"following_url": "https://api.github.com/users/ElleLeonne/following{/other_user}",
"gists_url": "https://api.github.com/users/ElleLeonne/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ElleLeonne",
"id": 87243032,
"login": "ElleLeonne",
"node_id": "MDQ6VXNlcjg3MjQzMDMy",
"organizations_url": "https://api.github.com/users/ElleLeonne/orgs",
"received_events_url": "https://api.github.com/users/ElleLeonne/received_events",
"repos_url": "https://api.github.com/users/ElleLeonne/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ElleLeonne/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ElleLeonne/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ElleLeonne",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6689/timeline | null | completed | false | 333.246667 |
https://api.github.com/repos/huggingface/datasets/issues/6688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6688/comments | https://api.github.com/repos/huggingface/datasets/issues/6688/events | https://github.com/huggingface/datasets/issues/6688 | 2,148,609,859 | I_kwDODunzps6AES9D | 6,688 | Tensor type (e.g. from `return_tensors`) ignored in map | {
"avatar_url": "https://avatars.githubusercontent.com/u/11166137?v=4",
"events_url": "https://api.github.com/users/srossi93/events{/privacy}",
"followers_url": "https://api.github.com/users/srossi93/followers",
"following_url": "https://api.github.com/users/srossi93/following{/other_user}",
"gists_url": "https://api.github.com/users/srossi93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/srossi93",
"id": 11166137,
"login": "srossi93",
"node_id": "MDQ6VXNlcjExMTY2MTM3",
"organizations_url": "https://api.github.com/users/srossi93/orgs",
"received_events_url": "https://api.github.com/users/srossi93/received_events",
"repos_url": "https://api.github.com/users/srossi93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/srossi93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/srossi93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/srossi93",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-02-22T09:27:57 | 2024-02-22T15:56:21 | null | NONE | null | null | null | ### Describe the bug
I don't know if it is a bug or expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping to tokenize text with a transformers tokenizer always returns lists, ignoring the `return_tensors` argument.
If this is expected behavior (e.g., for caching/Arrow compatibility), it should be clearly documented. For example, the current documentation (see [here](https://huggingface.co/docs/datasets/v2.17.1/en/nlp_process#map)) clearly states to "set `return_tensors="np"` when you tokenize your text" to get NumPy arrays.
### Steps to reproduce the bug
```py
# %%
import datasets
import numpy as np
import tensorflow as tf
import torch
from transformers import AutoTokenizer
# %%
ds = datasets.load_dataset("cnn_dailymail", "1.0.0", split="train[:1%]")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# %%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
print(f"********** no map, return_tensors={return_tensors} **********")
_ds = tokenizer(ds["article"], return_tensors=return_tensors, truncation=True, padding=True)
print('Type <input_ids>:', type(_ds["input_ids"]))
# %%
for return_tensors in [None, "np", "pt", "tf", "jax"]:
print(f"********** map, return_tensors={return_tensors} **********")
_ds = ds.map(
lambda examples: tokenizer(examples["article"], return_tensors=return_tensors, truncation=True, padding=True),
batched=True,
remove_columns=["article"],
)
print('Type <input_ids>:', type(_ds[0]["input_ids"]))
```
### Expected behavior
The output from the script above. I would expect the second half to be the same.
```
********** no map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** no map, return_tensors=np **********
Type <input_ids>: <class 'numpy.ndarray'>
********** no map, return_tensors=pt **********
Type <input_ids>: <class 'torch.Tensor'>
********** no map, return_tensors=tf **********
Type <input_ids>: <class 'tensorflow.python.framework.ops.EagerTensor'>
********** no map, return_tensors=jax **********
Type <input_ids>: <class 'jaxlib.xla_extension.ArrayImpl'>
********** map, return_tensors=None **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=np **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=pt **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=tf **********
Type <input_ids>: <class 'list'>
********** map, return_tensors=jax **********
Type <input_ids>: <class 'list'>
```
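For reference, a hedged sketch of how tensors can still be obtained after `map`: the formatting API applies the tensor type at access time (standalone toy data, not the script above):
```python
import datasets

ds = datasets.Dataset.from_dict({"input_ids": [[1, 2, 3], [4, 5, 6]]})
print(type(ds[0]["input_ids"]))  # <class 'list'> — Arrow-backed storage
ds = ds.with_format("numpy")     # or "torch", "tensorflow", "jax"
print(type(ds[0]["input_ids"]))  # <class 'numpy.ndarray'>
```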
### Environment info
- `datasets` version: 2.17.1
- Platform: Redacted (linux)
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.1.3
- `fsspec` version: 2023.10.0 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6688/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6687/comments | https://api.github.com/repos/huggingface/datasets/issues/6687/events | https://github.com/huggingface/datasets/pull/6687 | 2,148,554,178 | PR_kwDODunzps5nnqBB | 6,687 | fsspec: support fsspec>=2023.12.0 glob changes | {
"avatar_url": "https://avatars.githubusercontent.com/u/651988?v=4",
"events_url": "https://api.github.com/users/pmrowla/events{/privacy}",
"followers_url": "https://api.github.com/users/pmrowla/followers",
"following_url": "https://api.github.com/users/pmrowla/following{/other_user}",
"gists_url": "https://api.github.com/users/pmrowla/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pmrowla",
"id": 651988,
"login": "pmrowla",
"node_id": "MDQ6VXNlcjY1MTk4OA==",
"organizations_url": "https://api.github.com/users/pmrowla/orgs",
"received_events_url": "https://api.github.com/users/pmrowla/received_events",
"repos_url": "https://api.github.com/users/pmrowla/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pmrowla/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pmrowla/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pmrowla",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2024-02-22T08:59:32 | 2024-03-04T12:59:42 | 2024-02-29T15:12:17 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6687.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6687",
"merged_at": "2024-02-29T15:12:17",
"patch_url": "https://github.com/huggingface/datasets/pull/6687.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6687"
} | - adds support for the `fs.glob` changes introduced in `fsspec==2023.12.0` and unpins the current upper bound
Should close #6644
Should close #6645
The `test_data_files` glob/pattern tests pass for me in:
- `fsspec==2023.10.0` (the pinned max version in datasets `main`)
- `fsspec==2023.12.0` (#6644)
- `fsspec==2024.2.0` (#6645) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6687/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6687/timeline | null | null | true | 174.2125 |
https://api.github.com/repos/huggingface/datasets/issues/6686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6686/comments | https://api.github.com/repos/huggingface/datasets/issues/6686/events | https://github.com/huggingface/datasets/issues/6686 | 2,147,795,103 | I_kwDODunzps6ABMCf | 6,686 | Question: Is there any way for uploading a large image dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/37367987?v=4",
"events_url": "https://api.github.com/users/zhjohnchan/events{/privacy}",
"followers_url": "https://api.github.com/users/zhjohnchan/followers",
"following_url": "https://api.github.com/users/zhjohnchan/following{/other_user}",
"gists_url": "https://api.github.com/users/zhjohnchan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zhjohnchan",
"id": 37367987,
"login": "zhjohnchan",
"node_id": "MDQ6VXNlcjM3MzY3OTg3",
"organizations_url": "https://api.github.com/users/zhjohnchan/orgs",
"received_events_url": "https://api.github.com/users/zhjohnchan/received_events",
"repos_url": "https://api.github.com/users/zhjohnchan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zhjohnchan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zhjohnchan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zhjohnchan",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2024-02-21T22:07:21 | 2024-05-02T03:44:59 | null | NONE | null | null | null | I am uploading an image dataset like this:
```python
from datasets import Image, Sequence, load_dataset

dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_size="1GB")
```
which takes a long time in the `Map` step. Do you think I can use multiprocessing to map all the image data into memory first? For the `map()` function, I can set `num_proc`, but for `push_to_hub` and `cast_column` I cannot find such an option.
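One possible workaround, sketched under the assumption that the cast can be expressed as a no-op `map` with an explicit target schema (the `num_proc` value is illustrative):
```python
from datasets import Features, Image, Sequence

# map() writes its output with the given features, so the image
# re-encoding happens across num_proc workers instead of serially.
features = dict(dataset["train"].features)
features["images"] = Sequence(Image())
dataset = dataset.map(
    lambda batch: batch,
    batched=True,
    features=Features(features),
    num_proc=8,  # illustrative worker count
)
```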
Thanks in advance!
Best, | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6686/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6685/comments | https://api.github.com/repos/huggingface/datasets/issues/6685/events | https://github.com/huggingface/datasets/pull/6685 | 2,145,570,006 | PR_kwDODunzps5ndZQa | 6,685 | Updated Quickstart Notebook link | {
"avatar_url": "https://avatars.githubusercontent.com/u/55932554?v=4",
"events_url": "https://api.github.com/users/Codeblockz/events{/privacy}",
"followers_url": "https://api.github.com/users/Codeblockz/followers",
"following_url": "https://api.github.com/users/Codeblockz/following{/other_user}",
"gists_url": "https://api.github.com/users/Codeblockz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Codeblockz",
"id": 55932554,
"login": "Codeblockz",
"node_id": "MDQ6VXNlcjU1OTMyNTU0",
"organizations_url": "https://api.github.com/users/Codeblockz/orgs",
"received_events_url": "https://api.github.com/users/Codeblockz/received_events",
"repos_url": "https://api.github.com/users/Codeblockz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Codeblockz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Codeblockz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Codeblockz",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-21T01:04:18 | 2024-03-12T21:31:04 | 2024-02-25T18:48:08 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6685",
"merged_at": "2024-02-25T18:48:08",
"patch_url": "https://github.com/huggingface/datasets/pull/6685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6685"
} | Fixed Quickstart Notebook Link in the [Overview notebook](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6685/timeline | null | null | true | 113.730556 |
https://api.github.com/repos/huggingface/datasets/issues/6684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6684/comments | https://api.github.com/repos/huggingface/datasets/issues/6684/events | https://github.com/huggingface/datasets/pull/6684 | 2,144,092,388 | PR_kwDODunzps5nYUIf | 6,684 | Improve error message for gated datasets on load | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2024-02-20T10:51:27 | 2024-02-20T15:40:52 | 2024-02-20T15:33:56 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6684",
"merged_at": "2024-02-20T15:33:56",
"patch_url": "https://github.com/huggingface/datasets/pull/6684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6684"
} | Internal Slack discussion: https://huggingface.slack.com/archives/C02V51Q3800/p1708424971135029 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6684/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6684/timeline | null | null | true | 4.708056 |
https://api.github.com/repos/huggingface/datasets/issues/6683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6683/comments | https://api.github.com/repos/huggingface/datasets/issues/6683/events | https://github.com/huggingface/datasets/pull/6683 | 2,142,751,955 | PR_kwDODunzps5nTxGu | 6,683 | Fix imagefolder dataset url | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-19T16:26:51 | 2024-02-19T17:24:25 | 2024-02-19T17:18:10 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6683.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6683",
"merged_at": "2024-02-19T17:18:10",
"patch_url": "https://github.com/huggingface/datasets/pull/6683.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6683"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6683/timeline | null | null | true | 0.855278 |
https://api.github.com/repos/huggingface/datasets/issues/6682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6682/comments | https://api.github.com/repos/huggingface/datasets/issues/6682/events | https://github.com/huggingface/datasets/pull/6682 | 2,142,000,800 | PR_kwDODunzps5nRME6 | 6,682 | Update GitHub Actions to Node 20 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-19T10:10:50 | 2024-02-28T07:02:40 | 2024-02-28T06:56:34 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6682.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6682",
"merged_at": "2024-02-28T06:56:34",
"patch_url": "https://github.com/huggingface/datasets/pull/6682.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6682"
} | Update GitHub Actions to Node 20.
Fix #6679. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6682/timeline | null | null | true | 212.762222 |
https://api.github.com/repos/huggingface/datasets/issues/6681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6681/comments | https://api.github.com/repos/huggingface/datasets/issues/6681/events | https://github.com/huggingface/datasets/pull/6681 | 2,141,985,239 | PR_kwDODunzps5nRItQ | 6,681 | Update release instructions | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | null | [] | null | 2 | 2024-02-19T10:03:08 | 2024-02-28T07:23:49 | 2024-02-28T07:17:22 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6681.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6681",
"merged_at": "2024-02-28T07:17:22",
"patch_url": "https://github.com/huggingface/datasets/pull/6681.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6681"
} | Update release instructions. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6681/timeline | null | null | true | 213.237222 |
https://api.github.com/repos/huggingface/datasets/issues/6680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6680/comments | https://api.github.com/repos/huggingface/datasets/issues/6680/events | https://github.com/huggingface/datasets/pull/6680 | 2,141,979,527 | PR_kwDODunzps5nRHcz | 6,680 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-19T10:00:31 | 2024-02-19T10:06:43 | 2024-02-19T10:00:40 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6680.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6680",
"merged_at": "2024-02-19T10:00:40",
"patch_url": "https://github.com/huggingface/datasets/pull/6680.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6680"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6680/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6680/timeline | null | null | true | 0.0025 |
https://api.github.com/repos/huggingface/datasets/issues/6679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6679/comments | https://api.github.com/repos/huggingface/datasets/issues/6679/events | https://github.com/huggingface/datasets/issues/6679 | 2,141,953,981 | I_kwDODunzps5_q5-9 | 6,679 | Node.js 16 GitHub Actions are deprecated | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 0 | 2024-02-19T09:47:37 | 2024-02-28T06:56:35 | 2024-02-28T06:56:35 | MEMBER | null | null | null | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecated. Please update the following actions to use Node.js 20: actions/checkout@v3, actions/setup-python@v4. For more information see: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/.
| {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6679/timeline | null | completed | false | 213.149444 |
https://api.github.com/repos/huggingface/datasets/issues/6678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6678/comments | https://api.github.com/repos/huggingface/datasets/issues/6678/events | https://github.com/huggingface/datasets/pull/6678 | 2,141,902,154 | PR_kwDODunzps5nQ2ZO | 6,678 | Release: 2.17.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-19T09:24:29 | 2024-02-19T10:03:00 | 2024-02-19T09:56:52 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6678.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6678",
"merged_at": "2024-02-19T09:56:52",
"patch_url": "https://github.com/huggingface/datasets/pull/6678.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6678"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6678/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6678/timeline | null | null | true | 0.539722 |
https://api.github.com/repos/huggingface/datasets/issues/6677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6677/comments | https://api.github.com/repos/huggingface/datasets/issues/6677/events | https://github.com/huggingface/datasets/pull/6677 | 2,141,244,167 | PR_kwDODunzps5nOmo_ | 6,677 | Pass through information about location of cache directory. | {
"avatar_url": "https://avatars.githubusercontent.com/u/94808782?v=4",
"events_url": "https://api.github.com/users/stridge-cruxml/events{/privacy}",
"followers_url": "https://api.github.com/users/stridge-cruxml/followers",
"following_url": "https://api.github.com/users/stridge-cruxml/following{/other_user}",
"gists_url": "https://api.github.com/users/stridge-cruxml/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stridge-cruxml",
"id": 94808782,
"login": "stridge-cruxml",
"node_id": "U_kgDOBaaqzg",
"organizations_url": "https://api.github.com/users/stridge-cruxml/orgs",
"received_events_url": "https://api.github.com/users/stridge-cruxml/received_events",
"repos_url": "https://api.github.com/users/stridge-cruxml/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stridge-cruxml/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stridge-cruxml/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stridge-cruxml",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2024-02-18T23:48:57 | 2024-02-28T18:57:39 | 2024-02-28T18:51:15 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6677.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6677",
"merged_at": "2024-02-28T18:51:15",
"patch_url": "https://github.com/huggingface/datasets/pull/6677.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6677"
} | If the cache directory is set, this information is not passed through. Pass the download config in as an arg too.
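For illustration, a minimal sketch of the kind of call this enables (the exact call sites this PR touches aren't listed here; `DownloadConfig` and the `download_config=` argument to `load_dataset` are existing `datasets` APIs, and the paths are made up):
```
from datasets import DownloadConfig, load_dataset

# a user-chosen cache directory threaded through explicitly (path is illustrative)
dl_config = DownloadConfig(cache_dir="/my/custom/cache")
ds = load_dataset("json", data_files="data.json", download_config=dl_config)
```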
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6677/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6677/timeline | null | null | true | 235.038333 |
https://api.github.com/repos/huggingface/datasets/issues/6676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6676/comments | https://api.github.com/repos/huggingface/datasets/issues/6676/events | https://github.com/huggingface/datasets/issues/6676 | 2,140,648,619 | I_kwDODunzps5_l7Sr | 6,676 | Can't Read List of JSON Files Properly | {
"avatar_url": "https://avatars.githubusercontent.com/u/20232088?v=4",
"events_url": "https://api.github.com/users/lordsoffallen/events{/privacy}",
"followers_url": "https://api.github.com/users/lordsoffallen/followers",
"following_url": "https://api.github.com/users/lordsoffallen/following{/other_user}",
"gists_url": "https://api.github.com/users/lordsoffallen/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lordsoffallen",
"id": 20232088,
"login": "lordsoffallen",
"node_id": "MDQ6VXNlcjIwMjMyMDg4",
"organizations_url": "https://api.github.com/users/lordsoffallen/orgs",
"received_events_url": "https://api.github.com/users/lordsoffallen/received_events",
"repos_url": "https://api.github.com/users/lordsoffallen/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lordsoffallen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lordsoffallen/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lordsoffallen",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2024-02-17T22:58:15 | 2024-03-02T20:47:22 | null | NONE | null | null | null | ### Describe the bug
Trying to read a bunch of JSON files into the Dataset class, but the default approach doesn't work. I don't get why it works when I read the files one by one but not when I pass them as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte
DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
This doesn't work
```
from datasets import Dataset
# dir contains 100 json files.
Dataset.from_json("/PUT SOME PATH HERE/*")
```
This works:
```
from datasets import Dataset, concatenate_datasets

# list_of_json_files: the same 100 JSON files, collected explicitly beforehand
ls_ds = []
for file in list_of_json_files:
    ls_ds.append(Dataset.from_json(file))
ds = concatenate_datasets(ls_ds)
```
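A variant that skips the loop also seems to work (a sketch, assuming `Dataset.from_json` accepts an explicit list of paths; restricting the glob to `*.json` may matter if the directory holds non-JSON files, which could explain the utf-8 error):
```
from glob import glob
from datasets import Dataset

# build the file list explicitly instead of passing a glob string through
json_files = sorted(glob("/PUT SOME PATH HERE/*.json"))
ds = Dataset.from_json(json_files)
```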
### Expected behavior
I expect this to read the JSON files properly; as it stands, the error message is not clear.
### Environment info
- `datasets` version: 2.17.0
- Platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- `huggingface_hub` version: 0.20.2
- PyArrow version: 15.0.0
- Pandas version: 2.2.0
- `fsspec` version: 2023.10.0
| null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6676/timeline | null | null | false | null |