url
stringlengths 61
61
| repository_url
stringclasses 1
value | labels_url
stringlengths 75
75
| comments_url
stringlengths 70
70
| events_url
stringlengths 68
68
| html_url
stringlengths 49
51
| id
int64 1.92B
2.68B
| node_id
stringlengths 18
19
| number
int64 6.27k
7.3k
| title
stringlengths 2
159
| user
dict | labels
listlengths 0
2
| state
stringclasses 2
values | locked
bool 1
class | assignee
dict | assignees
listlengths 0
1
| milestone
dict | comments
int64 0
24
| created_at
timestamp[s] | updated_at
timestamp[s] | closed_at
timestamp[s] | author_association
stringclasses 4
values | active_lock_reason
null | draft
bool 2
classes | pull_request
dict | body
stringlengths 3
47.9k
⌀ | closed_by
dict | reactions
dict | timeline_url
stringlengths 70
70
| performed_via_github_app
null | state_reason
stringclasses 3
values | is_pull_request
bool 2
classes | time_to_close
float64 0
7.99k
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6368/comments | https://api.github.com/repos/huggingface/datasets/issues/6368/events | https://github.com/huggingface/datasets/pull/6368 | 1,971,193,692 | PR_kwDODunzps5eRZwQ | 6,368 | Fix python formatting for complex types in `format_table` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-31T19:48:08 | 2023-11-02T14:42:28 | 2023-11-02T14:21:16 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6368",
"merged_at": "2023-11-02T14:21:16",
"patch_url": "https://github.com/huggingface/datasets/pull/6368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6368"
} | Fix #6366 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6368/timeline | null | null | true | 42.552222 |
https://api.github.com/repos/huggingface/datasets/issues/6367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6367/comments | https://api.github.com/repos/huggingface/datasets/issues/6367/events | https://github.com/huggingface/datasets/pull/6367 | 1,971,015,861 | PR_kwDODunzps5eQy1D | 6,367 | Fix time measuring snippet in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-31T17:57:17 | 2023-10-31T18:35:53 | 2023-10-31T18:24:02 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6367.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6367",
"merged_at": "2023-10-31T18:24:02",
"patch_url": "https://github.com/huggingface/datasets/pull/6367.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6367"
} | Fix https://discuss.huggingface.co/t/attributeerror-enter/60509 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6367/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6367/timeline | null | null | true | 0.445833 |
https://api.github.com/repos/huggingface/datasets/issues/6366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6366/comments | https://api.github.com/repos/huggingface/datasets/issues/6366/events | https://github.com/huggingface/datasets/issues/6366 | 1,970,213,490 | I_kwDODunzps51bxJy | 6,366 | with_format() function returns bytes instead of PIL images even when image column is not part of "columns" | {
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"events_url": "https://api.github.com/users/leot13/events{/privacy}",
"followers_url": "https://api.github.com/users/leot13/followers",
"following_url": "https://api.github.com/users/leot13/following{/other_user}",
"gists_url": "https://api.github.com/users/leot13/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leot13",
"id": 17809020,
"login": "leot13",
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"organizations_url": "https://api.github.com/users/leot13/orgs",
"received_events_url": "https://api.github.com/users/leot13/received_events",
"repos_url": "https://api.github.com/users/leot13/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leot13/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leot13/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leot13",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-31T11:10:48 | 2023-11-02T14:21:17 | 2023-11-02T14:21:17 | NONE | null | null | null | ### Describe the bug
When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJUQCf?usp=sharing
### Steps to reproduce the bug
1. Load the image dataset
2. apply with_format(columns=["text"])
3. Check the type of images in the "image" column before and after applying with_format
### Expected behavior
The type should stay the same, but it does not
### Environment info
datasets==2.14.6
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6366/timeline | null | completed | false | 51.174722 |
https://api.github.com/repos/huggingface/datasets/issues/6365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6365/comments | https://api.github.com/repos/huggingface/datasets/issues/6365/events | https://github.com/huggingface/datasets/issues/6365 | 1,970,140,392 | I_kwDODunzps51bfTo | 6,365 | Parquet size grows exponential for categorical data | {
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseganti",
"id": 82567957,
"login": "aseganti",
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"repos_url": "https://api.github.com/users/aseganti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseganti",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-31T10:29:02 | 2023-10-31T10:49:17 | 2023-10-31T10:49:17 | NONE | null | null | null | ### Describe the bug
It seems that when saving a data frame with a categorical column inside the size can grow exponentially.
This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories are not present in the original data.
### Steps to reproduce the bug
To reproduce the bug, it is enough to run this script:
```
import pandas as pd
import os
if __name__ == "__main__":
for n in [10, 1e2, 1e3, 1e4, 1e5]:
for n_col in [1, 10, 100, 1000, 10000]:
input = pd.DataFrame([{"{i}": f"{i}_cat" for col in range(n_col)} for i in range(int(n))])
input.iloc[0:100].to_parquet("a.parquet")
for col in input.columns:
input[col] = input[col].astype("category")
input.iloc[0:100].to_parquet("b.parquet")
a_size_mb = os.stat("a.parquet").st_size / (1024 * 1024)
b_size_mb = os.stat("b.parquet").st_size / (1024 * 1024)
print(f"{n} {n_col} {a_size_mb} {b_size_mb} {100*b_size_mb/a_size_mb:.2f}")
```
That produces this output:
<img width="464" alt="Screenshot 2023-10-31 at 11 25 25" src="https://github.com/huggingface/datasets/assets/82567957/2b8a9284-7f9e-4c10-a006-0a27236ebd15">
### Expected behavior
In my opinion either:
1. The two file should have (almost) the same size
2. There should be warning telling the user that such difference in size is possible
### Environment info
Python 3.8.18
pandas==2.0.3
numpy==1.24.4 | {
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"events_url": "https://api.github.com/users/aseganti/events{/privacy}",
"followers_url": "https://api.github.com/users/aseganti/followers",
"following_url": "https://api.github.com/users/aseganti/following{/other_user}",
"gists_url": "https://api.github.com/users/aseganti/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aseganti",
"id": 82567957,
"login": "aseganti",
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"organizations_url": "https://api.github.com/users/aseganti/orgs",
"received_events_url": "https://api.github.com/users/aseganti/received_events",
"repos_url": "https://api.github.com/users/aseganti/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aseganti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aseganti/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aseganti",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6365/timeline | null | not_planned | false | 0.3375 |
https://api.github.com/repos/huggingface/datasets/issues/6364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6364/comments | https://api.github.com/repos/huggingface/datasets/issues/6364/events | https://github.com/huggingface/datasets/issues/6364 | 1,969,136,106 | I_kwDODunzps51XqHq | 6,364 | ArrowNotImplementedError: Unsupported cast from string to list using function cast_list | {
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/divyakrishna-devisetty",
"id": 32887094,
"login": "divyakrishna-devisetty",
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-30T20:14:01 | 2023-10-31T19:21:23 | 2023-10-31T19:21:23 | NONE | null | null | null | Hi,
I am trying to load a local csv dataset(similar to explodinggradients_fiqa) using load_dataset. When I try to pass features, I am facing the mentioned issue.
CSV Data sample(golden_dataset.csv):
Question | Context | answer | groundtruth
"what is abc?" | "abc is this and that" | "abc is this " | "abc is this and that"
```
import csv
# built it based on https://huggingface.co/datasets/explodinggradients/fiqa/viewer/ragas_eval?row=0
mydict = [
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]},
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]},
{'question' : "what is abc?", 'contexts': ["abc is this and that"], 'answer': "abc is this " , 'groundtruth': ["abc is this and that"]}
]
fields = ['question', 'contexts', 'answer', 'ground_truths']
with open('golden_dataset.csv', 'w', newline='\n') as file:
writer = csv.DictWriter(file, fieldnames = fields)
writer.writeheader()
for row in mydict:
writer.writerow(row)
```
Retrieved dataset:
DatasetDict({
train: Dataset({
features: ['question', 'contexts', 'answer', 'ground_truths'],
num_rows: 1
})
})
Code to reproduce issue:
```
from datasets import load_dataset, Features, Sequence, Value
encode_features = Features(
{
"question": Value(dtype='string', id=0),
"contexts": Sequence(feature=Value(dtype='string', id=1)),
"answer": Value(dtype='string', id=2),
"ground_truths": Sequence(feature=Value(dtype='string',id=3)),
}
)
eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
```
Error trace:
```
---------------------------------------------------------------------------
ArrowNotImplementedError Traceback (most recent call last)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1925, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1924 _time = time.time()
-> 1925 for _, table in generator:
1926 if max_shard_size is not None and writer._num_bytes > max_shard_size:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:192, in Csv._generate_tables(self, files)
189 # Uncomment for debugging (will print the Arrow table size and elements)
190 # logger.warning(f"pa_table: {pa_table} num rows: {pa_table.num_rows}")
191 # logger.warning('\n'.join(str(pa_table.slice(i, 1).to_pydict()) for i in range(pa_table.num_rows)))
--> 192 yield (file_idx, batch_idx), self._cast_table(pa_table)
193 except ValueError as e:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/packaged_modules/csv/csv.py:167, in Csv._cast_table(self, pa_table)
165 if all(not require_storage_cast(feature) for feature in self.config.features.values()):
166 # cheaper cast
--> 167 pa_table = pa.Table.from_arrays([pa_table[field.name] for field in schema], schema=schema)
168 else:
169 # more expensive cast; allows str <-> int/float or str to Audio for example
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:3781, in pyarrow.lib.Table.from_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:1449, in pyarrow.lib._sanitize_arrays()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/array.pxi:354, in pyarrow.lib.asarray()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/table.pxi:551, in pyarrow.lib.ChunkedArray.cast()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/compute.py:400, in cast(arr, target_type, safe, options, memory_pool)
399 options = CastOptions.safe(target_type)
--> 400 return call_function("cast", [arr], options, memory_pool)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:572, in pyarrow._compute.call_function()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/_compute.pyx:367, in pyarrow._compute.Function.call()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status()
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status()
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Cell In[57], line 1
----> 1 eval_dataset = load_dataset('csv', data_files='/golden_dataset.csv', features = encode_features )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
2152 # Download and prepare data
-> 2153 builder_instance.download_and_prepare(
2154 download_config=download_config,
2155 download_mode=download_mode,
2156 verification_mode=verification_mode,
2157 try_from_hf_gcs=try_from_hf_gcs,
2158 num_proc=num_proc,
2159 storage_options=storage_options,
2160 )
2162 # Build dataset for splits
2163 keep_in_memory = (
2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
2165 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
952 if num_proc is not None:
953 prepare_split_kwargs["num_proc"] = num_proc
--> 954 self._download_and_prepare(
955 dl_manager=dl_manager,
956 verification_mode=verification_mode,
957 **prepare_split_kwargs,
958 **download_and_prepare_kwargs,
959 )
960 # Sync info
961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1049, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
1045 split_dict.add(split_generator.split_info)
1047 try:
1048 # Prepare split will record examples associated to the split
-> 1049 self._prepare_split(split_generator, **prepare_split_kwargs)
1050 except OSError as e:
1051 raise OSError(
1052 "Cannot find data file. "
1053 + (self.manual_download_instructions or "")
1054 + "\nOriginal error:\n"
1055 + str(e)
1056 ) from None
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1813, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1811 job_id = 0
1812 with pbar:
-> 1813 for job_id, done, content in self._prepare_split_single(
1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1815 ):
1816 if done:
1817 result = content
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/datasets/builder.py:1958, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1957 e = e.__context__
-> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
Environment Info:
datasets version: 2.14.5
Python version: 3.10.8
PyArrow version: 12.0.1
Pandas version: 2.0.3
I have also tried to load dataset first and then use cast_column, or save_to_disk and load_from_disk. | {
"avatar_url": "https://avatars.githubusercontent.com/u/32887094?v=4",
"events_url": "https://api.github.com/users/divyakrishna-devisetty/events{/privacy}",
"followers_url": "https://api.github.com/users/divyakrishna-devisetty/followers",
"following_url": "https://api.github.com/users/divyakrishna-devisetty/following{/other_user}",
"gists_url": "https://api.github.com/users/divyakrishna-devisetty/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/divyakrishna-devisetty",
"id": 32887094,
"login": "divyakrishna-devisetty",
"node_id": "MDQ6VXNlcjMyODg3MDk0",
"organizations_url": "https://api.github.com/users/divyakrishna-devisetty/orgs",
"received_events_url": "https://api.github.com/users/divyakrishna-devisetty/received_events",
"repos_url": "https://api.github.com/users/divyakrishna-devisetty/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/divyakrishna-devisetty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/divyakrishna-devisetty/subscriptions",
"type": "User",
"url": "https://api.github.com/users/divyakrishna-devisetty",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6364/timeline | null | completed | false | 23.122778 |
https://api.github.com/repos/huggingface/datasets/issues/6363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6363/comments | https://api.github.com/repos/huggingface/datasets/issues/6363/events | https://github.com/huggingface/datasets/issues/6363 | 1,968,891,277 | I_kwDODunzps51WuWN | 6,363 | dataset.transform() hangs indefinitely while finetuning the stable diffusion XL | {
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhosalems",
"id": 10846405,
"login": "bhosalems",
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhosalems",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2023-10-30T17:34:05 | 2023-11-22T00:29:21 | 2023-11-22T00:29:21 | NONE | null | null | null | ### Describe the bug
Multi-GPU fine-tuning the stable diffusion X by following https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/README_sdxl.md hangs indefinitely.
### Steps to reproduce the bug
accelerate launch train_text_to_image_sdxl.py --pretrained_model_name_or_path=$MODEL_NAME --pretrained_vae_model_name_or_path=$VAE_NAME --dataset_name=$DATASET_NAME --enable_xformers_memory_efficient_attention --resolution=512 --center_crop --random_flip --proportion_empty_prompts=0.2 --train_batch_size=1 --gradient_accumulation_steps=4 --gradient_checkpointing --max_train_steps=10000 --use_8bit_adam --learning_rate=1e-06 --lr_scheduler="constant" --lr_warmup_steps=0 --mixed_precision="fp16" --report_to="wandb" --validation_prompt="a cute Sundar Pichai creature" --validation_epochs 5 --checkpointing_steps=5000 --output_dir="sdxl-pokemon-model"
### Expected behavior
It should start the training as it does for the single GPU training. I opened the issue in diffusers **https://github.com/huggingface/diffusers/issues/5534 but it does seem to be an issue with the Pokemon dataset.
I added some debug prints
```
print("==========HERE3=============")
with accelerator.main_process_first():
print(accelerator.is_main_process)
print("===========Here3.1===========")
if args.max_train_samples is not None:
dataset["train"] = dataset["train"].shuffle(seed=args.seed).select(range(args.max_train_samples))
print("===========Here3.2===========")
# Set the training transforms
train_dataset = dataset["train"].with_transform(preprocess_train)
print("==========HERE4=============")
Corresponding Output
Detected kernel version 5.4.0, which is below the recommended minimum of 5.5.0; this can cause the process to hang. It is recommended to upgrade the kernel to the minimum version or higher.
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 1
Local process index: 1
Device: cuda:1
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 2
Local process index: 2
Device: cuda:2
Mixed precision type: fp16
10/25/2023 21:18:04 - INFO - main - Distributed environment: MULTI_GPU Backend: nccl
Num processes: 3
Process index: 0
Local process index: 0
Device: cuda:0
Mixed precision type: fp16
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
{‘variance_type’, ‘clip_sample_range’, ‘thresholding’, ‘dynamic_thresholding_ratio’} was not found in config. Values will be initialized to default values.
{‘attention_type’, ‘reverse_transformer_layers_per_block’, ‘dropout’} was not found in config. Values will be initialized to default values.
==========HERE1=============
==========HERE1=============
==========HERE1=============
==========HERE2=============
==========HERE2=============
==========HERE2=============
==========HERE3=============
True
===========Here3.1===========
===========Here3.2===========
==========HERE3=============
==========HERE3=========
```
### Environment info
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_kmp_llvm conda-forge
absl-py 2.0.0 pypi_0 pypi
accelerate 0.24.0 pypi_0 pypi
aiohttp 3.8.6 pypi_0 pypi
aiosignal 1.3.1 pypi_0 pypi
appdirs 1.4.4 pyh9f0ad1d_0 conda-forge
async-timeout 4.0.3 pypi_0 pypi
attrs 23.1.0 pypi_0 pypi
bitsandbytes 0.41.1 pypi_0 pypi
blas 1.0 mkl
blessings 1.7 py39h06a4308_1002
brotli-python 1.0.9 py39h6a678d5_7
bzip2 1.0.8 h7b6447c_0
ca-certificates 2023.08.22 h06a4308_0
cachetools 5.3.2 pypi_0 pypi
certifi 2023.7.22 py39h06a4308_0
cffi 1.15.1 py39h5eee18b_3
charset-normalizer 2.0.4 pyhd3eb1b0_0
click 8.1.7 unix_pyh707e725_0 conda-forge
cryptography 41.0.3 py39hdda0065_0
cuda-cudart 11.7.99 0 nvidia
cuda-cupti 11.7.101 0 nvidia
cuda-libraries 11.7.1 0 nvidia
cuda-nvrtc 11.7.99 0 nvidia
cuda-nvtx 11.7.91 0 nvidia
cuda-runtime 11.7.1 0 nvidia
datasets 2.14.6 pypi_0 pypi
diffusers 0.22.0.dev0 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
docker-pycreds 0.4.0 py_0 conda-forge
ffmpeg 4.3 hf484d3e_0 pytorch
filelock 3.12.4 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.4.0 pypi_0 pypi
fsspec 2023.10.0 pypi_0 pypi
ftfy 6.1.1 pypi_0 pypi
giflib 5.2.1 h5eee18b_3
gitdb 4.0.11 pyhd8ed1ab_0 conda-forge
gitpython 3.1.40 pyhd8ed1ab_0 conda-forge
gmp 6.2.1 h295c915_3
gnutls 3.6.15 he1e5248_0
google-auth 2.23.3 pypi_0 pypi
google-auth-oauthlib 1.1.0 pypi_0 pypi
gpustat 0.6.0 pyhd3eb1b0_1
grpcio 1.59.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
idna 3.4 py39h06a4308_0
importlib-metadata 6.8.0 pypi_0 pypi
intel-openmp 2023.1.0 hdb19cb5_46305
jinja2 3.1.2 pypi_0 pypi
jpeg 9e h5eee18b_1
lame 3.100 h7b6447c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
lerc 3.0 h295c915_0
libcublas 11.10.3.66 0 nvidia
libcufft 10.7.2.124 h4fbf590_0 nvidia
libcufile 1.8.0.34 0 nvidia
libcurand 10.3.4.52 0 nvidia
libcusolver 11.4.0.1 0 nvidia
libcusparse 11.7.4.91 0 nvidia
libdeflate 1.17 h5eee18b_1
libffi 3.4.4 h6a678d5_0
libgcc-ng 13.2.0 h807b86a_2 conda-forge
libgfortran-ng 13.2.0 h69a702a_2 conda-forge
libgfortran5 13.2.0 ha4646dd_2 conda-forge
libiconv 1.16 h7f8727e_2
libidn2 2.3.4 h5eee18b_0
libnpp 11.7.4.75 0 nvidia
libnvjpeg 11.8.0.2 0 nvidia
libpng 1.6.39 h5eee18b_0
libprotobuf 3.20.3 he621ea3_0
libstdcxx-ng 13.2.0 h7e041cc_2 conda-forge
libtasn1 4.19.0 h5eee18b_0
libtiff 4.5.1 h6a678d5_0
libunistring 0.9.10 h27cfd23_0
libwebp 1.3.2 h11a3e52_0
libwebp-base 1.3.2 h5eee18b_0
llvm-openmp 14.0.6 h9e868ea_0
lz4-c 1.9.4 h6a678d5_0
markdown 3.5 pypi_0 pypi
markupsafe 2.1.3 pypi_0 pypi
mkl 2023.1.0 h213fc3f_46343
mkl-service 2.4.0 py39h5eee18b_1
mkl_fft 1.3.8 py39h5eee18b_0
mkl_random 1.2.4 py39hdb19cb5_0
multidict 6.0.4 pypi_0 pypi
multiprocess 0.70.15 pypi_0 pypi
ncurses 6.4 h6a678d5_0
nettle 3.7.3 hbbd107a_1
numpy 1.26.0 py39h5f9d8c6_0
numpy-base 1.26.0 py39hb5e798b_0
nvidia-ml 7.352.0 pyhd3eb1b0_0
oauthlib 3.2.2 pypi_0 pypi
openh264 2.1.1 h4ff587b_0
openjpeg 2.4.0 h3ad879b_0
openssl 3.0.11 h7f8727e_2
packaging 23.2 pypi_0 pypi
pandas 2.1.1 pypi_0 pypi
pathtools 0.1.2 py_1 conda-forge
pillow 10.0.1 py39ha6cbd5a_0
pip 23.3 py39h06a4308_0
protobuf 4.23.4 pypi_0 pypi
psutil 5.9.6 pypi_0 pypi
pyarrow 13.0.0 pypi_0 pypi
pyasn1 0.5.0 pypi_0 pypi
pyasn1-modules 0.3.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pyopenssl 23.2.0 py39h06a4308_0
pysocks 1.7.1 py39h06a4308_0
python 3.9.18 h955ad1f_0
python-dateutil 2.8.2 pypi_0 pypi
python_abi 3.9 2_cp39 conda-forge
pytorch 1.13.1 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
pytorch-cuda 11.7 h778d358_5 pytorch
pytorch-mutex 1.0 cuda pytorch
pytz 2023.3.post1 pypi_0 pypi
pyyaml 6.0.1 pypi_0 pypi
readline 8.2 h5eee18b_0
regex 2023.10.3 pypi_0 pypi
requests 2.31.0 py39h06a4308_0
requests-oauthlib 1.3.1 pypi_0 pypi
rsa 4.9 pypi_0 pypi
safetensors 0.4.0 pypi_0 pypi
scipy 1.11.3 py39h5f9d8c6_0
sentry-sdk 1.32.0 pyhd8ed1ab_0 conda-forge
setproctitle 1.1.10 py39h3811e60_1004 conda-forge
setuptools 68.0.0 py39h06a4308_0
six 1.16.0 pyh6c4a22f_0 conda-forge
smmap 5.0.0 pyhd8ed1ab_0 conda-forge
sqlite 3.41.2 h5eee18b_0
tbb 2021.8.0 hdb19cb5_0
tensorboard 2.15.0 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tokenizers 0.14.1 pypi_0 pypi
torchaudio 0.13.1 py39_cu117 pytorch
torchtriton 2.1.0 py39 pytorch
torchvision 0.14.1 py39_cu117 pytorch
tqdm 4.66.1 pypi_0 pypi
transformers 4.34.1 pypi_0 pypi
typing_extensions 4.7.1 py39h06a4308_0
tzdata 2023.3 pypi_0 pypi
urllib3 1.26.18 py39h06a4308_0
wandb 0.15.12 pyhd8ed1ab_0 conda-forge
wcwidth 0.2.8 pypi_0 pypi
werkzeug 3.0.1 pypi_0 pypi
wheel 0.41.2 py39h06a4308_0
xformers 0.0.22.post7 py39_cu11.7.1_pyt1.13.1 xformers
xxhash 3.4.1 pypi_0 pypi
xz 5.4.2 h5eee18b_0
yaml 0.2.5 h7f98852_2 conda-forge
yarl 1.9.2 pypi_0 pypi
zipp 3.17.0 pypi_0 pypi
zlib 1.2.13 h5eee18b_0
zstd 1.5.5 hc292b87_0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10846405?v=4",
"events_url": "https://api.github.com/users/bhosalems/events{/privacy}",
"followers_url": "https://api.github.com/users/bhosalems/followers",
"following_url": "https://api.github.com/users/bhosalems/following{/other_user}",
"gists_url": "https://api.github.com/users/bhosalems/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bhosalems",
"id": 10846405,
"login": "bhosalems",
"node_id": "MDQ6VXNlcjEwODQ2NDA1",
"organizations_url": "https://api.github.com/users/bhosalems/orgs",
"received_events_url": "https://api.github.com/users/bhosalems/received_events",
"repos_url": "https://api.github.com/users/bhosalems/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bhosalems/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhosalems/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bhosalems",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6363/timeline | null | completed | false | 534.921111 |
https://api.github.com/repos/huggingface/datasets/issues/6362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6362/comments | https://api.github.com/repos/huggingface/datasets/issues/6362/events | https://github.com/huggingface/datasets/pull/6362 | 1,965,794,569 | PR_kwDODunzps5d_MxD | 6,362 | Simplify filesystem logic | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 13 | 2023-10-27T15:54:18 | 2023-11-15T14:08:29 | 2023-11-15T14:02:02 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6362",
"merged_at": "2023-11-15T14:02:02",
"patch_url": "https://github.com/huggingface/datasets/pull/6362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6362"
} | Simplifies the existing filesystem logic (e.g., to avoid unnecessary if-else as mentioned in https://github.com/huggingface/datasets/pull/6098#issue-1827655071) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6362/timeline | null | null | true | 454.128889 |
https://api.github.com/repos/huggingface/datasets/issues/6360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6360/comments | https://api.github.com/repos/huggingface/datasets/issues/6360/events | https://github.com/huggingface/datasets/issues/6360 | 1,965,672,950 | I_kwDODunzps51Kcn2 | 6,360 | Add support for `Sequence(Audio/Image)` feature in `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/21087104?v=4",
"events_url": "https://api.github.com/users/Laurent2916/events{/privacy}",
"followers_url": "https://api.github.com/users/Laurent2916/followers",
"following_url": "https://api.github.com/users/Laurent2916/following{/other_user}",
"gists_url": "https://api.github.com/users/Laurent2916/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Laurent2916",
"id": 21087104,
"login": "Laurent2916",
"node_id": "MDQ6VXNlcjIxMDg3MTA0",
"organizations_url": "https://api.github.com/users/Laurent2916/orgs",
"received_events_url": "https://api.github.com/users/Laurent2916/received_events",
"repos_url": "https://api.github.com/users/Laurent2916/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Laurent2916/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Laurent2916/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Laurent2916",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
}
] | null | 1 | 2023-10-27T14:39:57 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | CONTRIBUTOR | null | null | null | ### Feature request
Allow for `Sequence` of `Image` (or `Audio`) to be embedded inside the shards.
### Motivation
Currently, thanks to #3685, when `embed_external_files` is set to True (which is the default) in `push_to_hub`, features of type `Image` and `Audio` are embedded inside the arrow/parquet shards, instead of only storing paths to the files.
I've noticed that this behavior does not extend to `Sequence` of `Image`, when working with a [dataset of timelapse images](https://huggingface.co/datasets/1aurent/Human-Embryo-Timelapse).
### Your contribution
I'll submit a PR if I find a way to add this feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6360/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6360/timeline | null | completed | false | 2,452.739722 |
https://api.github.com/repos/huggingface/datasets/issues/6359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6359/comments | https://api.github.com/repos/huggingface/datasets/issues/6359/events | https://github.com/huggingface/datasets/issues/6359 | 1,965,378,583 | I_kwDODunzps51JUwX | 6,359 | Stuck in "Resolving data files..." | {
"avatar_url": "https://avatars.githubusercontent.com/u/20135317?v=4",
"events_url": "https://api.github.com/users/Luciennnnnnn/events{/privacy}",
"followers_url": "https://api.github.com/users/Luciennnnnnn/followers",
"following_url": "https://api.github.com/users/Luciennnnnnn/following{/other_user}",
"gists_url": "https://api.github.com/users/Luciennnnnnn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Luciennnnnnn",
"id": 20135317,
"login": "Luciennnnnnn",
"node_id": "MDQ6VXNlcjIwMTM1MzE3",
"organizations_url": "https://api.github.com/users/Luciennnnnnn/orgs",
"received_events_url": "https://api.github.com/users/Luciennnnnnn/received_events",
"repos_url": "https://api.github.com/users/Luciennnnnnn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Luciennnnnnn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Luciennnnnnn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Luciennnnnnn",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 4 | 2023-10-27T12:01:51 | 2024-01-24T15:02:06 | null | NONE | null | null | null | ### Describe the bug
I have an image dataset with 300k images, the size of image is 768 * 768.
When I run `dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')` in second time, it takes 50 minutes to finish "Resolving data files" part, what's going on in this part?
From my understand, after Arrow files been created in the first run, the second run should not take time longer than one or two minutes.
### Steps to reproduce the bug
# Run following code two times
dataset = load_dataset("imagefolder", data_dir="/path/to/img_dir", split='train')
### Expected behavior
Fast dataset building
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6359/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6358/comments | https://api.github.com/repos/huggingface/datasets/issues/6358/events | https://github.com/huggingface/datasets/issues/6358 | 1,965,014,595 | I_kwDODunzps51H75D | 6,358 | Mounting datasets cache fails due to absolute paths. | {
"avatar_url": "https://avatars.githubusercontent.com/u/72921588?v=4",
"events_url": "https://api.github.com/users/charliebudd/events{/privacy}",
"followers_url": "https://api.github.com/users/charliebudd/followers",
"following_url": "https://api.github.com/users/charliebudd/following{/other_user}",
"gists_url": "https://api.github.com/users/charliebudd/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/charliebudd",
"id": 72921588,
"login": "charliebudd",
"node_id": "MDQ6VXNlcjcyOTIxNTg4",
"organizations_url": "https://api.github.com/users/charliebudd/orgs",
"received_events_url": "https://api.github.com/users/charliebudd/received_events",
"repos_url": "https://api.github.com/users/charliebudd/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/charliebudd/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/charliebudd/subscriptions",
"type": "User",
"url": "https://api.github.com/users/charliebudd",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2023-10-27T08:20:27 | 2024-04-10T08:50:06 | 2023-11-28T14:47:12 | NONE | null | null | null | ### Describe the bug
Creating a datasets cache and mounting this into, for example, a docker container, renders the data unreadable due to absolute paths written into the cache.
### Steps to reproduce the bug
1. Create a datasets cache by downloading some data
2. Mount the dataset folder into a docker container or remote system.
3. (Edit) Set `HF_HOME` or `HF_DATASET_CACHE` to point to the mounted cache.
4. Attempt to access the data from within the docker container.
5. An error is thrown saying no file exists at \<absolute path to original cache location\>
### Expected behavior
The data is loaded without error
### Environment info
- `datasets` version: 2.14.4
- Platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6358/timeline | null | completed | false | 774.445833 |
https://api.github.com/repos/huggingface/datasets/issues/6357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6357/comments | https://api.github.com/repos/huggingface/datasets/issues/6357/events | https://github.com/huggingface/datasets/issues/6357 | 1,964,653,995 | I_kwDODunzps51Gj2r | 6,357 | Allow passing a multiprocessing context to functions that support `num_proc` | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2023-10-27T02:31:16 | 2023-10-27T02:31:16 | null | CONTRIBUTOR | null | null | null | ### Feature request
Allow specifying [a multiprocessing context](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods) to functions that support `num_proc` or use multiprocessing pools. For example, the following could be done:
```python
dataset = dataset.map(_func, num_proc=2, mp_context=multiprocess.get_context("spawn"))
```
Or at least the multiprocessing start method ("fork", "spawn", "fork_server" or `None`):
```python
dataset = dataset.map(_func, num_proc=2, mp_start_method="spawn")
```
Another option could be passing the `pool` as an argument.
### Motivation
By default, `multiprocess` (the `multiprocessing`-fork library that this repo uses) uses the "fork" start method for multiprocessing pools (for the default context). It could be changed by using `set_start_method`. However, this conditions the multiprocessing start method from all processing in a Python program that uses the default context, because [you can't call that function more than once](https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods:~:text=set_start_method()%20should%20not%20be%20used%20more%20than%20once%20in%20the%20program.). My proposal is to allow using a different multiprocessing context, not to condition the whole Python program.
One reason to change the start method is that "fork" (the default) makes child processes likely deadlock if thread pools were created before (and also this is not supported by POSIX). For example, this happens when using PyTorch because OpenMP threads are used for CPU intra-op parallelism, which is enabled by default (e.g., for context see [`torch.set_num_threads`](https://pytorch.org/docs/stable/generated/torch.set_num_threads.html)). This can also be fixed by setting `torch.set_num_threads(1)` (or similarly by other methods) but this conditions the whole Python program as it can only be set once to guarantee its behavior (similarly to). There are noticeable performance differences when setting this number to 1 even when using GPU(s). Using, e.g., a "spawn" start method would solve this issue.
For more context, see:
* https://discuss.huggingface.co/t/dataset-map-stuck-with-torch-set-num-threads-set-to-2-or-larger/37984
* https://discuss.huggingface.co/t/using-num-proc-1-in-dataset-map-hangs/44310
### Your contribution
I'd be happy to review a PR that makes such a change. And if you really don't have the bandwidth for it, I'd consider creating one. | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6357/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6356/comments | https://api.github.com/repos/huggingface/datasets/issues/6356/events | https://github.com/huggingface/datasets/pull/6356 | 1,964,015,802 | PR_kwDODunzps5d5Jri | 6,356 | Add `fsspec` version to the `datasets-cli env` command output | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-26T17:19:25 | 2023-10-26T18:42:56 | 2023-10-26T18:32:21 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6356.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6356",
"merged_at": "2023-10-26T18:32:21",
"patch_url": "https://github.com/huggingface/datasets/pull/6356.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6356"
} | ... to make debugging issues easier, as `fsspec`'s releases often introduce breaking changes. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6356/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6356/timeline | null | null | true | 1.215556 |
https://api.github.com/repos/huggingface/datasets/issues/6355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6355/comments | https://api.github.com/repos/huggingface/datasets/issues/6355/events | https://github.com/huggingface/datasets/pull/6355 | 1,963,979,896 | PR_kwDODunzps5d5B2B | 6,355 | More hub centric docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-26T16:54:46 | 2024-01-11T06:34:16 | 2023-10-30T17:32:57 | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/6355.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6355",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6355.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6355"
} | Let's have more hub-centric documentation in the datasets docs
Tutorials
- Add “Configure the dataset viewer” page
- Change order:
- Overview
- and more focused on the Hub rather than the library
- Then all the hub related things
- and mention how to read/write with other tools like pandas
- Then all the datasets lib related things in a subsection
Also:
- Rename “know your dataset” page to “Explore your dataset”
- Remove “Evaluate Predictions” page since it's 'evaluate' stuff (or move to legacy section ?)
TODO:
- [ ] write the “Configure the dataset viewer” page | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6355/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6355/timeline | null | null | true | 96.636389 |
https://api.github.com/repos/huggingface/datasets/issues/6354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6354/comments | https://api.github.com/repos/huggingface/datasets/issues/6354/events | https://github.com/huggingface/datasets/issues/6354 | 1,963,483,324 | I_kwDODunzps51CGC8 | 6,354 | `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` | {
"avatar_url": "https://avatars.githubusercontent.com/u/50199774?v=4",
"events_url": "https://api.github.com/users/NazyS/events{/privacy}",
"followers_url": "https://api.github.com/users/NazyS/followers",
"following_url": "https://api.github.com/users/NazyS/following{/other_user}",
"gists_url": "https://api.github.com/users/NazyS/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NazyS",
"id": 50199774,
"login": "NazyS",
"node_id": "MDQ6VXNlcjUwMTk5Nzc0",
"organizations_url": "https://api.github.com/users/NazyS/orgs",
"received_events_url": "https://api.github.com/users/NazyS/received_events",
"repos_url": "https://api.github.com/users/NazyS/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NazyS/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NazyS/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NazyS",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2023-10-26T12:43:36 | 2023-11-14T18:46:03 | null | NONE | null | null | null | ### Describe the bug
Looks like `IterableDataset.from_spark` does not support multiple workers in pytorch `Dataloader` if I'm not missing anything.
Also, returns not consistent error messages, which probably depend on the nondeterministic order of worker executions
Some exampes I've encountered:
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/instrumentation_utils.py", line 54, in wrapper
logger.log_failure(
File "/databricks/spark/python/pyspark/databricks/usage_logger.py", line 70, in log_failure
self.logger.recordFunctionCallFailureEvent(
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'c'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 98, in _invoke_function
return Column(jf(*args))
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
File "/databricks/spark/python/pyspark/errors/exceptions/captured.py", line 188, in deco
return f(*a, **kw)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/protocol.py", line 342, in get_return_value
return OUTPUT_CONVERTER[type](answer[2:], gateway_client)
KeyError: 'm'
```
```
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 79, in __iter__
yield from self.generate_examples_fn()
File "/local_disk0/.ephemeral_nfs/envs/pythonEnv-68c05436-3512-41c4-88ca-5630012b70d1/lib/python3.10/site-packages/datasets/packaged_modules/spark/spark.py", line 49, in generate_fn
df_with_partition_id = df.select("*", pyspark.sql.functions.spark_partition_id().alias("part_id"))
File "/databricks/spark/python/pyspark/sql/utils.py", line 162, in wrapped
return f(*args, **kwargs)
File "/databricks/spark/python/pyspark/sql/functions.py", line 4893, in spark_partition_id
return _invoke_function("spark_partition_id")
File "/databricks/spark/python/pyspark/sql/functions.py", line 97, in _invoke_function
jf = _get_jvm_function(name, SparkContext._active_spark_context)
File "/databricks/spark/python/pyspark/sql/functions.py", line 88, in _get_jvm_function
return getattr(sc._jvm.functions, name)
File "/databricks/spark/python/lib/py4j-0.10.9.7-src.zip/py4j/java_gateway.py", line 1725, in __getattr__
raise Py4JError(message)
py4j.protocol.Py4JError: functions does not exist in the JVM
```
### Steps to reproduce the bug
```python
import pandas as pd
import numpy as np
batch_size = 16
pdf = pd.DataFrame({
key: np.random.rand(16*100) for key in ['feature', 'target']
})
test_df = spark.createDataFrame(pdf)
from datasets import IterableDataset
from torch.utils.data import DataLoader
ids = IterableDataset.from_spark(test_df)
for batch in DataLoader(ids, batch_size=16, num_workers=4):
for k, b in batch.items():
print(k, b.shape, sep='\t')
print('\n')
```
### Expected behavior
For `num_workers` equal to 0 or 1 works fine as expected:
```
feature torch.Size([16])
target torch.Size([16])
feature torch.Size([16])
target torch.Size([16])
....
```
Expected to support workers >1.
### Environment info
Databricks 13.3 LTS ML runtime - Spark 3.4.1
pyspark==3.4.1
py4j==0.10.9.7
datasets==2.13.1 and also tested with datasets==2.14.6 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6354/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6354/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6353/comments | https://api.github.com/repos/huggingface/datasets/issues/6353/events | https://github.com/huggingface/datasets/issues/6353 | 1,962,646,450 | I_kwDODunzps50-5uy | 6,353 | load_dataset save_to_disk load_from_disk error | {
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brisker",
"id": 13804492,
"login": "brisker",
"node_id": "MDQ6VXNlcjEzODA0NDky",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"repos_url": "https://api.github.com/users/brisker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brisker",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2023-10-26T03:47:06 | 2024-04-03T05:31:01 | 2023-10-26T10:18:04 | NONE | null | null | null | ### Describe the bug
datasets version: 2.10.1
I `load_dataset `and `save_to_disk` sucessfully on windows10( **and I `load_from_disk(/LLM/data/wiki)` succcesfully on windows10**), and I copy the dataset `/LLM/data/wiki`
into a ubuntu system, but when I `load_from_disk(/LLM/data/wiki)` on ubuntu, something weird happens:
```
load_from_disk('/LLM/data/wiki')
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1874, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1309, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(
File "/usr/local/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1543, in load_from_disk
fs_token_paths = fsspec.get_fs_token_paths(dataset_path, storage_options=storage_options)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 610, in get_fs_token_paths
chain = _un_chain(urlpath0, storage_options or {})
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/core.py", line 325, in _un_chain
cls = get_filesystem_class(protocol)
File "/usr/local/miniconda3/lib/python3.8/site-packages/fsspec/registry.py", line 232, in get_filesystem_class
raise ValueError(f"Protocol not known: {protocol}")
ValueError: Protocol not known: /LLM/data/wiki
```
It seems that something went wrong on the arrow file?
How can I solve this , since currently I can not save_to_disk on ubuntu system
### Steps to reproduce the bug
datasets version: 2.10.1
### Expected behavior
datasets version: 2.10.1
### Environment info
datasets version: 2.10.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/13804492?v=4",
"events_url": "https://api.github.com/users/brisker/events{/privacy}",
"followers_url": "https://api.github.com/users/brisker/followers",
"following_url": "https://api.github.com/users/brisker/following{/other_user}",
"gists_url": "https://api.github.com/users/brisker/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/brisker",
"id": 13804492,
"login": "brisker",
"node_id": "MDQ6VXNlcjEzODA0NDky",
"organizations_url": "https://api.github.com/users/brisker/orgs",
"received_events_url": "https://api.github.com/users/brisker/received_events",
"repos_url": "https://api.github.com/users/brisker/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/brisker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brisker/subscriptions",
"type": "User",
"url": "https://api.github.com/users/brisker",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6353/timeline | null | completed | false | 6.516111 |
https://api.github.com/repos/huggingface/datasets/issues/6352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6352/comments | https://api.github.com/repos/huggingface/datasets/issues/6352/events | https://github.com/huggingface/datasets/issues/6352 | 1,962,296,057 | I_kwDODunzps509kL5 | 6,352 | Error loading wikitext data raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.") | {
"avatar_url": "https://avatars.githubusercontent.com/u/68569076?v=4",
"events_url": "https://api.github.com/users/Ahmed-Roushdy/events{/privacy}",
"followers_url": "https://api.github.com/users/Ahmed-Roushdy/followers",
"following_url": "https://api.github.com/users/Ahmed-Roushdy/following{/other_user}",
"gists_url": "https://api.github.com/users/Ahmed-Roushdy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ahmed-Roushdy",
"id": 68569076,
"login": "Ahmed-Roushdy",
"node_id": "MDQ6VXNlcjY4NTY5MDc2",
"organizations_url": "https://api.github.com/users/Ahmed-Roushdy/orgs",
"received_events_url": "https://api.github.com/users/Ahmed-Roushdy/received_events",
"repos_url": "https://api.github.com/users/Ahmed-Roushdy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ahmed-Roushdy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ahmed-Roushdy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ahmed-Roushdy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 13 | 2023-10-25T21:55:31 | 2024-03-19T16:46:22 | 2023-11-07T07:26:54 | NONE | null | null | null | I was trying to load the wiki dataset, but i got this error
traindata = load_dataset('wikitext', 'wikitext-2-raw-v1', split='train')
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/load.py", line 1804, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/aelkordy/.conda/envs/prune_llm/lib/python3.9/site-packages/datasets/builder.py", line 1108, in as_dataset
raise NotImplementedError(f"Loading a dataset cached in a {type(self._fs).__name__} is not supported.")
NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6352/timeline | null | completed | false | 297.523056 |
https://api.github.com/repos/huggingface/datasets/issues/6351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6351/comments | https://api.github.com/repos/huggingface/datasets/issues/6351/events | https://github.com/huggingface/datasets/pull/6351 | 1,961,982,988 | PR_kwDODunzps5dyMvh | 6,351 | Fix use_dataset.mdx | {
"avatar_url": "https://avatars.githubusercontent.com/u/17672548?v=4",
"events_url": "https://api.github.com/users/angel-luis/events{/privacy}",
"followers_url": "https://api.github.com/users/angel-luis/followers",
"following_url": "https://api.github.com/users/angel-luis/following{/other_user}",
"gists_url": "https://api.github.com/users/angel-luis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/angel-luis",
"id": 17672548,
"login": "angel-luis",
"node_id": "MDQ6VXNlcjE3NjcyNTQ4",
"organizations_url": "https://api.github.com/users/angel-luis/orgs",
"received_events_url": "https://api.github.com/users/angel-luis/received_events",
"repos_url": "https://api.github.com/users/angel-luis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/angel-luis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/angel-luis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/angel-luis",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-25T18:21:08 | 2023-10-26T17:19:49 | 2023-10-26T17:10:27 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6351.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6351",
"merged_at": "2023-10-26T17:10:27",
"patch_url": "https://github.com/huggingface/datasets/pull/6351.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6351"
} | The current example isn't working because it can't find `labels` inside the Dataset object. So I've added an extra step to the process. Tested and working in Colab. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6351/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6351/timeline | null | null | true | 22.821944 |
https://api.github.com/repos/huggingface/datasets/issues/6350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6350/comments | https://api.github.com/repos/huggingface/datasets/issues/6350/events | https://github.com/huggingface/datasets/issues/6350 | 1,961,869,203 | I_kwDODunzps5077-T | 6,350 | Different objects are returned from calls that should be returning the same kind of object. | {
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phalexo",
"id": 4603365,
"login": "phalexo",
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"repos_url": "https://api.github.com/users/phalexo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phalexo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 2 | 2023-10-25T17:08:39 | 2023-10-26T21:03:06 | null | NONE | null | null | null | ### Describe the bug
1. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir, split='train[:1%]')
2. dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=training_args.cache_dir)
The only difference I would expect these calls to have is the size of the dataset.
But, while 2. returns a dictionary with "train" key in it, 1. returns a dataset WITHOUT any initial "train" keyword.
Both calls are to be used within exactly the same context. They should return identically structured datasets of different size.
### Steps to reproduce the bug
See above.
### Expected behavior
Expect both calls to return the same structured Dataset structure but with different number of elements, i.e. call 1. should have 1% of the data of the call 2.0
### Environment info
Ubuntu 20.04
gcc 9.x.x.
It is really irrelevant. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6350/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6349/comments | https://api.github.com/repos/huggingface/datasets/issues/6349/events | https://github.com/huggingface/datasets/issues/6349 | 1,961,435,673 | I_kwDODunzps506SIZ | 6,349 | Can't load ds = load_dataset("imdb") | {
"avatar_url": "https://avatars.githubusercontent.com/u/86415736?v=4",
"events_url": "https://api.github.com/users/vivianc2/events{/privacy}",
"followers_url": "https://api.github.com/users/vivianc2/followers",
"following_url": "https://api.github.com/users/vivianc2/following{/other_user}",
"gists_url": "https://api.github.com/users/vivianc2/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vivianc2",
"id": 86415736,
"login": "vivianc2",
"node_id": "MDQ6VXNlcjg2NDE1NzM2",
"organizations_url": "https://api.github.com/users/vivianc2/orgs",
"received_events_url": "https://api.github.com/users/vivianc2/received_events",
"repos_url": "https://api.github.com/users/vivianc2/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vivianc2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivianc2/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vivianc2",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-25T13:29:51 | 2024-03-20T15:09:53 | 2023-10-31T19:59:35 | NONE | null | null | null | ### Describe the bug
I did `from datasets import load_dataset, load_metric` and then `ds = load_dataset("imdb")` and it gave me the error:
ExpectedMoreDownloadedFiles: {'http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz'}
I tried doing `ds = load_dataset("imdb",download_mode="force_redownload")` as well as reinstalling dataset. I still face this problem.
### Steps to reproduce the bug
1. from datasets import load_dataset, load_metric
2. ds = load_dataset("imdb")
### Expected behavior
It should load and give me this when I run `ds`
DatasetDict({
train: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
test: Dataset({
features: ['text', 'label'],
num_rows: 25000
})
unsupervised: Dataset({
features: ['text', 'label'],
num_rows: 50000
})
})
### Environment info
- `datasets` version: 2.14.6
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.16.2
- PyArrow version: 13.0.0
- Pandas version: 2.0.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6349/timeline | null | completed | false | 150.495556 |
https://api.github.com/repos/huggingface/datasets/issues/6348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6348/comments | https://api.github.com/repos/huggingface/datasets/issues/6348/events | https://github.com/huggingface/datasets/issues/6348 | 1,961,268,504 | I_kwDODunzps505pUY | 6,348 | Parquet stream-conversion fails to embed images/audio files from gated repos | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | 0 | 2023-10-25T12:12:44 | 2023-10-25T12:13:07 | null | COLLABORATOR | null | null | null | it seems to be an issue with datasets not passing the token to embed_table_storage when generating a dataset
See https://github.com/huggingface/datasets-server/issues/2010 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6348/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6348/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6347/comments | https://api.github.com/repos/huggingface/datasets/issues/6347/events | https://github.com/huggingface/datasets/issues/6347 | 1,959,004,835 | I_kwDODunzps50xAqj | 6,347 | Incorrect example code in 'Create a dataset' docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwood-97",
"id": 72076688,
"login": "rwood-97",
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwood-97",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-24T11:01:21 | 2023-10-25T13:05:21 | 2023-10-25T13:05:21 | NONE | null | null | null | ### Describe the bug
On [this](https://huggingface.co/docs/datasets/create_dataset) page, the example code for loading in images and audio is incorrect.
Currently, examples are:
``` python
from datasets import ImageFolder
dataset = load_dataset("imagefolder", data_dir="/path/to/pokemon")
```
and
``` python
from datasets import AudioFolder
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
I'm pretty sure the imports are wrong and should be:
``` python
from datasets import load_dataset
dataset = load_dataset("audiofolder", data_dir="/path/to/folder")
```
I am happy to update this if this is right but just wanted to check before making any changes.
### Steps to reproduce the bug
Go to https://huggingface.co/docs/datasets/create_dataset
### Expected behavior
N/A
### Environment info
N/A | {
"avatar_url": "https://avatars.githubusercontent.com/u/72076688?v=4",
"events_url": "https://api.github.com/users/rwood-97/events{/privacy}",
"followers_url": "https://api.github.com/users/rwood-97/followers",
"following_url": "https://api.github.com/users/rwood-97/following{/other_user}",
"gists_url": "https://api.github.com/users/rwood-97/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rwood-97",
"id": 72076688,
"login": "rwood-97",
"node_id": "MDQ6VXNlcjcyMDc2Njg4",
"organizations_url": "https://api.github.com/users/rwood-97/orgs",
"received_events_url": "https://api.github.com/users/rwood-97/received_events",
"repos_url": "https://api.github.com/users/rwood-97/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rwood-97/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rwood-97/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rwood-97",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6347/timeline | null | completed | false | 26.066667 |
https://api.github.com/repos/huggingface/datasets/issues/6346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6346/comments | https://api.github.com/repos/huggingface/datasets/issues/6346/events | https://github.com/huggingface/datasets/pull/6346 | 1,958,777,076 | PR_kwDODunzps5dnZM_ | 6,346 | Fix UnboundLocalError if preprocessing returns an empty list | {
"avatar_url": "https://avatars.githubusercontent.com/u/40916592?v=4",
"events_url": "https://api.github.com/users/cwallenwein/events{/privacy}",
"followers_url": "https://api.github.com/users/cwallenwein/followers",
"following_url": "https://api.github.com/users/cwallenwein/following{/other_user}",
"gists_url": "https://api.github.com/users/cwallenwein/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cwallenwein",
"id": 40916592,
"login": "cwallenwein",
"node_id": "MDQ6VXNlcjQwOTE2NTky",
"organizations_url": "https://api.github.com/users/cwallenwein/orgs",
"received_events_url": "https://api.github.com/users/cwallenwein/received_events",
"repos_url": "https://api.github.com/users/cwallenwein/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cwallenwein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cwallenwein/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cwallenwein",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-24T08:38:43 | 2023-10-25T17:39:17 | 2023-10-25T16:36:38 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6346.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6346",
"merged_at": "2023-10-25T16:36:38",
"patch_url": "https://github.com/huggingface/datasets/pull/6346.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6346"
} | If this tokenization function is used with IterableDatasets and no sample is as big as the context length, `input_batch` will be an empty list.
```
def tokenize(batch, tokenizer, context_length):
outputs = tokenizer(
batch["text"],
truncation=True,
max_length=context_length,
return_overflowing_tokens=True,
return_length=True
)
input_batch = []
for length, input_ids in zip(outputs["length"], outputs["input_ids"]):
if length == context_length:
input_batch.append(input_ids)
return {"input_ids": input_batch}
dataset.map(tokenize, batched=True, batch_size=batch_size, fn_kwargs={"context_length": context_length, "tokenizer": tokenizer}, remove_columns=dataset.column_names)
```
This will throw the following error: UnboundLocalError: local variable 'batch_idx' referenced before assignment, because the for loop was not executed a single time
```
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
yield new_key, example
current_idx += batch_idx + 1
```
Some of the possible solutions
```
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
yield new_key, example
try:
current_idx += batch_idx + 1
except:
current_idx += 1
```
or
```
batch_idx = 0
for batch_idx, example in enumerate(_batch_to_examples(transformed_batch)):
yield new_key, example
current_idx += batch_idx + 1
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6346/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6346/timeline | null | null | true | 31.965278 |
https://api.github.com/repos/huggingface/datasets/issues/6345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6345/comments | https://api.github.com/repos/huggingface/datasets/issues/6345/events | https://github.com/huggingface/datasets/issues/6345 | 1,957,707,870 | I_kwDODunzps50sEBe | 6,345 | support squad structure datasets using a YAML parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/138524319?v=4",
"events_url": "https://api.github.com/users/MajdTannous1/events{/privacy}",
"followers_url": "https://api.github.com/users/MajdTannous1/followers",
"following_url": "https://api.github.com/users/MajdTannous1/following{/other_user}",
"gists_url": "https://api.github.com/users/MajdTannous1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/MajdTannous1",
"id": 138524319,
"login": "MajdTannous1",
"node_id": "U_kgDOCEG2nw",
"organizations_url": "https://api.github.com/users/MajdTannous1/orgs",
"received_events_url": "https://api.github.com/users/MajdTannous1/received_events",
"repos_url": "https://api.github.com/users/MajdTannous1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/MajdTannous1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MajdTannous1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/MajdTannous1",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 0 | 2023-10-23T17:55:37 | 2023-10-23T17:55:37 | null | NONE | null | null | null | ### Feature request
Since the squad structure is widely used, I think it could be beneficial to support it using a YAML parameter.
could you implement automatic data loading of squad-like data using squad JSON format, to read it from JSON files and view it in the correct squad structure.
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns:id,title,context,question,answers
### Motivation
Dataset repo requires arbitrary Python code execution
### Your contribution
The dataset structure should be like this:
https://huggingface.co/datasets/squad
Columns:id,title,context,question,answers
train and dev sets in squad structure JSON files | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6345/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6344/comments | https://api.github.com/repos/huggingface/datasets/issues/6344/events | https://github.com/huggingface/datasets/pull/6344 | 1,957,412,169 | PR_kwDODunzps5diyd5 | 6,344 | set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T15:13:28 | 2023-10-23T15:24:31 | 2023-10-23T15:13:38 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6344",
"merged_at": "2023-10-23T15:13:38",
"patch_url": "https://github.com/huggingface/datasets/pull/6344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6344"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6344/timeline | null | null | true | 0.002778 |
https://api.github.com/repos/huggingface/datasets/issues/6343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6343/comments | https://api.github.com/repos/huggingface/datasets/issues/6343/events | https://github.com/huggingface/datasets/pull/6343 | 1,957,370,711 | PR_kwDODunzps5dipeb | 6,343 | Remove unused argument in `_get_data_files_patterns` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T14:54:18 | 2023-11-16T09:09:42 | 2023-11-16T09:03:39 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6343.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6343",
"merged_at": "2023-11-16T09:03:39",
"patch_url": "https://github.com/huggingface/datasets/pull/6343.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6343"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6343/timeline | null | null | true | 570.155833 |
https://api.github.com/repos/huggingface/datasets/issues/6342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6342/comments | https://api.github.com/repos/huggingface/datasets/issues/6342/events | https://github.com/huggingface/datasets/pull/6342 | 1,957,344,445 | PR_kwDODunzps5dijxt | 6,342 | Release: 2.14.6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2023-10-23T14:43:26 | 2023-10-23T15:21:54 | 2023-10-23T15:07:25 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6342.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6342",
"merged_at": "2023-10-23T15:07:25",
"patch_url": "https://github.com/huggingface/datasets/pull/6342.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6342"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6342/timeline | null | null | true | 0.399722 |
https://api.github.com/repos/huggingface/datasets/issues/6340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6340/comments | https://api.github.com/repos/huggingface/datasets/issues/6340/events | https://github.com/huggingface/datasets/pull/6340 | 1,956,917,893 | PR_kwDODunzps5dhGpW | 6,340 | Release 2.14.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-23T11:10:22 | 2023-10-23T14:20:46 | 2023-10-23T11:12:40 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6340",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6340"
} | (wrong release number - I was continuing the 2.14 branch but 2.14.5 was released from `main`) | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6340/timeline | null | null | true | 0.038333 |
https://api.github.com/repos/huggingface/datasets/issues/6339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6339/comments | https://api.github.com/repos/huggingface/datasets/issues/6339/events | https://github.com/huggingface/datasets/pull/6339 | 1,956,912,627 | PR_kwDODunzps5dhFfg | 6,339 | minor release step improvement | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T11:07:04 | 2023-11-07T10:38:54 | 2023-11-07T10:32:41 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6339",
"merged_at": "2023-11-07T10:32:41",
"patch_url": "https://github.com/huggingface/datasets/pull/6339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6339"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6339/timeline | null | null | true | 359.426944 |
https://api.github.com/repos/huggingface/datasets/issues/6338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6338/comments | https://api.github.com/repos/huggingface/datasets/issues/6338/events | https://github.com/huggingface/datasets/pull/6338 | 1,956,886,072 | PR_kwDODunzps5dg_sb | 6,338 | pin fsspec before it switches to glob.glob | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-23T10:50:54 | 2024-01-11T06:32:56 | 2023-10-23T10:51:52 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6338",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6338"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6338/timeline | null | null | true | 0.016111 |
https://api.github.com/repos/huggingface/datasets/issues/6337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6337/comments | https://api.github.com/repos/huggingface/datasets/issues/6337/events | https://github.com/huggingface/datasets/pull/6337 | 1,956,875,259 | PR_kwDODunzps5dg9Uu | 6,337 | Pin supported upper version of fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2023-10-23T10:44:16 | 2023-10-23T12:13:20 | 2023-10-23T12:04:36 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6337.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6337",
"merged_at": "2023-10-23T12:04:36",
"patch_url": "https://github.com/huggingface/datasets/pull/6337.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6337"
} | Pin upper version of `fsspec` to avoid disruptions introduced by breaking changes (and the need of urgent patch releases with hotfixes) on each release on their side. See:
- #6331
- #6210
- #5731
- #5617
- #5447
I propose that we explicitly test, introduce fixes and support each new `fsspec` version release.
CC: @LysandreJik | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6337/timeline | null | null | true | 1.338889 |
https://api.github.com/repos/huggingface/datasets/issues/6336 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6336/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6336/comments | https://api.github.com/repos/huggingface/datasets/issues/6336/events | https://github.com/huggingface/datasets/pull/6336 | 1,956,827,232 | PR_kwDODunzps5dgy0w | 6,336 | unpin-fsspec | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T10:16:46 | 2024-02-07T12:41:35 | 2023-10-23T10:17:48 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6336.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6336",
"merged_at": "2023-10-23T10:17:48",
"patch_url": "https://github.com/huggingface/datasets/pull/6336.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6336"
} | Close #6333. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6336/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6336/timeline | null | null | true | 0.017222 |
https://api.github.com/repos/huggingface/datasets/issues/6335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6335/comments | https://api.github.com/repos/huggingface/datasets/issues/6335/events | https://github.com/huggingface/datasets/pull/6335 | 1,956,740,818 | PR_kwDODunzps5dggIV | 6,335 | Support fsspec 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2023-10-23T09:29:17 | 2024-01-11T06:33:35 | 2023-11-14T14:17:40 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6335",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6335"
} | Fix #6333. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6335/timeline | null | null | true | 532.806389 |
https://api.github.com/repos/huggingface/datasets/issues/6334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6334/comments | https://api.github.com/repos/huggingface/datasets/issues/6334/events | https://github.com/huggingface/datasets/pull/6334 | 1,956,719,774 | PR_kwDODunzps5dgbpR | 6,334 | datasets.filesystems: fix is_remote_filesystems | {
"avatar_url": "https://avatars.githubusercontent.com/u/1463443?v=4",
"events_url": "https://api.github.com/users/ap--/events{/privacy}",
"followers_url": "https://api.github.com/users/ap--/followers",
"following_url": "https://api.github.com/users/ap--/following{/other_user}",
"gists_url": "https://api.github.com/users/ap--/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ap--",
"id": 1463443,
"login": "ap--",
"node_id": "MDQ6VXNlcjE0NjM0NDM=",
"organizations_url": "https://api.github.com/users/ap--/orgs",
"received_events_url": "https://api.github.com/users/ap--/received_events",
"repos_url": "https://api.github.com/users/ap--/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ap--/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ap--/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ap--",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T09:17:54 | 2024-02-07T12:41:15 | 2023-10-23T10:14:10 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6334",
"merged_at": "2023-10-23T10:14:10",
"patch_url": "https://github.com/huggingface/datasets/pull/6334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6334"
} | Close #6330, close #6333.
`fsspec.implementations.LocalFilesystem.protocol`
was changed from `str` "file" to `tuple[str,...]` ("file", "local") in `fsspec>=2023.10.0`
This commit supports both styles. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6334/timeline | null | null | true | 0.937778 |
https://api.github.com/repos/huggingface/datasets/issues/6333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6333/comments | https://api.github.com/repos/huggingface/datasets/issues/6333/events | https://github.com/huggingface/datasets/issues/6333 | 1,956,714,423 | I_kwDODunzps50oRe3 | 6,333 | Support fsspec 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 4 | 2023-10-23T09:14:53 | 2024-02-07T12:39:58 | 2024-02-07T12:39:58 | MEMBER | null | null | null | Once root issue is fixed, remove temporary pin of fsspec < 2023.10.0 introduced by:
- #6331
Related to issue:
- #6330
As @ZachNagengast suggested, the issue might be related to:
- https://github.com/fsspec/filesystem_spec/pull/1381 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6333/timeline | null | completed | false | 2,571.418056 |
https://api.github.com/repos/huggingface/datasets/issues/6332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6332/comments | https://api.github.com/repos/huggingface/datasets/issues/6332/events | https://github.com/huggingface/datasets/pull/6332 | 1,956,697,328 | PR_kwDODunzps5dgW3w | 6,332 | Replace deprecated license_file in setup.cfg | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-23T09:05:26 | 2023-11-07T08:23:10 | 2023-11-07T08:09:06 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6332",
"merged_at": "2023-11-07T08:09:06",
"patch_url": "https://github.com/huggingface/datasets/pull/6332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6332"
} | Replace deprecated license_file in `setup.cfg`.
See: https://github.com/huggingface/datasets/actions/runs/6610930650/job/17953825724?pr=6331
```
/tmp/pip-build-env-a51hls20/overlay/lib/python3.8/site-packages/setuptools/config/setupcfg.py:293: _DeprecatedConfig: Deprecated config in `setup.cfg`
!!
********************************************************************************
The license_file parameter is deprecated, use license_files instead.
By 2023-Oct-30, you need to update your project and remove deprecated calls
or your builds will no longer be supported.
See https://setuptools.pypa.io/en/latest/userguide/declarative_config.html for details.
********************************************************************************
!!
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6332/timeline | null | null | true | 359.061111 |
https://api.github.com/repos/huggingface/datasets/issues/6331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6331/comments | https://api.github.com/repos/huggingface/datasets/issues/6331/events | https://github.com/huggingface/datasets/pull/6331 | 1,956,671,256 | PR_kwDODunzps5dgRQt | 6,331 | Temporarily pin fsspec < 2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-23T08:51:50 | 2023-10-23T09:26:42 | 2023-10-23T09:17:55 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6331",
"merged_at": "2023-10-23T09:17:55",
"patch_url": "https://github.com/huggingface/datasets/pull/6331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6331"
} | Temporarily pin fsspec < 2023.10.0 until permanent solution is found.
Hot fix #6330.
See: https://github.com/huggingface/datasets/actions/runs/6610904287/job/17953774987
```
...
ERROR tests/test_iterable_dataset.py::test_iterable_dataset_from_file - NotImplementedError: Loading a dataset cached in a LocalFileSystem is not supported.
= 373 failed, 2055 passed, 17 skipped, 8 warnings, 6 errors in 228.14s (0:03:48) =
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6331/timeline | null | null | true | 0.434722 |
https://api.github.com/repos/huggingface/datasets/issues/6330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6330/comments | https://api.github.com/repos/huggingface/datasets/issues/6330/events | https://github.com/huggingface/datasets/issues/6330 | 1,956,053,294 | I_kwDODunzps50lwEu | 6,330 | Latest fsspec==2023.10.0 issue with streaming datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 8 | 2023-10-22T20:57:10 | 2024-05-08T00:18:39 | 2023-10-23T09:17:56 | CONTRIBUTOR | null | null | null | ### Describe the bug
Loading a streaming dataset with this version of fsspec fails with the following error:
`NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.`
I suspect the issue is with this PR
https://github.com/fsspec/filesystem_spec/pull/1381
### Steps to reproduce the bug
1. Upgrade fsspec to version `2023.10.0`
2. Attempt to load a streaming dataset e.g. `load_dataset("laion/gpt4v-emotion-dataset", split="train", streaming=True)`
3. Observe the following exception:
```
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/load.py", line 2146, in load_dataset
return builder_instance.as_streaming_dataset(split=split)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/hostedtoolcache/Python/3.11.6/x64/lib/python3.11/site-packages/datasets/builder.py", line 1318, in as_streaming_dataset
raise NotImplementedError(
NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet.
```
### Expected behavior
Should stream the dataset as normal.
### Environment info
datasets@main
fsspec==2023.10.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6330/timeline | null | completed | false | 12.346111 |
https://api.github.com/repos/huggingface/datasets/issues/6329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6329/comments | https://api.github.com/repos/huggingface/datasets/issues/6329/events | https://github.com/huggingface/datasets/issues/6329 | 1,955,858,020 | I_kwDODunzps50lAZk | 6,329 | شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shabnam706",
"id": 147399213,
"login": "shabnam706",
"node_id": "U_kgDOCMkiLQ",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shabnam706",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 0 | 2023-10-22T11:07:46 | 2023-10-23T09:22:58 | 2023-10-23T09:22:58 | NONE | null | null | null | شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6329/timeline | null | completed | false | 22.253333 |
https://api.github.com/repos/huggingface/datasets/issues/6328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6328/comments | https://api.github.com/repos/huggingface/datasets/issues/6328/events | https://github.com/huggingface/datasets/issues/6328 | 1,955,857,904 | I_kwDODunzps50lAXw | 6,328 | شبکه های متن به گفتار ابتدا متن داده شده را به بازنمایی میانی | {
"avatar_url": "https://avatars.githubusercontent.com/u/147399213?v=4",
"events_url": "https://api.github.com/users/shabnam706/events{/privacy}",
"followers_url": "https://api.github.com/users/shabnam706/followers",
"following_url": "https://api.github.com/users/shabnam706/following{/other_user}",
"gists_url": "https://api.github.com/users/shabnam706/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/shabnam706",
"id": 147399213,
"login": "shabnam706",
"node_id": "U_kgDOCMkiLQ",
"organizations_url": "https://api.github.com/users/shabnam706/orgs",
"received_events_url": "https://api.github.com/users/shabnam706/received_events",
"repos_url": "https://api.github.com/users/shabnam706/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/shabnam706/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shabnam706/subscriptions",
"type": "User",
"url": "https://api.github.com/users/shabnam706",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-22T11:07:21 | 2023-10-23T09:22:38 | 2023-10-23T09:22:38 | NONE | null | null | null | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6328/timeline | null | completed | false | 22.254722 |
https://api.github.com/repos/huggingface/datasets/issues/6327 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6327/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6327/comments | https://api.github.com/repos/huggingface/datasets/issues/6327/events | https://github.com/huggingface/datasets/issues/6327 | 1,955,470,755 | I_kwDODunzps50jh2j | 6,327 | FileNotFoundError when trying to load the downloaded dataset with `load_dataset(..., streaming=True)` | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yzhangcs",
"id": 18402347,
"login": "yzhangcs",
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yzhangcs",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-21T12:27:03 | 2023-10-23T18:50:07 | 2023-10-23T18:50:07 | CONTRIBUTOR | null | null | null | ### Describe the bug
Hi, I'm trying to load the dataset `togethercomputer/RedPajama-Data-1T-Sample` with `load_dataset` in streaming mode, i.e., `streaming=True`, but `FileNotFoundError` occurs.
### Steps to reproduce the bug
I've downloaded the dataset and save it to the cache dir in advance. My hope is loading the files in offline environment and without taking too much hours to prepross the entire data before running into the training process.
So I try the following code to load the files streamingly
```py
dataset = load_dataset('togethercomputer/RedPajama-Data-1T-Sample', streaming=True)
print(next(iter(dataset['train'])))
```
Sadly, it raises the following:
```
FileNotFoundError: [Errno 2] No such file or directory: 'CURRENT_CODE_PATH/arxiv_sample.jsonl'
```
I've noticed that the dataset can be properly found in the begining
```
Using the latest cached version of the module from /root/.cache/huggingface/modules/datasets_modules/datasets/togethercomputer--RedPajama-Data-1T-Sample/6ea3bc8ec2e84ec6d2df1930942e9028ace8c5b9d9143823cf911c50bbd92039 (last modified on Sat Oct 21 20:12:57 2023) since it couldn't be found locally at togethercomputer/RedPajama-Data-1T-Sample., or remotely on the Hugging Face Hub.
```
But it seems that the paths couldn't be properly parsed when loading iteratively.
How should I fix this error. I've tried specifying `data_files` or `data_dir` as `.../arxiv_sample.jsonl` but none of them works.
Thanks.
### Expected behavior
Properly load the dataset.
### Environment info
`datasets==2.14.5` | {
"avatar_url": "https://avatars.githubusercontent.com/u/18402347?v=4",
"events_url": "https://api.github.com/users/yzhangcs/events{/privacy}",
"followers_url": "https://api.github.com/users/yzhangcs/followers",
"following_url": "https://api.github.com/users/yzhangcs/following{/other_user}",
"gists_url": "https://api.github.com/users/yzhangcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yzhangcs",
"id": 18402347,
"login": "yzhangcs",
"node_id": "MDQ6VXNlcjE4NDAyMzQ3",
"organizations_url": "https://api.github.com/users/yzhangcs/orgs",
"received_events_url": "https://api.github.com/users/yzhangcs/received_events",
"repos_url": "https://api.github.com/users/yzhangcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yzhangcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yzhangcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yzhangcs",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6327/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6327/timeline | null | completed | false | 54.384444 |
https://api.github.com/repos/huggingface/datasets/issues/6326 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6326/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6326/comments | https://api.github.com/repos/huggingface/datasets/issues/6326/events | https://github.com/huggingface/datasets/pull/6326 | 1,955,420,536 | PR_kwDODunzps5dcSRa | 6,326 | Create battery_analysis.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitkm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinitkm",
"id": 130216732,
"login": "vinitkm",
"node_id": "U_kgDOB8LzHA",
"organizations_url": "https://api.github.com/users/vinitkm/orgs",
"received_events_url": "https://api.github.com/users/vinitkm/received_events",
"repos_url": "https://api.github.com/users/vinitkm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitkm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinitkm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 0 | 2023-10-21T10:07:48 | 2023-10-23T14:56:20 | 2023-10-23T14:56:20 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6326.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6326",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6326.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6326"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6326/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6326/timeline | null | null | true | 52.808889 |
https://api.github.com/repos/huggingface/datasets/issues/6325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6325/comments | https://api.github.com/repos/huggingface/datasets/issues/6325/events | https://github.com/huggingface/datasets/pull/6325 | 1,955,420,178 | PR_kwDODunzps5dcSM3 | 6,325 | Create battery_analysis.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/130216732?v=4",
"events_url": "https://api.github.com/users/vinitkm/events{/privacy}",
"followers_url": "https://api.github.com/users/vinitkm/followers",
"following_url": "https://api.github.com/users/vinitkm/following{/other_user}",
"gists_url": "https://api.github.com/users/vinitkm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinitkm",
"id": 130216732,
"login": "vinitkm",
"node_id": "U_kgDOB8LzHA",
"organizations_url": "https://api.github.com/users/vinitkm/orgs",
"received_events_url": "https://api.github.com/users/vinitkm/received_events",
"repos_url": "https://api.github.com/users/vinitkm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinitkm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinitkm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinitkm",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 0 | 2023-10-21T10:06:37 | 2023-10-23T14:55:58 | 2023-10-23T14:55:58 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6325",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6325"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6325/timeline | null | null | true | 52.8225 |
https://api.github.com/repos/huggingface/datasets/issues/6324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6324/comments | https://api.github.com/repos/huggingface/datasets/issues/6324/events | https://github.com/huggingface/datasets/issues/6324 | 1,955,126,687 | I_kwDODunzps50iN2f | 6,324 | Conversion to Arrow fails due to wrong type heuristic | {
"avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4",
"events_url": "https://api.github.com/users/jphme/events{/privacy}",
"followers_url": "https://api.github.com/users/jphme/followers",
"following_url": "https://api.github.com/users/jphme/following{/other_user}",
"gists_url": "https://api.github.com/users/jphme/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jphme",
"id": 2862336,
"login": "jphme",
"node_id": "MDQ6VXNlcjI4NjIzMzY=",
"organizations_url": "https://api.github.com/users/jphme/orgs",
"received_events_url": "https://api.github.com/users/jphme/received_events",
"repos_url": "https://api.github.com/users/jphme/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jphme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jphme/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jphme",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-20T23:20:58 | 2023-10-23T20:52:57 | 2023-10-23T20:52:57 | NONE | null | null | null | ### Describe the bug
I have a list of dictionaries with valid/JSON-serializable values.
One key is the denominator for a paragraph. In 99.9% of cases its a number, but there are some occurences of '1a', '2b' and so on.
If trying to convert this list to a dataset with `Dataset.from_list()`, I always get
`ArrowInvalid: Could not convert '1' with type str: tried to convert to int64`, presumably because pyarrow tries to convert the keys to integers.
Is there any way to circumvent this and fix dtypes? I didn't find anything in the documentation.
### Steps to reproduce the bug
* create a list of dicts with one key being a string of an integer for the first few thousand occurences and try to convert to dataset.
### Expected behavior
There shouldn't be an error (e.g. some flag to turn off automatic str to numeric conversion).
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.35
- Python version: 3.9.18
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2862336?v=4",
"events_url": "https://api.github.com/users/jphme/events{/privacy}",
"followers_url": "https://api.github.com/users/jphme/followers",
"following_url": "https://api.github.com/users/jphme/following{/other_user}",
"gists_url": "https://api.github.com/users/jphme/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jphme",
"id": 2862336,
"login": "jphme",
"node_id": "MDQ6VXNlcjI4NjIzMzY=",
"organizations_url": "https://api.github.com/users/jphme/orgs",
"received_events_url": "https://api.github.com/users/jphme/received_events",
"repos_url": "https://api.github.com/users/jphme/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jphme/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jphme/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jphme",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6324/timeline | null | completed | false | 69.533056 |
https://api.github.com/repos/huggingface/datasets/issues/6323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6323/comments | https://api.github.com/repos/huggingface/datasets/issues/6323/events | https://github.com/huggingface/datasets/issues/6323 | 1,954,245,980 | I_kwDODunzps50e21c | 6,323 | Loading dataset from large GCS bucket very slow since 2.14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6209990?v=4",
"events_url": "https://api.github.com/users/jbcdnr/events{/privacy}",
"followers_url": "https://api.github.com/users/jbcdnr/followers",
"following_url": "https://api.github.com/users/jbcdnr/following{/other_user}",
"gists_url": "https://api.github.com/users/jbcdnr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jbcdnr",
"id": 6209990,
"login": "jbcdnr",
"node_id": "MDQ6VXNlcjYyMDk5OTA=",
"organizations_url": "https://api.github.com/users/jbcdnr/orgs",
"received_events_url": "https://api.github.com/users/jbcdnr/received_events",
"repos_url": "https://api.github.com/users/jbcdnr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jbcdnr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jbcdnr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jbcdnr",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2023-10-20T12:59:55 | 2024-09-03T18:42:33 | null | NONE | null | null | null | ### Describe the bug
Since updating to >2.14 we have very slow access to our parquet files on GCS when loading a dataset (>30 min vs 3s). Our GCS bucket has many objects and resolving globs is very slow. I could track down the problem to this change:
https://github.com/huggingface/datasets/blame/bade7af74437347a760830466eb74f7a8ce0d799/src/datasets/data_files.py#L348
The underlying implementation with gcsfs is really slow. Could you go back to the old way if we are simply giving the parquet files and no glob pattern?
Thank you.
### Steps to reproduce the bug
Load a dataset from a GCS bucket that has many files.
### Expected behavior
Used to be fast (3s) in 2.13
### Environment info
datasets==2.14.5 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6323/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6322/comments | https://api.github.com/repos/huggingface/datasets/issues/6322/events | https://github.com/huggingface/datasets/pull/6322 | 1,952,947,461 | PR_kwDODunzps5dT5vG | 6,322 | Fix regex `get_data_files` formatting for base paths | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-19T19:45:10 | 2023-10-23T14:40:45 | 2023-10-23T14:31:21 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6322",
"merged_at": "2023-10-23T14:31:21",
"patch_url": "https://github.com/huggingface/datasets/pull/6322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6322"
} | With this pr https://github.com/huggingface/datasets/pull/6309, it is formatting the entire base path into regex, which results in the undesired formatting error `doesn't match the pattern` because of the line in `glob_pattern_to_regex`: `.replace("//", "/")`:
- Input: `hf://datasets/...`
- Output: `hf:/datasets/...`
This fix will only convert the `split_pattern` to regex and keep the `base_path` unchanged.
cc @albertvillanova hopefully this still works with your implementation | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6322/timeline | null | null | true | 90.769722 |
https://api.github.com/repos/huggingface/datasets/issues/6321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6321/comments | https://api.github.com/repos/huggingface/datasets/issues/6321/events | https://github.com/huggingface/datasets/pull/6321 | 1,952,643,483 | PR_kwDODunzps5dS3Mc | 6,321 | Fix typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/3097956?v=4",
"events_url": "https://api.github.com/users/python273/events{/privacy}",
"followers_url": "https://api.github.com/users/python273/followers",
"following_url": "https://api.github.com/users/python273/following{/other_user}",
"gists_url": "https://api.github.com/users/python273/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/python273",
"id": 3097956,
"login": "python273",
"node_id": "MDQ6VXNlcjMwOTc5NTY=",
"organizations_url": "https://api.github.com/users/python273/orgs",
"received_events_url": "https://api.github.com/users/python273/received_events",
"repos_url": "https://api.github.com/users/python273/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/python273/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/python273/subscriptions",
"type": "User",
"url": "https://api.github.com/users/python273",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-19T16:24:35 | 2023-10-19T17:18:00 | 2023-10-19T17:07:35 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6321",
"merged_at": "2023-10-19T17:07:35",
"patch_url": "https://github.com/huggingface/datasets/pull/6321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6321"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6321/timeline | null | null | true | 0.716667 |
https://api.github.com/repos/huggingface/datasets/issues/6320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6320/comments | https://api.github.com/repos/huggingface/datasets/issues/6320/events | https://github.com/huggingface/datasets/issues/6320 | 1,952,618,316 | I_kwDODunzps50YpdM | 6,320 | Dataset slice splits can't load training and validation at the same time | {
"avatar_url": "https://avatars.githubusercontent.com/u/32488097?v=4",
"events_url": "https://api.github.com/users/timlac/events{/privacy}",
"followers_url": "https://api.github.com/users/timlac/followers",
"following_url": "https://api.github.com/users/timlac/following{/other_user}",
"gists_url": "https://api.github.com/users/timlac/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timlac",
"id": 32488097,
"login": "timlac",
"node_id": "MDQ6VXNlcjMyNDg4MDk3",
"organizations_url": "https://api.github.com/users/timlac/orgs",
"received_events_url": "https://api.github.com/users/timlac/received_events",
"repos_url": "https://api.github.com/users/timlac/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timlac/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timlac/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timlac",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-19T16:09:22 | 2023-11-30T16:21:15 | 2023-11-30T16:21:15 | NONE | null | null | null | ### Describe the bug
According to the [documentation](https://huggingface.co/docs/datasets/v2.14.5/loading#slice-splits) is should be possible to run the following command:
`train_test_ds = datasets.load_dataset("bookcorpus", split="train+test")`
to load the train and test sets from the dataset.
However executing the equivalent code:
`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`
only yields the following output:
> Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 54175
> })
Where loading the dataset without the split argument yields:
> DatasetDict({
> train: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 51094
> })
> validation: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 6798
> })
> test: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 3081
> })
> })
Thus, the API seems to be broken in this regard.
This is a bit annoying since I want to be able to use the split argument with `split="train[:10%]+test[:10%]"` to have smaller dataset to work with when validating my model is working correctly.
### Steps to reproduce the bug
`speech_commands_v1 = load_dataset("superb", "ks", split="train+test")`
### Expected behavior
> DatasetDict({
> train: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 51094
> })
> test: Dataset({
> features: ['file', 'audio', 'label'],
> num_rows: 3081
> })
> })
### Environment info
```
import datasets
print(datasets.__version__)
```
> 2.14.5
```
import sys
print(sys.version)
```
> 3.9.17 (main, Jul 5 2023, 20:41:20)
> [GCC 11.2.0] | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6320/timeline | null | completed | false | 1,008.198056 |
https://api.github.com/repos/huggingface/datasets/issues/6319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6319/comments | https://api.github.com/repos/huggingface/datasets/issues/6319/events | https://github.com/huggingface/datasets/issues/6319 | 1,952,101,717 | I_kwDODunzps50WrVV | 6,319 | Datasets.map is severely broken | {
"avatar_url": "https://avatars.githubusercontent.com/u/4603365?v=4",
"events_url": "https://api.github.com/users/phalexo/events{/privacy}",
"followers_url": "https://api.github.com/users/phalexo/followers",
"following_url": "https://api.github.com/users/phalexo/following{/other_user}",
"gists_url": "https://api.github.com/users/phalexo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phalexo",
"id": 4603365,
"login": "phalexo",
"node_id": "MDQ6VXNlcjQ2MDMzNjU=",
"organizations_url": "https://api.github.com/users/phalexo/orgs",
"received_events_url": "https://api.github.com/users/phalexo/received_events",
"repos_url": "https://api.github.com/users/phalexo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phalexo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phalexo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phalexo",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 15 | 2023-10-19T12:19:33 | 2024-08-08T17:05:08 | null | NONE | null | null | null | ### Describe the bug
Regardless of how many cores I used, I have 16 or 32 threads, map slows down to a crawl at around 80% done, lingers maybe until 97% extremely slowly and NEVER finishes the job. It just hangs.
After watching this for 27 hours I control-C out of it. Until the end one process appears to be doing something, but it never ends.
I saw some comments about fast tokenizers using Rust and all and tried different variations. NOTHING works.
### Steps to reproduce the bug
Running it without breaking the dataset into parts results in the same behavior. The loop was an attempt to see if this was a RAM issue.
for idx in range(100):
dataset = load_dataset("togethercomputer/RedPajama-Data-1T-Sample", cache_dir=cache_dir, split=f'train[{idx}%:{idx+1}%]')
dataset = dataset.map(partial(tokenize_fn, tokenizer), batched=False, num_proc=1, remove_columns=["text", "meta"])
dataset.save_to_disk(training_args.cache_dir + f"/training_data_{idx}")
### Expected behavior
I expect map to run at more or less the same speed it starts with and FINISH its processing.
### Environment info
Python 3.8, same with 3.10 makes no difference.
Ubuntu 20.04, | null | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6319/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6318/comments | https://api.github.com/repos/huggingface/datasets/issues/6318/events | https://github.com/huggingface/datasets/pull/6318 | 1,952,100,706 | PR_kwDODunzps5dRC9V | 6,318 | Deterministic set hash | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-19T12:19:13 | 2023-10-19T16:27:20 | 2023-10-19T16:16:31 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6318",
"merged_at": "2023-10-19T16:16:31",
"patch_url": "https://github.com/huggingface/datasets/pull/6318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6318"
} | Sort the items in a set according to their `datasets.fingerprint.Hasher.hash` hash to get a deterministic hash of sets.
This is useful to get deterministic hashes of tokenizers that use a trie based on python sets.
reported in https://github.com/huggingface/datasets/issues/3847 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6318/timeline | null | null | true | 3.955 |
https://api.github.com/repos/huggingface/datasets/issues/6317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6317/comments | https://api.github.com/repos/huggingface/datasets/issues/6317/events | https://github.com/huggingface/datasets/issues/6317 | 1,951,965,668 | I_kwDODunzps50WKHk | 6,317 | sentiment140 dataset unavailable | {
"avatar_url": "https://avatars.githubusercontent.com/u/52670382?v=4",
"events_url": "https://api.github.com/users/AndreasKarasenko/events{/privacy}",
"followers_url": "https://api.github.com/users/AndreasKarasenko/followers",
"following_url": "https://api.github.com/users/AndreasKarasenko/following{/other_user}",
"gists_url": "https://api.github.com/users/AndreasKarasenko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AndreasKarasenko",
"id": 52670382,
"login": "AndreasKarasenko",
"node_id": "MDQ6VXNlcjUyNjcwMzgy",
"organizations_url": "https://api.github.com/users/AndreasKarasenko/orgs",
"received_events_url": "https://api.github.com/users/AndreasKarasenko/received_events",
"repos_url": "https://api.github.com/users/AndreasKarasenko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AndreasKarasenko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AndreasKarasenko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AndreasKarasenko",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 2 | 2023-10-19T11:25:21 | 2023-10-19T13:04:56 | 2023-10-19T13:04:56 | NONE | null | null | null | ### Describe the bug
loading the dataset using load_dataset("sentiment140") returns the following error
ConnectionError: Couldn't reach http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip (error 403)
### Steps to reproduce the bug
Run the following code (version should not matter).
```
from datasets import load_dataset
data = load_dataset("sentiment140")
```
### Expected behavior
The dataset should be loaded just like any other.
The main issue is that it is no longer hosted by stanford. It is still available from a [Google Drive Link](https://docs.google.com/file/d/0B04GJPshIjmPRnZManQwWEdTZjg/edit).
### Environment info
- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6317/timeline | null | completed | false | 1.659722 |
https://api.github.com/repos/huggingface/datasets/issues/6316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6316/comments | https://api.github.com/repos/huggingface/datasets/issues/6316/events | https://github.com/huggingface/datasets/pull/6316 | 1,951,819,869 | PR_kwDODunzps5dQGpg | 6,316 | Fix loading Hub datasets with CSV metadata file | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-19T10:21:34 | 2023-10-20T06:23:21 | 2023-10-20T06:14:09 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6316",
"merged_at": "2023-10-20T06:14:09",
"patch_url": "https://github.com/huggingface/datasets/pull/6316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6316"
} | Currently, the reading of the metadata file infers the file extension (.jsonl or .csv) from the passed filename. However, downloaded files from the Hub don't have file extension. For example:
- the original file: `hf://datasets/__DUMMY_TRANSFORMERS_USER__/test-dataset-5916a4-16977085077831/metadata.jsonl`
- corresponds to the downloaded path: `/tmp/pytest-of-username/pytest-46/cache/datasets/downloads/9f5374dbb470f711f6b89d66a5eec1f19cc96324b26bcbebe29138bda6cb20e6`, which does not have extension
In the case where the metadata file does not have an extension, the reader assumes it is a JSONL file, thus the reported error when trying to read a CSV file as a JSONL one: `ArrowInvalid: JSON parse error: Invalid value. in row 0`
This behavior was introduced by:
- #4837
This PR extracts the metadata file extension from the original filename (instead of the downloaded one) and passes it as a parameter to the read_metadata function.
Fix #6315. | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6316/timeline | null | null | true | 19.876389 |
https://api.github.com/repos/huggingface/datasets/issues/6315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6315/comments | https://api.github.com/repos/huggingface/datasets/issues/6315/events | https://github.com/huggingface/datasets/issues/6315 | 1,951,800,819 | I_kwDODunzps50Vh3z | 6,315 | Hub datasets with CSV metadata raise ArrowInvalid: JSON parse error: Invalid value. in row 0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 0 | 2023-10-19T10:11:29 | 2023-10-20T06:14:10 | 2023-10-20T06:14:10 | MEMBER | null | null | null | When trying to load a Hub dataset that contains a CSV metadata file, it raises an `ArrowInvalid` error:
```
E pyarrow.lib.ArrowInvalid: JSON parse error: Invalid value. in row 0
pyarrow/error.pxi:100: ArrowInvalid
```
See: https://huggingface.co/datasets/lukarape/public_small_papers/discussions/1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6315/timeline | null | completed | false | 20.044722 |
https://api.github.com/repos/huggingface/datasets/issues/6314 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6314/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6314/comments | https://api.github.com/repos/huggingface/datasets/issues/6314/events | https://github.com/huggingface/datasets/pull/6314 | 1,951,684,763 | PR_kwDODunzps5dPo25 | 6,314 | Support creating new branch in push_to_hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 0 | 2023-10-19T09:12:39 | 2023-10-19T09:20:06 | 2023-10-19T09:19:48 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6314.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6314",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6314.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6314"
} | This adds support for creating a new branch when pushing a dataset to the hub. Tested both methods locally and branches are created. | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6314/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6314/timeline | null | null | true | 0.119167 |
https://api.github.com/repos/huggingface/datasets/issues/6313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6313/comments | https://api.github.com/repos/huggingface/datasets/issues/6313/events | https://github.com/huggingface/datasets/pull/6313 | 1,951,527,712 | PR_kwDODunzps5dPGmL | 6,313 | Fix commit message formatting in multi-commit uploads | {
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/qgallouedec",
"id": 45557362,
"login": "qgallouedec",
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"type": "User",
"url": "https://api.github.com/users/qgallouedec",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-19T07:53:56 | 2023-10-20T14:06:13 | 2023-10-20T13:57:39 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6313",
"merged_at": "2023-10-20T13:57:38",
"patch_url": "https://github.com/huggingface/datasets/pull/6313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6313"
} | Currently, the commit message keeps on adding:
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00000-of-00002) (part 00001-of-00002)`
Introduced in https://github.com/huggingface/datasets/pull/6269
This PR fixes this issue to have
- `Upload dataset (part 00000-of-00002)`
- `Upload dataset (part 00001-of-00002)` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6313/timeline | null | null | true | 30.061944 |
https://api.github.com/repos/huggingface/datasets/issues/6312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6312/comments | https://api.github.com/repos/huggingface/datasets/issues/6312/events | https://github.com/huggingface/datasets/pull/6312 | 1,950,128,416 | PR_kwDODunzps5dKWDF | 6,312 | docs: resolving namespace conflict, refactored variable | {
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smty2018",
"id": 74114936,
"login": "smty2018",
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"repos_url": "https://api.github.com/users/smty2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smty2018",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-18T16:10:59 | 2023-10-19T16:31:59 | 2023-10-19T16:23:07 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6312",
"merged_at": "2023-10-19T16:23:07",
"patch_url": "https://github.com/huggingface/datasets/pull/6312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6312"
} | In docs of about_arrow.md, in the below example code
![image](https://github.com/huggingface/datasets/assets/74114936/fc70e152-e15f-422e-949a-1c4c4c9aa116)
The variable name 'time' was being used in a way that could potentially lead to a namespace conflict with Python's built-in 'time' module. It is not a good convention and can lead to unintended variable shadowing for any user re-using the example code.
To ensure code clarity, and prevent potential naming conflicts renamed the variable 'time' to 'elapsed_time' in the example code. | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6312/timeline | null | null | true | 24.202222 |
https://api.github.com/repos/huggingface/datasets/issues/6311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6311/comments | https://api.github.com/repos/huggingface/datasets/issues/6311/events | https://github.com/huggingface/datasets/issues/6311 | 1,949,304,993 | I_kwDODunzps50MAih | 6,311 | cast_column to Sequence with length=4 occur exception raise in datasets/table.py:2146 | {
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"events_url": "https://api.github.com/users/neiblegy/events{/privacy}",
"followers_url": "https://api.github.com/users/neiblegy/followers",
"following_url": "https://api.github.com/users/neiblegy/following{/other_user}",
"gists_url": "https://api.github.com/users/neiblegy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/neiblegy",
"id": 16574677,
"login": "neiblegy",
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"organizations_url": "https://api.github.com/users/neiblegy/orgs",
"received_events_url": "https://api.github.com/users/neiblegy/received_events",
"repos_url": "https://api.github.com/users/neiblegy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/neiblegy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neiblegy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/neiblegy",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-18T09:38:05 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | NONE | null | null | null | ### Describe the bug
i load a dataset from local csv file which has 187383612 examples, then use `map` to generate new columns for test.
here is my code :
```
import os
from datasets import load_dataset
from datasets.features import Sequence, Value
def add_new_path(example):
example["ais_bbox"] = [100,100,200,200]
example["ais_image_path"] = os.path.join("images", example["image_path"]) if example["image_path"] else ""
return example
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1749/")
hf_ds = ais_dataset.map(add_new_path, batched=False, num_proc=32)
ds = hf_ds.cast_column("ais_bbox", Sequence(Value("int32"), length=4))
```
and the `cast_column` raise an exception
```
Casting the dataset: 3%|███▉
...
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column
return self.cast(features)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2055, in cast
dataset = dataset.map(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2329, in table_cast
return cast_table_to_schema(table, schema)
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2288, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/protoss.gao/.local/lib/python3.9/site-packages/datasets/table.py", line 2145, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
list<item: int64>
to
Sequence(feature=Value(dtype='int32', id=None), length=4, id=None)
```
i check the source code and make debug info:
in datasets/table.py:2092
```
2091 if feature.length > -1:
2092 if feature.length * len(array) == len(array.values):
2093 return pa.FixedSizeListArray.from_arrays(_c(array.values, feature.feature), feature.length)
2094 print(len(array))
2095 print(len(array.values))
```
my feature.length is 4. but feature.length * len(array) == len(array.values) is false.
print(len(array)) is 262
print(len(array.values)) is 4000
then I use "for item in array" to print each item then get 262 * [100,100,200,200]
and use "for item in array.values" to print each item and get 4000 int32 which are 1000 * [100,100,200,200]
i'm wondering the `chunk` in each `array.chunks`, the "chunk.values" may get all the chunks's value rather than single chunk? but i check the pyarrow's doc seems chunk.values is chunk's value not all.
### Steps to reproduce the bug
code provided above.
### Expected behavior
feature.length * len(array) == len(array.values) should be true. and there should not has Exception.
### Environment info
python3.9
x86_64
datasets: 2.14.4
pyarrow: 13.0.0 or 10.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6311/timeline | null | completed | false | 2,673.770833 |
https://api.github.com/repos/huggingface/datasets/issues/6310 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6310/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6310/comments | https://api.github.com/repos/huggingface/datasets/issues/6310/events | https://github.com/huggingface/datasets/pull/6310 | 1,947,457,988 | PR_kwDODunzps5dBPnY | 6,310 | Add return_file_name in load_dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4",
"events_url": "https://api.github.com/users/juliendenize/events{/privacy}",
"followers_url": "https://api.github.com/users/juliendenize/followers",
"following_url": "https://api.github.com/users/juliendenize/following{/other_user}",
"gists_url": "https://api.github.com/users/juliendenize/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/juliendenize",
"id": 40604584,
"login": "juliendenize",
"node_id": "MDQ6VXNlcjQwNjA0NTg0",
"organizations_url": "https://api.github.com/users/juliendenize/orgs",
"received_events_url": "https://api.github.com/users/juliendenize/received_events",
"repos_url": "https://api.github.com/users/juliendenize/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/juliendenize/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliendenize/subscriptions",
"type": "User",
"url": "https://api.github.com/users/juliendenize",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2023-10-17T13:36:57 | 2024-08-09T11:51:55 | 2024-07-31T13:56:50 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6310.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6310",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6310.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6310"
} | Proposition to fix #5806.
Added an optional parameter `return_file_name` in the dataset builder config. When set to `True`, the function will include the file name corresponding to the sample in the returned output.
There is a difference between arrow-based and folder-based datasets to return the file name:
- for arrow-based: a column is concatenated after the table is cast.
- for folder-based: `dataset.info.features` has the entry `file_name` and the original file name is passed to the `sample_metadata` dictionary.
The difference in behavior might be a concern, also I do not know whether the `file_name` should return the original file path or the downloaded one for folder-based datasets.
I added some tests for the datasets that already had a test file. | {
"avatar_url": "https://avatars.githubusercontent.com/u/40604584?v=4",
"events_url": "https://api.github.com/users/juliendenize/events{/privacy}",
"followers_url": "https://api.github.com/users/juliendenize/followers",
"following_url": "https://api.github.com/users/juliendenize/following{/other_user}",
"gists_url": "https://api.github.com/users/juliendenize/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/juliendenize",
"id": 40604584,
"login": "juliendenize",
"node_id": "MDQ6VXNlcjQwNjA0NTg0",
"organizations_url": "https://api.github.com/users/juliendenize/orgs",
"received_events_url": "https://api.github.com/users/juliendenize/received_events",
"repos_url": "https://api.github.com/users/juliendenize/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/juliendenize/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/juliendenize/subscriptions",
"type": "User",
"url": "https://api.github.com/users/juliendenize",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6310/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6310/timeline | null | null | true | 6,912.331389 |
https://api.github.com/repos/huggingface/datasets/issues/6309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6309/comments | https://api.github.com/repos/huggingface/datasets/issues/6309/events | https://github.com/huggingface/datasets/pull/6309 | 1,946,916,969 | PR_kwDODunzps5c_YcX | 6,309 | Fix get_data_patterns for directories with the word data twice | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 7 | 2023-10-17T09:00:39 | 2023-10-18T14:01:52 | 2023-10-18T13:50:35 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6309",
"merged_at": "2023-10-18T13:50:35",
"patch_url": "https://github.com/huggingface/datasets/pull/6309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6309"
} | Before the fix, `get_data_patterns` inferred wrongly the split name for paths with the word "data" twice:
- For the URL path: `hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train-00001-of-00009.parquet` (note the org name `piuba-bigdata/` ending with `data/`)
- The inferred split name was: `articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357/data/train` instead of `train`
This PR fixes this issue by passing the `base_path` (`hf://datasets/piuba-bigdata/articles_and_comments@f328d536425ae8fcac5d098c8408f437bffdd357`) to `_get_data_files_patterns` and prepending it to the regex split pattern (`data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*\\..*`).
Fix #6305.
Fix https://huggingface.co/datasets/piuba-bigdata/articles_and_comments/discussions/1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6309/timeline | null | null | true | 28.832222 |
https://api.github.com/repos/huggingface/datasets/issues/6308 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6308/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6308/comments | https://api.github.com/repos/huggingface/datasets/issues/6308/events | https://github.com/huggingface/datasets/issues/6308 | 1,946,810,625 | I_kwDODunzps50CfkB | 6,308 | module 'resource' has no attribute 'error' | {
"avatar_url": "https://avatars.githubusercontent.com/u/48009681?v=4",
"events_url": "https://api.github.com/users/NeoWang9999/events{/privacy}",
"followers_url": "https://api.github.com/users/NeoWang9999/followers",
"following_url": "https://api.github.com/users/NeoWang9999/following{/other_user}",
"gists_url": "https://api.github.com/users/NeoWang9999/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NeoWang9999",
"id": 48009681,
"login": "NeoWang9999",
"node_id": "MDQ6VXNlcjQ4MDA5Njgx",
"organizations_url": "https://api.github.com/users/NeoWang9999/orgs",
"received_events_url": "https://api.github.com/users/NeoWang9999/received_events",
"repos_url": "https://api.github.com/users/NeoWang9999/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NeoWang9999/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NeoWang9999/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NeoWang9999",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-17T08:08:54 | 2023-10-25T17:09:22 | 2023-10-25T17:09:22 | NONE | null | null | null | ### Describe the bug
just run import:
`from datasets import load_dataset`
and then:
```
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\__init__.py", line 22, in <module>
from .arrow_dataset import Dataset
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow_dataset.py", line 66, in <module>
from .arrow_reader import ArrowReader
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\arrow_reader.py", line 30, in <module>
from .download.download_config import DownloadConfig
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\download\__init__.py", line 10, in <module>
from .streaming_download_manager import StreamingDownloadManager
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\download\streaming_download_manager.py", line 21, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\datasets\filesystems\__init__.py", line 8, in <module>
import fsspec.asyn
File "C:\ProgramData\anaconda3\envs\py310\lib\site-packages\fsspec\asyn.py", line 157, in <module>
ResourceEror = resource.error
AttributeError: module 'resource' has no attribute 'error'
Process finished with exit code 1
```
and the error codes are:
```
try:
import resource
except ImportError:
resource = None
ResourceError = OSError
else:
ResourceEror = resource.error
```
1. miss spelling : "ResourceEror " should be "ResourceErorr"
2. module 'resource' has no attribute 'error'
### Steps to reproduce the bug
only one step:
`from datasets import load_dataset`
### Expected behavior
slove error: module 'resource' has no attribute 'error'
### Environment info
python=3.10
datasets==2.14.5
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6308/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6308/timeline | null | completed | false | 201.007778 |
https://api.github.com/repos/huggingface/datasets/issues/6307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6307/comments | https://api.github.com/repos/huggingface/datasets/issues/6307/events | https://github.com/huggingface/datasets/pull/6307 | 1,946,414,808 | PR_kwDODunzps5c9s0j | 6,307 | Fix typo in code example in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4",
"events_url": "https://api.github.com/users/bryant1410/events{/privacy}",
"followers_url": "https://api.github.com/users/bryant1410/followers",
"following_url": "https://api.github.com/users/bryant1410/following{/other_user}",
"gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bryant1410",
"id": 3905501,
"login": "bryant1410",
"node_id": "MDQ6VXNlcjM5MDU1MDE=",
"organizations_url": "https://api.github.com/users/bryant1410/orgs",
"received_events_url": "https://api.github.com/users/bryant1410/received_events",
"repos_url": "https://api.github.com/users/bryant1410/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bryant1410",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-17T02:28:50 | 2023-10-17T12:59:26 | 2023-10-17T06:36:19 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6307",
"merged_at": "2023-10-17T06:36:18",
"patch_url": "https://github.com/huggingface/datasets/pull/6307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6307"
} | null | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6307/timeline | null | null | true | 4.124722 |
https://api.github.com/repos/huggingface/datasets/issues/6306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6306/comments | https://api.github.com/repos/huggingface/datasets/issues/6306/events | https://github.com/huggingface/datasets/issues/6306 | 1,946,363,452 | I_kwDODunzps50AyY8 | 6,306 | pyinstaller : OSError: could not get source code | {
"avatar_url": "https://avatars.githubusercontent.com/u/57702070?v=4",
"events_url": "https://api.github.com/users/dusk877647949/events{/privacy}",
"followers_url": "https://api.github.com/users/dusk877647949/followers",
"following_url": "https://api.github.com/users/dusk877647949/following{/other_user}",
"gists_url": "https://api.github.com/users/dusk877647949/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dusk877647949",
"id": 57702070,
"login": "dusk877647949",
"node_id": "MDQ6VXNlcjU3NzAyMDcw",
"organizations_url": "https://api.github.com/users/dusk877647949/orgs",
"received_events_url": "https://api.github.com/users/dusk877647949/received_events",
"repos_url": "https://api.github.com/users/dusk877647949/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dusk877647949/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dusk877647949/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dusk877647949",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2023-10-17T01:41:51 | 2023-11-02T07:24:51 | 2023-10-18T14:03:42 | NONE | null | null | null | ### Describe the bug
I ran a package with pyinstaller and got the following error:
### Steps to reproduce the bug
```
...
File "datasets\__init__.py", line 52, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "datasets\inspect.py", line 30, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "datasets\load.py", line 58, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 499, in exec_module
File "datasets\packaged_modules\__init__.py", line 31, in <module>
File "inspect.py", line 1147, in getsource
File "inspect.py", line 1129, in getsourcelines
File "inspect.py", line 958, in findsource
OSError: could not get source code
```
### Expected behavior
I have looked up the relevant information, but I can't find a suitable reason
### Environment info
```python
python 3.10
datasets 2.14.4
pyinstaller 5.6.2
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/57702070?v=4",
"events_url": "https://api.github.com/users/dusk877647949/events{/privacy}",
"followers_url": "https://api.github.com/users/dusk877647949/followers",
"following_url": "https://api.github.com/users/dusk877647949/following{/other_user}",
"gists_url": "https://api.github.com/users/dusk877647949/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dusk877647949",
"id": 57702070,
"login": "dusk877647949",
"node_id": "MDQ6VXNlcjU3NzAyMDcw",
"organizations_url": "https://api.github.com/users/dusk877647949/orgs",
"received_events_url": "https://api.github.com/users/dusk877647949/received_events",
"repos_url": "https://api.github.com/users/dusk877647949/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dusk877647949/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dusk877647949/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dusk877647949",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6306/timeline | null | completed | false | 36.364167 |
https://api.github.com/repos/huggingface/datasets/issues/6305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6305/comments | https://api.github.com/repos/huggingface/datasets/issues/6305/events | https://github.com/huggingface/datasets/issues/6305 | 1,946,010,912 | I_kwDODunzps5z_cUg | 6,305 | Cannot load dataset with `2.14.5`: `FileNotFound` error | {
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/finiteautomata",
"id": 167943,
"login": "finiteautomata",
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/finiteautomata",
"user_view_type": "public"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
}
] | null | 2 | 2023-10-16T20:11:27 | 2023-10-18T13:50:36 | 2023-10-18T13:50:36 | NONE | null | null | null | ### Describe the bug
I'm trying to load [piuba-bigdata/articles_and_comments] and I'm stumbling with this error on `2.14.5`. However, this works on `2.10.0`.
### Steps to reproduce the bug
[Colab link](https://colab.research.google.com/drive/1SAftFMQnFE708ikRnJJHIXZV7R5IBOCE#scrollTo=r2R2ipCCDmsg)
```python
Downloading readme: 100%
1.19k/1.19k [00:00<00:00, 30.9kB/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-2-807c3583d297>](https://localhost:8080/#) in <cell line: 3>()
1 from datasets import load_dataset
2
----> 3 load_dataset("piuba-bigdata/articles_and_comments", split="train")
2 frames
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)
2127
2128 # Create a dataset builder
-> 2129 builder_instance = load_dataset_builder(
2130 path=path,
2131 name=name,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, token, use_auth_token, storage_options, **config_kwargs)
1813 download_config = download_config.copy() if download_config else DownloadConfig()
1814 download_config.storage_options.update(storage_options)
-> 1815 dataset_module = dataset_module_factory(
1816 path,
1817 revision=revision,
[/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1506 raise e1 from None
1507 if isinstance(e1, FileNotFoundError):
-> 1508 raise FileNotFoundError(
1509 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1510 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
FileNotFoundError: Couldn't find a dataset script at /content/piuba-bigdata/articles_and_comments/articles_and_comments.py or any data file in the same directory. Couldn't find 'piuba-bigdata/articles_and_comments' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in piuba-bigdata/articles_and_comments.
```
### Expected behavior
It should load normally.
### Environment info
```
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6305/timeline | null | completed | false | 41.6525 |
https://api.github.com/repos/huggingface/datasets/issues/6304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6304/comments | https://api.github.com/repos/huggingface/datasets/issues/6304/events | https://github.com/huggingface/datasets/pull/6304 | 1,945,913,521 | PR_kwDODunzps5c7-4q | 6,304 | Update README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/74114936?v=4",
"events_url": "https://api.github.com/users/smty2018/events{/privacy}",
"followers_url": "https://api.github.com/users/smty2018/followers",
"following_url": "https://api.github.com/users/smty2018/following{/other_user}",
"gists_url": "https://api.github.com/users/smty2018/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/smty2018",
"id": 74114936,
"login": "smty2018",
"node_id": "MDQ6VXNlcjc0MTE0OTM2",
"organizations_url": "https://api.github.com/users/smty2018/orgs",
"received_events_url": "https://api.github.com/users/smty2018/received_events",
"repos_url": "https://api.github.com/users/smty2018/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/smty2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smty2018/subscriptions",
"type": "User",
"url": "https://api.github.com/users/smty2018",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-16T19:10:39 | 2023-10-17T15:13:37 | 2023-10-17T15:04:52 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6304",
"merged_at": "2023-10-17T15:04:52",
"patch_url": "https://github.com/huggingface/datasets/pull/6304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6304"
} | Fixed typos in ReadMe and added punctuation marks
Tensorflow --> TensorFlow
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6304/timeline | null | null | true | 19.903611 |
https://api.github.com/repos/huggingface/datasets/issues/6303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6303/comments | https://api.github.com/repos/huggingface/datasets/issues/6303/events | https://github.com/huggingface/datasets/issues/6303 | 1,943,466,532 | I_kwDODunzps5z1vIk | 6,303 | Parquet uploads off-by-one naming scheme | {
"avatar_url": "https://avatars.githubusercontent.com/u/1981179?v=4",
"events_url": "https://api.github.com/users/ZachNagengast/events{/privacy}",
"followers_url": "https://api.github.com/users/ZachNagengast/followers",
"following_url": "https://api.github.com/users/ZachNagengast/following{/other_user}",
"gists_url": "https://api.github.com/users/ZachNagengast/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZachNagengast",
"id": 1981179,
"login": "ZachNagengast",
"node_id": "MDQ6VXNlcjE5ODExNzk=",
"organizations_url": "https://api.github.com/users/ZachNagengast/orgs",
"received_events_url": "https://api.github.com/users/ZachNagengast/received_events",
"repos_url": "https://api.github.com/users/ZachNagengast/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZachNagengast/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZachNagengast/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZachNagengast",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 4 | 2023-10-14T18:31:03 | 2023-10-16T16:33:21 | null | CONTRIBUTOR | null | null | null | ### Describe the bug
I noticed this numbering scheme not matching up in a different project and wanted to raise it as an issue for discussion, what is the actual proper way to have these stored?
<img width="425" alt="image" src="https://github.com/huggingface/datasets/assets/1981179/3ffa2144-7c9a-446f-b521-a5e9db71e7ce">
The `-SSSSS-of-NNNNN` seems to be used widely across the codebase. The section that creates the part in my screenshot is here https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L5287
There are also some edits to this section in the single commit branch.
### Steps to reproduce the bug
1. Upload a dataset that requires at least two parquet files in it
2. Observe the naming scheme
### Expected behavior
The couple options here are of course **1. keeping it as is**
**2. Starting the index at 1:**
train-00001-of-00002-{hash}.parquet
train-00002-of-00002-{hash}.parquet
**3. My preferred option** (which would solve my specific issue), dropping the total entirely:
train-00000-{hash}.parquet
train-00001-{hash}.parquet
This also solves an issue that will occur with an `append` variable for `push_to_hub` (see https://github.com/huggingface/datasets/issues/6290) where as you add a new parquet file, you need to rename everything in the repo as well.
However, I know there are parts of the repo that use 0 as the starting file or may require the total, so raising the question for discussion.
### Environment info
- `datasets` version: 2.14.6.dev0
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.18.0
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6303/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6302 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6302/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6302/comments | https://api.github.com/repos/huggingface/datasets/issues/6302/events | https://github.com/huggingface/datasets/issues/6302 | 1,942,096,078 | I_kwDODunzps5zwgjO | 6,302 | ArrowWriter/ParquetWriter `write` method does not increase `_num_bytes` and hence datasets not sharding at `max_shard_size` | {
"avatar_url": "https://avatars.githubusercontent.com/u/2855550?v=4",
"events_url": "https://api.github.com/users/Rassibassi/events{/privacy}",
"followers_url": "https://api.github.com/users/Rassibassi/followers",
"following_url": "https://api.github.com/users/Rassibassi/following{/other_user}",
"gists_url": "https://api.github.com/users/Rassibassi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rassibassi",
"id": 2855550,
"login": "Rassibassi",
"node_id": "MDQ6VXNlcjI4NTU1NTA=",
"organizations_url": "https://api.github.com/users/Rassibassi/orgs",
"received_events_url": "https://api.github.com/users/Rassibassi/received_events",
"repos_url": "https://api.github.com/users/Rassibassi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rassibassi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rassibassi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rassibassi",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-13T14:43:36 | 2023-10-17T06:52:12 | 2023-10-17T06:52:11 | NONE | null | null | null | ### Describe the bug
An example from [1], does not work when limiting shards with `max_shard_size`.
Try the following example with low `max_shard_size`, such as:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
The reason for this is that, in line [2] `writer._num_bytes > max_shard_size` is never true, because the `write` method of `ArrowWriter` [3] does not increase `self._num_bytes`.
Such that respective Arrow/Parquet shards are only written to file based on the `writer_batch_size` or `config.DEFAULT_MAX_BATCH_SIZE`, but not based on `max_shard_size`.
[1] https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage
[2] https://github.com/huggingface/datasets/blob/3e8d420808718c9a1453a2e7ee3484ca12c9c70d/src/datasets/builder.py#L1677
[3] https://github.com/huggingface/datasets/blob/3e8d420808718c9a1453a2e7ee3484ca12c9c70d/src/datasets/arrow_writer.py#L459
### Steps to reproduce the bug
Get example from: https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage
Call `builder.download_and_prepare` with low `max_shard_size` such as `10MB`, e.g.:
```python
builder.download_and_prepare(output_dir, storage_options=storage_options, file_format="parquet", max_shard_size="10MB")
```
### Expected behavior
Shards should be written based on `max_shard_size` instead of batch size.
### Environment info
```
>>> import datasets
>>> datasets.__version__
'2.14.6.dev0
``` | {
"avatar_url": "https://avatars.githubusercontent.com/u/2855550?v=4",
"events_url": "https://api.github.com/users/Rassibassi/events{/privacy}",
"followers_url": "https://api.github.com/users/Rassibassi/followers",
"following_url": "https://api.github.com/users/Rassibassi/following{/other_user}",
"gists_url": "https://api.github.com/users/Rassibassi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rassibassi",
"id": 2855550,
"login": "Rassibassi",
"node_id": "MDQ6VXNlcjI4NTU1NTA=",
"organizations_url": "https://api.github.com/users/Rassibassi/orgs",
"received_events_url": "https://api.github.com/users/Rassibassi/received_events",
"repos_url": "https://api.github.com/users/Rassibassi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rassibassi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rassibassi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rassibassi",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6302/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6302/timeline | null | completed | false | 88.143056 |
https://api.github.com/repos/huggingface/datasets/issues/6301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6301/comments | https://api.github.com/repos/huggingface/datasets/issues/6301/events | https://github.com/huggingface/datasets/pull/6301 | 1,940,183,999 | PR_kwDODunzps5cpPVh | 6,301 | Unpin `tensorflow` maximum version | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-12T14:58:07 | 2023-10-12T15:58:20 | 2023-10-12T15:49:54 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6301",
"merged_at": "2023-10-12T15:49:54",
"patch_url": "https://github.com/huggingface/datasets/pull/6301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6301"
} | Removes the temporary pin introduced in #6264 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6301/timeline | null | null | true | 0.863056 |
https://api.github.com/repos/huggingface/datasets/issues/6300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6300/comments | https://api.github.com/repos/huggingface/datasets/issues/6300/events | https://github.com/huggingface/datasets/pull/6300 | 1,940,153,432 | PR_kwDODunzps5cpIoG | 6,300 | Unpin `jax` maximum version | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2023-10-12T14:42:40 | 2023-10-12T16:37:55 | 2023-10-12T16:28:57 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6300",
"merged_at": "2023-10-12T16:28:57",
"patch_url": "https://github.com/huggingface/datasets/pull/6300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6300"
} | fix #6299
fix #6202 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6300/timeline | null | null | true | 1.771389 |
https://api.github.com/repos/huggingface/datasets/issues/6299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6299/comments | https://api.github.com/repos/huggingface/datasets/issues/6299/events | https://github.com/huggingface/datasets/issues/6299 | 1,939,649,238 | I_kwDODunzps5znLLW | 6,299 | Support for newer versions of JAX | {
"avatar_url": "https://avatars.githubusercontent.com/u/25456859?v=4",
"events_url": "https://api.github.com/users/ddrous/events{/privacy}",
"followers_url": "https://api.github.com/users/ddrous/followers",
"following_url": "https://api.github.com/users/ddrous/following{/other_user}",
"gists_url": "https://api.github.com/users/ddrous/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ddrous",
"id": 25456859,
"login": "ddrous",
"node_id": "MDQ6VXNlcjI1NDU2ODU5",
"organizations_url": "https://api.github.com/users/ddrous/orgs",
"received_events_url": "https://api.github.com/users/ddrous/received_events",
"repos_url": "https://api.github.com/users/ddrous/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ddrous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ddrous/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ddrous",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 0 | 2023-10-12T10:03:46 | 2023-10-12T16:28:59 | 2023-10-12T16:28:59 | NONE | null | null | null | ### Feature request
Hi,
I like your idea of adapting the datasets library to be usable with JAX. Thank you for that.
However, in your [setup.py](https://github.com/huggingface/datasets/blob/main/setup.py), you enforce old versions of JAX <= 0.3... It is very cumbersome !
What is the rationale for such a limitation ? Can you remove it please ?
Thanks,
### Motivation
This library is unusable with new versions of JAX ?
### Your contribution
Yes. | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6299/timeline | null | completed | false | 6.420278 |
https://api.github.com/repos/huggingface/datasets/issues/6298 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6298/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6298/comments | https://api.github.com/repos/huggingface/datasets/issues/6298/events | https://github.com/huggingface/datasets/pull/6298 | 1,938,797,389 | PR_kwDODunzps5ckg6j | 6,298 | Doc readme improvements | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-11T21:51:12 | 2023-10-12T12:47:15 | 2023-10-12T12:38:19 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6298.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6298",
"merged_at": "2023-10-12T12:38:19",
"patch_url": "https://github.com/huggingface/datasets/pull/6298.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6298"
} | Changes in the doc READMe:
* adds two new sections (to be aligned with `transformers` and `hfh`): "Previewing the documentation" and "Writing documentation examples"
* replaces the mentions of `transformers` with `datasets`
* fixes some dead links | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6298/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6298/timeline | null | null | true | 14.785278 |
https://api.github.com/repos/huggingface/datasets/issues/6297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6297/comments | https://api.github.com/repos/huggingface/datasets/issues/6297/events | https://github.com/huggingface/datasets/pull/6297 | 1,938,752,707 | PR_kwDODunzps5ckXBa | 6,297 | Fix ArrayXD cast | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-11T21:14:59 | 2023-10-13T13:54:00 | 2023-10-13T13:45:30 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6297.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6297",
"merged_at": "2023-10-13T13:45:30",
"patch_url": "https://github.com/huggingface/datasets/pull/6297.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6297"
} | Fix #6291 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6297/timeline | null | null | true | 40.508611 |
https://api.github.com/repos/huggingface/datasets/issues/6296 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6296/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6296/comments | https://api.github.com/repos/huggingface/datasets/issues/6296/events | https://github.com/huggingface/datasets/pull/6296 | 1,938,453,845 | PR_kwDODunzps5cjUs1 | 6,296 | Move `exceptions.py` to `utils/exceptions.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2023-10-11T18:28:00 | 2024-09-03T16:00:04 | 2024-09-03T16:00:03 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6296.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6296",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6296.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6296"
} | I didn't notice the path while reviewing the PR yesterday :( | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6296/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6296/timeline | null | null | true | 7,869.534167 |
https://api.github.com/repos/huggingface/datasets/issues/6295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6295/comments | https://api.github.com/repos/huggingface/datasets/issues/6295/events | https://github.com/huggingface/datasets/pull/6295 | 1,937,362,102 | PR_kwDODunzps5cfiW8 | 6,295 | Fix parquet columns argument in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 3 | 2023-10-11T10:01:01 | 2023-10-11T16:30:24 | 2023-10-11T16:21:36 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6295",
"merged_at": "2023-10-11T16:21:36",
"patch_url": "https://github.com/huggingface/datasets/pull/6295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6295"
} | It was failing when there's a DatasetInfo with non-None info.features from the YAML (therefore containing columns that should be ignored)
Fix https://github.com/huggingface/datasets/issues/6293 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6295/timeline | null | null | true | 6.343056 |
https://api.github.com/repos/huggingface/datasets/issues/6294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6294/comments | https://api.github.com/repos/huggingface/datasets/issues/6294/events | https://github.com/huggingface/datasets/issues/6294 | 1,937,359,605 | I_kwDODunzps5zecL1 | 6,294 | IndexError: Invalid key is out of bounds for size 0 despite having a populated dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/61892155?v=4",
"events_url": "https://api.github.com/users/ZYM66/events{/privacy}",
"followers_url": "https://api.github.com/users/ZYM66/followers",
"following_url": "https://api.github.com/users/ZYM66/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYM66/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZYM66",
"id": 61892155,
"login": "ZYM66",
"node_id": "MDQ6VXNlcjYxODkyMTU1",
"organizations_url": "https://api.github.com/users/ZYM66/orgs",
"received_events_url": "https://api.github.com/users/ZYM66/received_events",
"repos_url": "https://api.github.com/users/ZYM66/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZYM66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYM66/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZYM66",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-11T09:59:38 | 2023-10-17T11:24:06 | 2023-10-17T11:24:06 | NONE | null | null | null | ### Describe the bug
I am encountering an `IndexError` when trying to access data from a DataLoader which wraps around a dataset I've loaded using the `datasets` library. The error suggests that the dataset size is `0`, but when I check the length and print the dataset, it's clear that it has `1166` entries.
### Steps to reproduce the bug
1. Load a dataset with `1166` entries.
2. Create a DataLoader using this dataset.
3. Try iterating over the DataLoader.
code:
```python
def get_train_dataloader(self) -> DataLoader:
if self.train_dataset is None:
raise ValueError("Trainer: training requires a train_dataset.")
train_dataset = self.train_dataset
data_collator = self.data_collator
print(len(train_dataset))
print(train_dataset)
if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):
train_dataset = self._remove_unused_columns(train_dataset, description="training")
else:
data_collator = self._get_collator_with_removed_columns(data_collator, description="training")
train_sampler = self._get_train_sampler()
dl = DataLoader(
train_dataset,
batch_size=self._train_batch_size,
sampler=train_sampler,
collate_fn=data_collator,
drop_last=self.args.dataloader_drop_last,
num_workers=self.args.dataloader_num_workers,
pin_memory=self.args.dataloader_pin_memory,
worker_init_fn=seed_worker,
)
print(dl)
print(len(dl))
for i in dl:
print(i)
break
return dl
```
output :
```
1166
Dataset({
features: ['input_ids', 'special_tokens_mask'],
num_rows: 1166
})
<torch.utils.data.dataloader.DataLoader object ...>
146
```
Error:
```
Traceback (most recent call last):
File "/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py", line 266, in <module>
train()
File "/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py", line 260, in train
trainer.train()
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py", line 1506, in train
return inner_training_loop(
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py", line 1520, in _inner_training_loop
train_dataloader = self.get_train_dataloader()
File "/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py", line 80, in get_train_dataloader
for i in dl:
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 674, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = self.dataset.__getitems__(possibly_batched_index)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2807, in __getitems__
batch = self.__getitem__(keys)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2787, in _getitem
pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 583, in query_table
_check_valid_index_key(key, size)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 536, in _check_valid_index_key
_check_valid_index_key(int(max(key)), size=size)
File "/root/miniconda3/envs/LLM/lib/python3.10/site-packages/datasets/formatting/formatting.py", line 526, in _check_valid_index_key
raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
IndexError: Invalid key: 1116 is out of bounds for size 0
```
### Expected behavior
I expect to be able to iterate over the DataLoader without encountering an IndexError since the dataset is populated.
### Environment info
- `datasets` library version: [2.14.5]
- Platform: [Linux]
- Python version: 3.10
- Other libraries involved: HuggingFace Transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/61892155?v=4",
"events_url": "https://api.github.com/users/ZYM66/events{/privacy}",
"followers_url": "https://api.github.com/users/ZYM66/followers",
"following_url": "https://api.github.com/users/ZYM66/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYM66/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ZYM66",
"id": 61892155,
"login": "ZYM66",
"node_id": "MDQ6VXNlcjYxODkyMTU1",
"organizations_url": "https://api.github.com/users/ZYM66/orgs",
"received_events_url": "https://api.github.com/users/ZYM66/received_events",
"repos_url": "https://api.github.com/users/ZYM66/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ZYM66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYM66/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ZYM66",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6294/timeline | null | completed | false | 145.407778 |
https://api.github.com/repos/huggingface/datasets/issues/6293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6293/comments | https://api.github.com/repos/huggingface/datasets/issues/6293/events | https://github.com/huggingface/datasets/issues/6293 | 1,937,238,047 | I_kwDODunzps5zd-gf | 6,293 | Choose columns to stream parquet data in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 0 | 2023-10-11T08:59:36 | 2023-10-11T16:21:38 | 2023-10-11T16:21:38 | MEMBER | null | null | null | Currently passing columns= to load_dataset in streaming mode fails
```
Tried to load parquet data with columns '['link']' with mismatching features '{'caption': Value(dtype='string', id=None), 'image': {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='null', id=None)}, 'link': Value(dtype='string', id=None), 'message_id': Value(dtype='string', id=None), 'timestamp': Value(dtype='string', id=None)}'
```
similar to https://github.com/huggingface/datasets/issues/6039
reported at https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65259a09617407d4520f4ad9 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6293/timeline | null | completed | false | 7.367222 |
https://api.github.com/repos/huggingface/datasets/issues/6292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6292/comments | https://api.github.com/repos/huggingface/datasets/issues/6292/events | https://github.com/huggingface/datasets/issues/6292 | 1,937,050,470 | I_kwDODunzps5zdQtm | 6,292 | how to load the image of dtype float32 or float64 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26437644?v=4",
"events_url": "https://api.github.com/users/wanglaofei/events{/privacy}",
"followers_url": "https://api.github.com/users/wanglaofei/followers",
"following_url": "https://api.github.com/users/wanglaofei/following{/other_user}",
"gists_url": "https://api.github.com/users/wanglaofei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/wanglaofei",
"id": 26437644,
"login": "wanglaofei",
"node_id": "MDQ6VXNlcjI2NDM3NjQ0",
"organizations_url": "https://api.github.com/users/wanglaofei/orgs",
"received_events_url": "https://api.github.com/users/wanglaofei/received_events",
"repos_url": "https://api.github.com/users/wanglaofei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/wanglaofei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wanglaofei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/wanglaofei",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 1 | 2023-10-11T07:27:16 | 2023-10-11T13:19:11 | null | NONE | null | null | null | _FEATURES = datasets.Features(
{
"image": datasets.Image(),
"text": datasets.Value("string"),
},
)
The datasets builder seems only support the unit8 data. How to load the float dtype data? | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6292/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6291/comments | https://api.github.com/repos/huggingface/datasets/issues/6291/events | https://github.com/huggingface/datasets/issues/6291 | 1,936,129,871 | I_kwDODunzps5zZv9P | 6,291 | Casting type from Array2D int to Array2D float crashes | {
"avatar_url": "https://avatars.githubusercontent.com/u/22567306?v=4",
"events_url": "https://api.github.com/users/AlanBlanchet/events{/privacy}",
"followers_url": "https://api.github.com/users/AlanBlanchet/followers",
"following_url": "https://api.github.com/users/AlanBlanchet/following{/other_user}",
"gists_url": "https://api.github.com/users/AlanBlanchet/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/AlanBlanchet",
"id": 22567306,
"login": "AlanBlanchet",
"node_id": "MDQ6VXNlcjIyNTY3MzA2",
"organizations_url": "https://api.github.com/users/AlanBlanchet/orgs",
"received_events_url": "https://api.github.com/users/AlanBlanchet/received_events",
"repos_url": "https://api.github.com/users/AlanBlanchet/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/AlanBlanchet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AlanBlanchet/subscriptions",
"type": "User",
"url": "https://api.github.com/users/AlanBlanchet",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-10T20:10:10 | 2023-10-13T13:45:31 | 2023-10-13T13:45:31 | NONE | null | null | null | ### Describe the bug
I am on a school project and the initial type for feature annotations are `Array2D(shape=(None, 4))`. I am trying to cast this type to a `float64` and pyarrow gives me this error :
```
Traceback (most recent call last):
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearning/src/sdd/data/dataset.py", line 141, in <module>
dataset = StanfordDogsDataset(size, 5).original(True).demo()
File "<attrs generated init __main__.StanfordDogsDataset>", line 4, in __init__
File "/home/alan/dev/ClassezDesImagesAvecDesAlgorithmesDeDeeplearning/src/sdd/data/dataset.py", line 33, in __attrs_post_init__
self.dataset = self.dataset.cast_column(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/fingerprint.py", line 511, in wrapper
out = func(dataset, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2110, in cast_column
return self.cast(features)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2055, in cast
dataset = dataset.map(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3474, in _map_single
batch = apply_function_on_filtered_inputs(
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3353, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2328, in table_cast
return cast_table_to_schema(table, schema)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2287, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2287, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 2143, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1833, in wrapper
return func(array, *args, **kwargs)
File "/home/alan/.cache/pypoetry/virtualenvs/sdd-2XWLAjSi-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1967, in array_cast
return pa_type.wrap_array(array)
File "pyarrow/types.pxi", line 1369, in pyarrow.lib.BaseExtensionType.wrap_array
TypeError: Incompatible storage type for extension<arrow.py_extension_type<Array2DExtensionType>>: expected list<item: list<item: double>>, got list<item: list<item: int32>>
```
### Steps to reproduce the bug
```python
dataset = datasets.load_dataset("Alanox/stanford-dogs", split="full")
dataset = dataset.cast_column("annotations", Array2D((None, 4), "float64"))
```
### Expected behavior
It should simply cast the column feature type to a `float64` without error
### Environment info
datasets == 2.14.5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6291/timeline | null | completed | false | 65.589167 |
https://api.github.com/repos/huggingface/datasets/issues/6290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6290/comments | https://api.github.com/repos/huggingface/datasets/issues/6290/events | https://github.com/huggingface/datasets/issues/6290 | 1,935,629,679 | I_kwDODunzps5zX11v | 6,290 | Incremental dataset (e.g. `.push_to_hub(..., append=True)`) | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 4 | 2023-10-10T15:18:03 | 2024-09-02T08:11:17 | null | CONTRIBUTOR | null | null | null | ### Feature request
Have the possibility to do `ds.push_to_hub(..., append=True)`.
### Motivation
Requested in this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/3#65252597c4edc168202a5eaa) and
this [comment](https://huggingface.co/datasets/laion/dalle-3-dataset/discussions/4#6524f675c9607bdffb208d8f). Discussed internally on [slack](https://huggingface.slack.com/archives/C02EMARJ65P/p1696950642610639?thread_ts=1690554266.830949&cid=C02EMARJ65P).
### Your contribution
What I suggest to do for parquet datasets is to use `CommitOperationCopy` + `CommitOperationDelete` from `huggingface_hub`:
1. list files
2. copy files from parquet-0001-of-0004 to parquet-0001-of-0005
3. delete files like parquet-0001-of-0004
4. generate + add last parquet file parquet-0005-of-0005
=> make a single commit with all commit operations at once
I think it should be quite straightforward to implement. Happy to review a PR (maybe conflicting with the ongoing "1 commit push_to_hub" PR https://github.com/huggingface/datasets/pull/6269) | null | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6290/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6289/comments | https://api.github.com/repos/huggingface/datasets/issues/6289/events | https://github.com/huggingface/datasets/pull/6289 | 1,935,628,506 | PR_kwDODunzps5cZiay | 6,289 | testing doc-builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-10T15:17:29 | 2023-10-13T08:57:14 | 2023-10-13T08:56:48 | NONE | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6289",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6289"
} | testing https://github.com/huggingface/doc-builder/pull/426 | {
"avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4",
"events_url": "https://api.github.com/users/mishig25/events{/privacy}",
"followers_url": "https://api.github.com/users/mishig25/followers",
"following_url": "https://api.github.com/users/mishig25/following{/other_user}",
"gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mishig25",
"id": 11827707,
"login": "mishig25",
"node_id": "MDQ6VXNlcjExODI3NzA3",
"organizations_url": "https://api.github.com/users/mishig25/orgs",
"received_events_url": "https://api.github.com/users/mishig25/received_events",
"repos_url": "https://api.github.com/users/mishig25/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mishig25/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mishig25",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6289/timeline | null | null | true | 65.655278 |
https://api.github.com/repos/huggingface/datasets/issues/6288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6288/comments | https://api.github.com/repos/huggingface/datasets/issues/6288/events | https://github.com/huggingface/datasets/issues/6288 | 1,935,005,457 | I_kwDODunzps5zVdcR | 6,288 | Dataset.from_pandas with a DataFrame of PIL.Images | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 2 | 2023-10-10T10:29:16 | 2023-10-20T18:23:05 | null | MEMBER | null | null | null | Currently type inference doesn't know what to do with a Pandas Series of PIL.Image objects, though it would be nice to get a Dataset with the Image type this way | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6288/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6287/comments | https://api.github.com/repos/huggingface/datasets/issues/6287/events | https://github.com/huggingface/datasets/issues/6287 | 1,932,758,192 | I_kwDODunzps5zM4yw | 6,287 | map() not recognizing "text" | {
"avatar_url": "https://avatars.githubusercontent.com/u/5688359?v=4",
"events_url": "https://api.github.com/users/EngineerKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/EngineerKhan/followers",
"following_url": "https://api.github.com/users/EngineerKhan/following{/other_user}",
"gists_url": "https://api.github.com/users/EngineerKhan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/EngineerKhan",
"id": 5688359,
"login": "EngineerKhan",
"node_id": "MDQ6VXNlcjU2ODgzNTk=",
"organizations_url": "https://api.github.com/users/EngineerKhan/orgs",
"received_events_url": "https://api.github.com/users/EngineerKhan/received_events",
"repos_url": "https://api.github.com/users/EngineerKhan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/EngineerKhan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/EngineerKhan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/EngineerKhan",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-09T10:27:30 | 2023-10-11T20:28:45 | 2023-10-11T20:28:45 | NONE | null | null | null | ### Describe the bug
The [map() documentation](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/main_classes#datasets.Dataset.map) reads:
`
ds = ds.map(lambda x: tokenizer(x['text'], truncation=True, padding=True), batched=True)`
I have been trying to reproduce it in my code as:
`tokenizedDataset = dataset.map(lambda x: tokenizer(x['text']), batched=True)`
But it doesn't work as it throws the error:
> KeyError: 'text'
Can you please guide me on how to fix it?
### Steps to reproduce the bug
1. `from datasets import load_dataset
dataset = load_dataset("amazon_reviews_multi")`
2. Then this code: `from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")`
3. The line I quoted above (which I have been trying)
### Expected behavior
As mentioned in the documentation, it should run without any error and map the tokenization on the whole dataset.
### Environment info
Python 3.10.2 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6287/timeline | null | completed | false | 58.020833 |
https://api.github.com/repos/huggingface/datasets/issues/6286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6286/comments | https://api.github.com/repos/huggingface/datasets/issues/6286/events | https://github.com/huggingface/datasets/pull/6286 | 1,932,640,128 | PR_kwDODunzps5cPKNK | 6,286 | Create DefunctDatasetError | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-09T09:23:23 | 2023-10-10T07:13:22 | 2023-10-10T07:03:04 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6286",
"merged_at": "2023-10-10T07:03:04",
"patch_url": "https://github.com/huggingface/datasets/pull/6286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6286"
} | Create `DefunctDatasetError` as a specific error to be raised when a dataset is defunct and no longer accessible.
See Hub discussion: https://huggingface.co/datasets/the_pile_books3/discussions/7#6523c13a94f3a1a2092d251b | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6286/timeline | null | null | true | 21.661389 |
https://api.github.com/repos/huggingface/datasets/issues/6285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6285/comments | https://api.github.com/repos/huggingface/datasets/issues/6285/events | https://github.com/huggingface/datasets/issues/6285 | 1,932,306,325 | I_kwDODunzps5zLKeV | 6,285 | TypeError: expected str, bytes or os.PathLike object, not dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"events_url": "https://api.github.com/users/andysingal/events{/privacy}",
"followers_url": "https://api.github.com/users/andysingal/followers",
"following_url": "https://api.github.com/users/andysingal/following{/other_user}",
"gists_url": "https://api.github.com/users/andysingal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/andysingal",
"id": 20493493,
"login": "andysingal",
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"organizations_url": "https://api.github.com/users/andysingal/orgs",
"received_events_url": "https://api.github.com/users/andysingal/received_events",
"repos_url": "https://api.github.com/users/andysingal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/andysingal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/andysingal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/andysingal",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 4 | 2023-10-09T04:56:26 | 2023-10-10T13:17:33 | null | NONE | null | null | null | ### Describe the bug
my dataset is in form : train- image /n -labels
and tried the code:
```
from datasets import load_dataset
data_files = {
"train": "/content/datasets/PotholeDetectionYOLOv8-1/train/",
"validation": "/content/datasets/PotholeDetectionYOLOv8-1/valid/",
"test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
}
dataset = load_dataset("imagefolder", data_dir=data_files)
dataset
```
got error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-29-2ef1926f73d9>](https://localhost:8080/#) in <cell line: 8>()
6 "test": "/content/datasets/PotholeDetectionYOLOv8-1/test/"
7 }
----> 8 dataset = load_dataset("imagefolder", data_dir=data_files)
9 dataset
6 frames
[/usr/lib/python3.10/pathlib.py](https://localhost:8080/#) in _parse_args(cls, args)
576 parts += a._parts
577 else:
--> 578 a = os.fspath(a)
579 if isinstance(a, str):
580 # Force-cast str subclasses to str (issue #21127)
TypeError: expected str, bytes or os.PathLike object, not dict
```
### Steps to reproduce the bug
as share above
### Expected behavior
load images and labels , but my dataset only uploads images
- https://huggingface.co/datasets/Andyrasika/potholes-dataset
### Environment info
colab pro | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6285/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6284/comments | https://api.github.com/repos/huggingface/datasets/issues/6284/events | https://github.com/huggingface/datasets/issues/6284 | 1,929,551,712 | I_kwDODunzps5zAp9g | 6,284 | Add Belebele multiple-choice machine reading comprehension (MRC) dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rajveer43",
"id": 64583161,
"login": "rajveer43",
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rajveer43",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 1 | 2023-10-06T06:58:03 | 2023-10-06T13:26:51 | 2023-10-06T13:26:51 | NONE | null | null | null | ### Feature request
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to paper for more details, [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).
## Composition
- 900 questions per language variant
- 488 distinct passages, there are 1-2 associated questions for each.
- For each question, there is 4 multiple-choice answers, exactly 1 of which is correct.
- 122 language/language variants (including English).
- 900 x 122 = 109,800 total questions.
### Motivation
official repo https://github.com/facebookresearch/belebele
### Your contribution
- | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6284/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6284/timeline | null | completed | false | 6.48 |
https://api.github.com/repos/huggingface/datasets/issues/6283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6283/comments | https://api.github.com/repos/huggingface/datasets/issues/6283/events | https://github.com/huggingface/datasets/pull/6283 | 1,928,552,257 | PR_kwDODunzps5cBlKq | 6,283 | Fix array cast/embed with null values | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 10 | 2023-10-05T15:24:05 | 2024-07-04T07:24:20 | 2024-02-06T19:24:19 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6283",
"merged_at": "2024-02-06T19:24:18",
"patch_url": "https://github.com/huggingface/datasets/pull/6283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6283"
} | Fixes issues with casting/embedding PyArrow list arrays with null values. It also bumps the required PyArrow version to 12.0.0 (over 9 months old) to simplify the implementation.
Fix #6280, fix #6311, fix #6360
(Also fixes https://github.com/huggingface/datasets/issues/5430 to make Beam compatible with PyArrow>=12.0.0) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6283/timeline | null | null | true | 2,980.003889 |
https://api.github.com/repos/huggingface/datasets/issues/6282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6282/comments | https://api.github.com/repos/huggingface/datasets/issues/6282/events | https://github.com/huggingface/datasets/pull/6282 | 1,928,473,630 | PR_kwDODunzps5cBT5p | 6,282 | Drop data_files duplicates | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 5 | 2023-10-05T14:43:08 | 2024-09-02T14:08:35 | 2024-09-02T14:08:35 | MEMBER | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6282",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6282"
} | I just added drop_duplicates=True to `.from_patterns`. I used a dict to deduplicate and preserve the order
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
| {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6282/timeline | null | null | true | 7,991.424167 |
https://api.github.com/repos/huggingface/datasets/issues/6281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6281/comments | https://api.github.com/repos/huggingface/datasets/issues/6281/events | https://github.com/huggingface/datasets/pull/6281 | 1,928,456,959 | PR_kwDODunzps5cBQPd | 6,281 | Improve documentation of dataset.from_generator | {
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hartmans",
"id": 53510,
"login": "hartmans",
"node_id": "MDQ6VXNlcjUzNTEw",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"repos_url": "https://api.github.com/users/hartmans/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hartmans",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-05T14:34:49 | 2023-10-05T19:09:07 | 2023-10-05T18:57:41 | CONTRIBUTOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6281.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6281",
"merged_at": "2023-10-05T18:57:41",
"patch_url": "https://github.com/huggingface/datasets/pull/6281.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6281"
} | Improve documentation to clarify sharding behavior (#6270) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6281/timeline | null | null | true | 4.381111 |
https://api.github.com/repos/huggingface/datasets/issues/6280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6280/comments | https://api.github.com/repos/huggingface/datasets/issues/6280/events | https://github.com/huggingface/datasets/issues/6280 | 1,928,215,278 | I_kwDODunzps5y7jru | 6,280 | Couldn't cast array of type fixed_size_list to Sequence(Value(float64)) | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-05T12:48:31 | 2024-02-06T19:24:20 | 2024-02-06T19:24:20 | NONE | null | null | null | ### Describe the bug
I have a dataset with an embedding column, when I try to map that dataset I get the following exception:
```
Traceback (most recent call last):
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3189, in map
for rank, done, content in iflatmap_unordered(
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1387, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/jmif/.virtualenvs/llm-training/lib/python3.10/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: Couldn't cast array of type
fixed_size_list<item: float>[2]
to
Sequence(feature=Value(dtype='float32', id=None), length=2, id=None)
```
### Steps to reproduce the bug
Here's a simple repro script:
```
from datasets import Features, Value, Sequence, ClassLabel, Dataset
dataset_features = Features({
'text': Value('string'),
'embedding': Sequence(Value('double'), length=2),
'categories': Sequence(ClassLabel(names=sorted([
'one',
'two',
'three'
]))),
})
dataset = Dataset.from_dict(
{
'text': ['A'] * 10000,
'embedding': [[0.0, 0.1]] * 10000,
'categories': [[0]] * 10000,
},
features=dataset_features
)
def test_mapper(r):
r['text'] = list(map(lambda t: t + ' b', r['text']))
return r
dataset = dataset.map(test_mapper, batched=True, batch_size=10, features=dataset_features, num_proc=2)
```
Removing the embedding column fixes the issue!
### Expected behavior
The mapping completes successfully.
### Environment info
- `datasets` version: 2.14.4
- Platform: macOS-14.0-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.17.1
- PyArrow version: 13.0.0
- Pandas version: 2.0.3 | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6280/timeline | null | completed | false | 2,982.596944 |
https://api.github.com/repos/huggingface/datasets/issues/6279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6279/comments | https://api.github.com/repos/huggingface/datasets/issues/6279/events | https://github.com/huggingface/datasets/issues/6279 | 1,928,028,226 | I_kwDODunzps5y62BC | 6,279 | Batched IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7010688?v=4",
"events_url": "https://api.github.com/users/lneukom/events{/privacy}",
"followers_url": "https://api.github.com/users/lneukom/followers",
"following_url": "https://api.github.com/users/lneukom/following{/other_user}",
"gists_url": "https://api.github.com/users/lneukom/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lneukom",
"id": 7010688,
"login": "lneukom",
"node_id": "MDQ6VXNlcjcwMTA2ODg=",
"organizations_url": "https://api.github.com/users/lneukom/orgs",
"received_events_url": "https://api.github.com/users/lneukom/received_events",
"repos_url": "https://api.github.com/users/lneukom/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lneukom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lneukom/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lneukom",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 9 | 2023-10-05T11:12:49 | 2024-11-07T10:01:22 | null | NONE | null | null | null | ### Feature request
Hi,
could you add an implementation of a batched `IterableDataset`. It already support an option to do batch iteration via `.iter(batch_size=...)` but this cannot be used in combination with a torch `DataLoader` since it just returns an iterator.
### Motivation
The current implementation loads each element of a batch individually which can be very slow in cases of a big batch_size. I did some experiments [here](https://discuss.huggingface.co/t/slow-dataloader-with-big-batch-size/57224) and using a batched iteration would speed up data loading significantly.
### Your contribution
N/A | null | {
"+1": 7,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 7,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6279/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6278/comments | https://api.github.com/repos/huggingface/datasets/issues/6278/events | https://github.com/huggingface/datasets/pull/6278 | 1,927,957,877 | PR_kwDODunzps5b_iKb | 6,278 | No data files duplicates | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 4 | 2023-10-05T10:31:58 | 2024-01-11T06:32:49 | 2023-10-05T14:43:17 | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/6278.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6278",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6278.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6278"
} | I added a new DataFilesSet class to disallow duplicate data files.
I also deprecated DataFilesList.
EDIT: actually I might just add drop_duplicates=True to `.from_patterns`
close https://github.com/huggingface/datasets/issues/6259
close https://github.com/huggingface/datasets/issues/6272
TODO:
- [ ] tests
- [ ] preserve data files order | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6278/timeline | null | null | true | 4.188611 |
https://api.github.com/repos/huggingface/datasets/issues/6277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6277/comments | https://api.github.com/repos/huggingface/datasets/issues/6277/events | https://github.com/huggingface/datasets/issues/6277 | 1,927,044,546 | I_kwDODunzps5y3F3C | 6,277 | FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either. | {
"avatar_url": "https://avatars.githubusercontent.com/u/66733346?v=4",
"events_url": "https://api.github.com/users/diegogonzalezc/events{/privacy}",
"followers_url": "https://api.github.com/users/diegogonzalezc/followers",
"following_url": "https://api.github.com/users/diegogonzalezc/following{/other_user}",
"gists_url": "https://api.github.com/users/diegogonzalezc/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/diegogonzalezc",
"id": 66733346,
"login": "diegogonzalezc",
"node_id": "MDQ6VXNlcjY2NzMzMzQ2",
"organizations_url": "https://api.github.com/users/diegogonzalezc/orgs",
"received_events_url": "https://api.github.com/users/diegogonzalezc/received_events",
"repos_url": "https://api.github.com/users/diegogonzalezc/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/diegogonzalezc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/diegogonzalezc/subscriptions",
"type": "User",
"url": "https://api.github.com/users/diegogonzalezc",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-04T22:01:25 | 2023-10-08T17:05:46 | 2023-10-08T17:05:46 | NONE | null | null | null | ### Describe the bug
I'm encountering a "FileNotFoundError" while attempting to use the "paws-x" dataset to retrain the DistilRoBERTa-base model. The error message is as follows:
FileNotFoundError: Couldn't find a module script at /content/paws-x/paws-x.py. Module 'paws-x' doesn't exist on the Hugging Face Hub either.
### Steps to reproduce the bug
https://colab.research.google.com/drive/11xUUFxloClpmqLvDy_Xxfmo3oUzjY5nx#scrollTo=kUn74FigzhHm
### Expected behavior
The the trained model
### Environment info
colab, "paws-x" dataset , DistilRoBERTa-base model | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6277/timeline | null | completed | false | 91.0725 |
https://api.github.com/repos/huggingface/datasets/issues/6276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6276/comments | https://api.github.com/repos/huggingface/datasets/issues/6276/events | https://github.com/huggingface/datasets/issues/6276 | 1,925,961,878 | I_kwDODunzps5yy9iW | 6,276 | I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error | {
"avatar_url": "https://avatars.githubusercontent.com/u/50768065?v=4",
"events_url": "https://api.github.com/users/valaofficial/events{/privacy}",
"followers_url": "https://api.github.com/users/valaofficial/followers",
"following_url": "https://api.github.com/users/valaofficial/following{/other_user}",
"gists_url": "https://api.github.com/users/valaofficial/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/valaofficial",
"id": 50768065,
"login": "valaofficial",
"node_id": "MDQ6VXNlcjUwNzY4MDY1",
"organizations_url": "https://api.github.com/users/valaofficial/orgs",
"received_events_url": "https://api.github.com/users/valaofficial/received_events",
"repos_url": "https://api.github.com/users/valaofficial/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/valaofficial/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/valaofficial/subscriptions",
"type": "User",
"url": "https://api.github.com/users/valaofficial",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 3 | 2023-10-04T11:03:41 | 2023-11-27T10:39:16 | null | NONE | null | null | null | ### Describe the bug
I'm trying to fine tune the openai/whisper model from huggingface using jupyter notebook and i keep getting this error, i'm following the steps in this blog post
https://huggingface.co/blog/fine-tune-whisper
I tried google collab and it works but because I'm on the free version the training doesn't complete
the error comes in jupyter notebook when i run this line
`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)`
here is the error message
```
Map (num_proc=4): 0% 0/2506 [00:52<?, ? examples/s]
The above exception was the direct cause of the following exception:
NameError Traceback (most recent call last) Cell In[19], line 1 ----> 1 common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)
File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:853, in DatasetDict.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( --> 853 { 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 )
File ~\anaconda\Lib\site-packages\datasets\dataset_dict.py:854, in <dictcomp>(.0) 850 if cache_file_names is None: 851 cache_file_names = {k: None for k in self} 852 return DatasetDict( 853 { --> 854 k: dataset.map( 855 function=function, 856 with_indices=with_indices, 857 with_rank=with_rank, 858 input_columns=input_columns, 859 batched=batched, 860 batch_size=batch_size, 861 drop_last_batch=drop_last_batch, 862 remove_columns=remove_columns, 863 keep_in_memory=keep_in_memory, 864 load_from_cache_file=load_from_cache_file, 865 cache_file_name=cache_file_names[k], 866 writer_batch_size=writer_batch_size, 867 features=features, 868 disable_nullable=disable_nullable, 869 fn_kwargs=fn_kwargs, 870 num_proc=num_proc, 871 desc=desc, 872 ) 873 for k, dataset in self.items() 874 } 875 )
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output
File ~\anaconda\Lib\site-packages\datasets\arrow_dataset.py:3189, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3182 logger.info(f"Spawning {num_proc} processes") 3183 with logging.tqdm( 3184 disable=not logging.is_progress_bar_enabled(), 3185 unit=" examples", 3186 total=pbar_total, 3187 desc=(desc or "Map") + f" (num_proc={num_proc})", 3188 ) as pbar: -> 3189 for rank, done, content in iflatmap_unordered( 3190 pool, Dataset._map_single, kwargs_iterable=kwargs_per_job 3191 ): 3192 if done: 3193 shards_done += 1
File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in iflatmap_unordered(pool, func, kwargs_iterable) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results]
File ~\anaconda\Lib\site-packages\datasets\utils\py_utils.py:1394, in <listcomp>(.0) 1391 finally: 1392 if not pool_changed: 1393 # we get the result in case there's an error to raise -> 1394 [async_result.get(timeout=0.05) for async_result in async_results]
File ~\anaconda\Lib\site-packages\multiprocess\pool.py:774, in ApplyResult.get(self, timeout) 772 return self._value 773 else: --> 774 raise self._value
NameError: name 'feature_extractor' is not defined
```
### Steps to reproduce the bug
1. follow the steps in this blog post
https://huggingface.co/blog/fine-tune-whisper
2. run this line of code
`common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=4)`
3. I'm using jupyter notebook from anaconda
### Expected behavior
No error message
### Environment info
datasets version: 2.8.0
Python version: 3.11
Windows 10 | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6276/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6276/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6275/comments | https://api.github.com/repos/huggingface/datasets/issues/6275/events | https://github.com/huggingface/datasets/issues/6275 | 1,921,354,680 | I_kwDODunzps5yhYu4 | 6,275 | Would like to Contribute a dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/97907750?v=4",
"events_url": "https://api.github.com/users/vikas70607/events{/privacy}",
"followers_url": "https://api.github.com/users/vikas70607/followers",
"following_url": "https://api.github.com/users/vikas70607/following{/other_user}",
"gists_url": "https://api.github.com/users/vikas70607/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vikas70607",
"id": 97907750,
"login": "vikas70607",
"node_id": "U_kgDOBdX0Jg",
"organizations_url": "https://api.github.com/users/vikas70607/orgs",
"received_events_url": "https://api.github.com/users/vikas70607/received_events",
"repos_url": "https://api.github.com/users/vikas70607/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vikas70607/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vikas70607/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vikas70607",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 1 | 2023-10-02T07:00:21 | 2023-10-10T16:27:54 | 2023-10-10T16:27:54 | NONE | null | null | null | I have a dataset of 2500 images that can be used for color-blind machine-learning algorithms. Since , there was no dataset available online , I made this dataset myself and would like to contribute this now to community | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6275/timeline | null | completed | false | 201.459167 |
https://api.github.com/repos/huggingface/datasets/issues/6274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6274/comments | https://api.github.com/repos/huggingface/datasets/issues/6274/events | https://github.com/huggingface/datasets/issues/6274 | 1,921,036,328 | I_kwDODunzps5ygLAo | 6,274 | FileNotFoundError for dataset with multiple builder config | {
"avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4",
"events_url": "https://api.github.com/users/LouisChen15/events{/privacy}",
"followers_url": "https://api.github.com/users/LouisChen15/followers",
"following_url": "https://api.github.com/users/LouisChen15/following{/other_user}",
"gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LouisChen15",
"id": 97120485,
"login": "LouisChen15",
"node_id": "U_kgDOBcnw5Q",
"organizations_url": "https://api.github.com/users/LouisChen15/orgs",
"received_events_url": "https://api.github.com/users/LouisChen15/received_events",
"repos_url": "https://api.github.com/users/LouisChen15/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LouisChen15",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 2 | 2023-10-01T23:45:56 | 2024-08-14T04:42:02 | 2023-10-02T20:09:38 | NONE | null | null | null | ### Describe the bug
When there is only one config and only the dataset name is entered when using datasets.load_dataset(), it works fine. But if I create a second builder_config for my dataset and enter the config name when using datasets.load_dataset(), the following error will happen.
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'
The "XXX.incomplete folder" in the cache folder of my dataset will disappear before "generating test split", which does not happen when config name is not entered and the config name is "default"
C:\Users\chenx\.cache\huggingface\datasets\my_dataset\0_shot_multiple_choice\1.0.0
The folder that is supposed to remain under the above directory will disappear, and the data generator will not have a place to generate data into.
### Steps to reproduce the bug
test = load_dataset('my_dataset', '0_shot_multiple_choice')
### Expected behavior
FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/chenx/.cache/huggingface/datasets/my_dataset/0_shot_multiple_choice/1.0.0/97c3854a012cfd6b045e3be4c864739902af2d818bb9235b047baa94c302e9a2.incomplete/my_dataset-test-00000-00000-of-NNNNN.arrow'
### Environment info
datasets 2.14.5
python 3.8.18 | {
"avatar_url": "https://avatars.githubusercontent.com/u/97120485?v=4",
"events_url": "https://api.github.com/users/LouisChen15/events{/privacy}",
"followers_url": "https://api.github.com/users/LouisChen15/followers",
"following_url": "https://api.github.com/users/LouisChen15/following{/other_user}",
"gists_url": "https://api.github.com/users/LouisChen15/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/LouisChen15",
"id": 97120485,
"login": "LouisChen15",
"node_id": "U_kgDOBcnw5Q",
"organizations_url": "https://api.github.com/users/LouisChen15/orgs",
"received_events_url": "https://api.github.com/users/LouisChen15/received_events",
"repos_url": "https://api.github.com/users/LouisChen15/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/LouisChen15/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LouisChen15/subscriptions",
"type": "User",
"url": "https://api.github.com/users/LouisChen15",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6274/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6274/timeline | null | completed | false | 20.395 |
https://api.github.com/repos/huggingface/datasets/issues/6273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6273/comments | https://api.github.com/repos/huggingface/datasets/issues/6273/events | https://github.com/huggingface/datasets/issues/6273 | 1,920,922,260 | I_kwDODunzps5yfvKU | 6,273 | Broken Link to PubMed Abstracts dataset . | {
"avatar_url": "https://avatars.githubusercontent.com/u/100606327?v=4",
"events_url": "https://api.github.com/users/sameemqureshi/events{/privacy}",
"followers_url": "https://api.github.com/users/sameemqureshi/followers",
"following_url": "https://api.github.com/users/sameemqureshi/following{/other_user}",
"gists_url": "https://api.github.com/users/sameemqureshi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sameemqureshi",
"id": 100606327,
"login": "sameemqureshi",
"node_id": "U_kgDOBf8hdw",
"organizations_url": "https://api.github.com/users/sameemqureshi/orgs",
"received_events_url": "https://api.github.com/users/sameemqureshi/received_events",
"repos_url": "https://api.github.com/users/sameemqureshi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sameemqureshi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sameemqureshi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sameemqureshi",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 5 | 2023-10-01T19:08:48 | 2024-04-28T02:30:42 | null | NONE | null | null | null | ### Describe the bug
The link provided for the dataset is broken,
data_files =
[https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst](url)
The
### Steps to reproduce the bug
Steps to reproduce:
1) Head over to [https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt#big-data-datasets-to-the-rescue](url)
2) In the Section "What is the Pile?", you can see a code snippet that contains the broken link.
### Expected behavior
The link should Redirect to the "PubMed Abstracts dataset" as expected .
### Environment info
. | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6273/timeline | null | null | false | null |
https://api.github.com/repos/huggingface/datasets/issues/6272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6272/comments | https://api.github.com/repos/huggingface/datasets/issues/6272/events | https://github.com/huggingface/datasets/issues/6272 | 1,920,831,487 | I_kwDODunzps5yfY__ | 6,272 | Duplicate `data_files` when named `<split>/<split>.parquet` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | 7 | 2023-10-01T15:43:56 | 2024-03-15T15:22:05 | 2024-03-15T15:22:05 | MEMBER | null | null | null | e.g. with `u23429/stock_1_minute_ticker`
```ipython
In [1]: from datasets import *
In [2]: b = load_dataset_builder("u23429/stock_1_minute_ticker")
Downloading readme: 100%|██████████████████████████| 627/627 [00:00<00:00, 246kB/s]
In [3]: b.config.data_files
Out[3]:
{NamedSplit('train'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/train/train.parquet'],
NamedSplit('validation'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/validation/validation.parquet'],
NamedSplit('test'): ['hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet',
'hf://datasets/u23429/stock_1_minute_ticker@65c973cf4ec061f01a363b40da4c1bb128ba4166/test/test.parquet']}
```
This bug issue is present in the current `datasets` 2.14.5 and also on `main` even after https://github.com/huggingface/datasets/pull/6244 cc @mariosasko | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6272/timeline | null | completed | false | 3,983.635833 |
https://api.github.com/repos/huggingface/datasets/issues/6271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6271/comments | https://api.github.com/repos/huggingface/datasets/issues/6271/events | https://github.com/huggingface/datasets/issues/6271 | 1,920,420,295 | I_kwDODunzps5yd0nH | 6,271 | Overwriting Split overwrites data but not metadata, corrupting dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13859249?v=4",
"events_url": "https://api.github.com/users/govindrai/events{/privacy}",
"followers_url": "https://api.github.com/users/govindrai/followers",
"following_url": "https://api.github.com/users/govindrai/following{/other_user}",
"gists_url": "https://api.github.com/users/govindrai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/govindrai",
"id": 13859249,
"login": "govindrai",
"node_id": "MDQ6VXNlcjEzODU5MjQ5",
"organizations_url": "https://api.github.com/users/govindrai/orgs",
"received_events_url": "https://api.github.com/users/govindrai/received_events",
"repos_url": "https://api.github.com/users/govindrai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/govindrai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/govindrai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/govindrai",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 0 | 2023-09-30T22:37:31 | 2023-10-16T13:30:50 | 2023-10-16T13:30:50 | NONE | null | null | null | ### Describe the bug
I want to be able to overwrite/update/delete splits in my dataset. Currently the only way to do is to manually go into the dataset and delete the split. If I try to overwrite programmatically I end up in an error state and (somewhat) corrupting the dataset. Read below.
**Current Behavior**
When I push to an existing split I get this error:
`ValueError: Split complexRoofLocation_01Apr2023_to_31May2023test already present`
This seems to suggest that the library doesn't support overwriting splits.
**Potential Bug**
What’s strange is that datasets, despite the operation erroring out with the ValueError above, does, in fact, overwrite the split:
`Pushing dataset shards to the dataset hub: 100% [.....................] 1/1 [00:00<00:00, 55.04it/s]`
Even though you got an error message and your code fails, your dataset is now changed. That seems like a bug. Either don't change the dataset, or don't throw the error and allow the script to proceed.
Additional Bug
While it overwrites the split, it doesn’t overwrite the split’s information. Because of this when you pull down the dataset you may end up getting a `NonMatchingSplitsSizesError` if the size of the dataset during the overwrite is different. For example, my original split had 5 rows, but on my overwrite, I only had 4. Then when I try to download the dataset, I get a `NonMatchingSplitsSizesError` because the dataset's data.json states there’s 5 but only 4 exist in the split.
Expected Behavior
This corrupts the dataset rendering it unusable (until you take manual intervention). Either the library should let the overwrite happen (which it does but should also update the metadata) or it shouldn’t do anything.
### Steps to reproduce the bug
[Colab Notebook](https://colab.research.google.com/drive/1bqVkD06Ngs9MQNdSk_ygCG6y1UqXA4pC?usp=sharing)
### Expected behavior
The split should be overwritten and I should be able to use the new version of the dataset without issue.
### Environment info
- `datasets` version: 2.14.5
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6271/timeline | null | completed | false | 374.888611 |
https://api.github.com/repos/huggingface/datasets/issues/6270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6270/comments | https://api.github.com/repos/huggingface/datasets/issues/6270/events | https://github.com/huggingface/datasets/issues/6270 | 1,920,329,373 | I_kwDODunzps5ydead | 6,270 | Dataset.from_generator raises with sharded gen_args | {
"avatar_url": "https://avatars.githubusercontent.com/u/53510?v=4",
"events_url": "https://api.github.com/users/hartmans/events{/privacy}",
"followers_url": "https://api.github.com/users/hartmans/followers",
"following_url": "https://api.github.com/users/hartmans/following{/other_user}",
"gists_url": "https://api.github.com/users/hartmans/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hartmans",
"id": 53510,
"login": "hartmans",
"node_id": "MDQ6VXNlcjUzNTEw",
"organizations_url": "https://api.github.com/users/hartmans/orgs",
"received_events_url": "https://api.github.com/users/hartmans/received_events",
"repos_url": "https://api.github.com/users/hartmans/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hartmans/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hartmans/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hartmans",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 6 | 2023-09-30T16:50:06 | 2023-10-11T20:29:12 | 2023-10-11T20:29:11 | CONTRIBUTOR | null | null | null | ### Describe the bug
According to the docs of Datasets.from_generator:
```
gen_kwargs(`dict`, *optional*):
Keyword arguments to be passed to the `generator` callable.
You can define a sharded dataset by passing the list of shards in `gen_kwargs`.
```
So I'd expect that if gen_kwargs was a list, then my generator would be called once for each element in the list with the dict in the list for that element.
It doesn't work that way though.
### Steps to reproduce the bug
```python
#!/usr/bin/python
from pathlib import Path
import datasets
def process_yaml(file):
yield dict(example=42)
if __name__ == '__main__':
import sys
dir = Path(sys.argv[0]).parent
ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')],
)
ds.to_json('training.jsonl')
```
```
Generating train split: 0 examples [00:00, ? examples/s]
Traceback (most recent call last):
File "/tmp/dataset_bug.py", line 13, in <module>
ds = datasets.Dataset.from_generator(process_yaml, gen_kwargs=[{'file':f} for f in dir.glob('*.yml')],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/arrow_dataset.py", line 1072, in from_generator
).read()
^^^^^^
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/io/generator.py", line 47, in read
self.builder.download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare
self._download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare
super()._download_and_prepare(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/hartmans/ai/venv/lib/python3.11/site-packages/datasets/builder.py", line 1656, in _prepare_split_single
generator = self._generate_examples(**gen_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: datasets.packaged_modules.generator.generator.Generator._generate_examples() argument after ** must be a ```
mapping, not list
### Expected behavior
I would expect that process_yaml would be called once for each yaml file in the directory where the script is run.
I also tried with the list being in gen_kwargs, but in that case process_yaml gets called with a list.
### Environment info
- `datasets` version: 2.14.6.dev0 (git commit 0cc77d7f45c7369; also tested with 2.14.0)
- Platform: Linux-6.1.0-10-amd64-x86_64-with-glibc2.36
- Python version: 3.11.2
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6270/timeline | null | completed | false | 267.651389 |
https://api.github.com/repos/huggingface/datasets/issues/6269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6269/comments | https://api.github.com/repos/huggingface/datasets/issues/6269/events | https://github.com/huggingface/datasets/pull/6269 | 1,919,572,790 | PR_kwDODunzps5bjbDc | 6,269 | Reduce the number of commits in `push_to_hub` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | [] | closed | false | null | [] | null | 21 | 2023-09-29T16:22:31 | 2023-10-16T16:03:18 | 2023-10-16T13:30:46 | COLLABORATOR | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6269.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6269",
"merged_at": "2023-10-16T13:30:46",
"patch_url": "https://github.com/huggingface/datasets/pull/6269.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6269"
} | Reduces the number of commits in `push_to_hub` by using the `preupload` API from https://github.com/huggingface/huggingface_hub/pull/1699. Each commit contains a maximum of 50 uploaded files.
A shard's fingerprint no longer needs to be added as a suffix to support resuming an upload, meaning the shards' naming scheme is the same as the initial one.
Also, it adds support for the following params: `create_pr`, `commit_message` and `revision` (`branch` deprecated; unlike the previous implementation, this one creates a branch if the branch does not exist to be consistent with `transformers`).
(Nit) This implementation keeps the markdown section of the generated README.md empty to enable importing the card template (when the card is accessed on the Hub).
Fixes https://github.com/huggingface/datasets/issues/5492, fixes https://github.com/huggingface/datasets/issues/6257, fixes https://github.com/huggingface/datasets/issues/5045, fixes https://github.com/huggingface/datasets/issues/6271
TODO:
- [x] set the minimal version to the next `hfh` release (once it's published) | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko",
"user_view_type": "public"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 3,
"laugh": 0,
"rocket": 1,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6269/timeline | null | null | true | 405.1375 |
https://api.github.com/repos/huggingface/datasets/issues/6268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6268/comments | https://api.github.com/repos/huggingface/datasets/issues/6268/events | https://github.com/huggingface/datasets/pull/6268 | 1,919,010,645 | PR_kwDODunzps5bhgs7 | 6,268 | Add repo_id to DatasetInfo | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq",
"user_view_type": "public"
} | [] | open | false | null | [] | null | 9 | 2023-09-29T10:24:55 | 2023-10-01T15:29:45 | null | MEMBER | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/6268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6268",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6268"
} | ```python
from datasets import load_dataset
ds = load_dataset("lhoestq/demo1", split="train")
ds = ds.map(lambda x: {}, num_proc=2).filter(lambda x: True).remove_columns(["id"])
print(ds.repo_id)
# lhoestq/demo1
```
- repo_id is None when the dataset doesn't come from the Hub, e.g. from Dataset.from_dict
- repo_id is set to None when concatenating datasets with different repo ids
related to https://github.com/huggingface/datasets/issues/4129
TODO:
- [ ] discuss if it's ok for now
- [ ] tests | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6268/timeline | null | null | true | null |
https://api.github.com/repos/huggingface/datasets/issues/6267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6267/comments | https://api.github.com/repos/huggingface/datasets/issues/6267/events | https://github.com/huggingface/datasets/issues/6267 | 1,916,443,262 | I_kwDODunzps5yOpp- | 6,267 | Multi label class encoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/1000442?v=4",
"events_url": "https://api.github.com/users/jmif/events{/privacy}",
"followers_url": "https://api.github.com/users/jmif/followers",
"following_url": "https://api.github.com/users/jmif/following{/other_user}",
"gists_url": "https://api.github.com/users/jmif/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jmif",
"id": 1000442,
"login": "jmif",
"node_id": "MDQ6VXNlcjEwMDA0NDI=",
"organizations_url": "https://api.github.com/users/jmif/orgs",
"received_events_url": "https://api.github.com/users/jmif/received_events",
"repos_url": "https://api.github.com/users/jmif/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jmif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jmif/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jmif",
"user_view_type": "public"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | 7 | 2023-09-27T22:48:08 | 2023-10-26T18:46:08 | null | NONE | null | null | null | ### Feature request
I have a multi label dataset and I'd like to be able to class encode the column and store the mapping directly in the features just as I can with a single label column. `class_encode_column` currently does not support multi labels.
Here's an example of what I'd like to encode:
```
data = {
'text': ['one', 'two', 'three', 'four'],
'labels': [['a', 'b'], ['b'], ['b', 'c'], ['a', 'd']]
}
dataset = Dataset.from_dict(data)
dataset = dataset.class_encode_column('labels')
```
I did some digging into the code base to evaluate the feasibility of this (note I'm very new to this code base) and from what I noticed the `ClassLabel` feature is still stored as an underlying raw data type of int so I thought a `MultiLabel` feature could similarly be stored as a Sequence of ints, thus not requiring significant serialization / conversion work to / from arrow.
I did a POC of this [here](https://github.com/huggingface/datasets/commit/15443098e9ce053943172f7ec6fce3769d7dff6e) and included a simple test case (please excuse all the commented out tests, going for speed of POC here and didn't want to fight IDE to debug a single test). In the test I just assert that `num_classes` is the same to show that things are properly serializing, but if you break after loading from disk you'll see the dataset correct and the dataset feature is as expected.
After digging more I did notice a few issues
- After loading from disk I noticed type of the `labels` class is `Sequence` not `MultiLabel` (though the added `feature` attribute came through). This doesn't happen for `ClassLabel` but I couldn't find the encode / decode code paths that handle this.
- I subclass `Sequence` in `MultiLabel` to leverage existing serialization, but this does miss the custom encode logic that `ClassLabel` has. I'm not sure of the best way to approach this as I haven't fully understood the encode / decode flow for datasets. I suspect my simple implementation will need some improvement as it'll require a significant amount of repeated logic to mimic `ClassLabel` behavior.
### Motivation
See above - would like to support multi label class encodings.
### Your contribution
This would be a big help for us and we're open to contributing but I'll likely need some guidance on how to implement to fit the encode / decode flow. Some suggestions on tests / would be great too, I'm guessing in addition to the class encode tests (that I'll need to expand) we'll need encode / decode tests. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6267/timeline | null | null | false | null |