url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,510,739,000 | 1,653,511,511,000 | 1,653,511,007,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": 1653511007000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for [conll2012_ontonotesv5] | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo "
] | 1,653,509,913,000 | 1,653,509,914,000 | null | NONE | null | null | null | ### Link
https://huggingface.co/datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4406/comments | https://api.github.com/repos/huggingface/datasets/issues/4406/events | https://github.com/huggingface/datasets/pull/4406 | 1,248,626,622 | PR_kwDODunzps44ePLU | 4,406 | Improve language tag for PIAF dataset | {
"login": "lbourdois",
"id": 58078086,
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbourdois",
"html_url": "https://github.com/lbourdois",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,653,507,715,000 | 1,653,507,862,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4406",
"html_url": "https://github.com/huggingface/datasets/pull/4406",
"diff_url": "https://github.com/huggingface/datasets/pull/4406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4406.patch",
"merged_at": null
} | Hi,
As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub.
This modification should allow better referencing, since only the `xx` language tags are currently taken into account and not the `xx-xx` ones. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4406/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4405/comments | https://api.github.com/repos/huggingface/datasets/issues/4405/events | https://github.com/huggingface/datasets/issues/4405 | 1,248,574,087 | I_kwDODunzps5Ka7qH | 4,405 | [TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"
] | 1,653,505,003,000 | 1,653,506,271,000 | null | NONE | null | null | null | ## Describe the bug
I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.
## Steps to reproduce the bug
```python
import os
from typing import (
List,
Dict,
)
from collections import (
defaultdict,
)
from dataclasses import (
dataclass,
)
from datasets import (
load_dataset,
)
@dataclass
class ConllConverter:
path: str
name: str
cache_dir: str
def __post_init__(
self,
):
self.dataset = load_dataset(
path=self.path,
name=self.name,
cache_dir=self.cache_dir,
)
def convert(
self,
):
class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature
# label_set = list(set([
# label.split("-")[1] if label != "O" else label for label in class_label.names
# ]))
def prepare_chunk(token, entity):
assert len(token) == len(entity)
# Sequence length
length = len(token)
# entity_chunk maps each entity type to the list of its token spans
entity_chunk = defaultdict(list)
idx = flag = 0
# Walk the BIO tags, grouping each B-/I- run into one span
while idx < length:
if entity[idx] == "O":
flag += 1
idx += 1
else:
iob_tp, lab_tp = entity[idx].split("-")
assert iob_tp == "B"
idx += 1
while idx < length and entity[idx].startswith("I-"):
idx += 1
entity_chunk[lab_tp].append(token[flag: idx])
flag = idx
entity_chunk = dict(entity_chunk)
# for label in label_set:
# if label != "O" and label not in entity_chunk.keys():
# entity_chunk[label] = None
return entity_chunk
def prepare_features(
batch: Dict[str, List],
) -> Dict[str, List]:
sentence = [
sent for doc_sent in batch["sentences"] for sent in doc_sent
]
feature = {
"sentence": list(),
}
for sent in sentence:
token = sent["words"]
entity = class_label.int2str(sent["named_entities"])
entity_chunk = prepare_chunk(token, entity)
sent_feat = {
"token": token,
"entity": entity,
"entity_chunk": entity_chunk,
}
feature["sentence"].append(sent_feat)
return feature
column_names = self.dataset.column_names["train"]
dataset = self.dataset.map(
function=prepare_features,
with_indices=False,
batched=True,
batch_size=3,
remove_columns=column_names,
num_proc=1,
)
dataset.save_to_disk(
dataset_dict_path=os.path.join("data", self.path, self.name)
)
if __name__ == "__main__":
converter = ConllConverter(
path="conll2012_ontonotesv5",
name="english_v4",
cache_dir="cache",
)
converter.convert()
```
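A possible workaround, following the padding idea from the comment above (a sketch that reuses `class_label` from the snippet and assumes the `[[""]]` placeholder is acceptable):
```python
# Pad entity types that have no spans so every batch yields a struct with the
# same keys; the [[""]] placeholder follows the workaround noted in the comments.
label_set = {label.split("-")[1] for label in class_label.names if label != "O"}
def pad_chunk(entity_chunk):
    for label in label_set:
        entity_chunk.setdefault(label, [[""]])
    return entity_chunk
```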
## Expected results
I want to use the dataset to perform an NER task and to change the label list into a {Entity Type: list of spans} format.
## Actual results
<details>
<summary>Traceback</summary>
```python
Traceback (most recent call last):
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module>
converter.convert()
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert
dataset = self.dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map
transformed_shards[index] = async_result.get()
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
```
</details>
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Ubuntu 18.04
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4405/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op."
] | 1,653,504,968,000 | 1,653,504,968,000 | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The DatasetInfo class should have a `name` field which is the name of the dataset. Then, for a given dataset that evolves over time, the `version` can be updated while its different versions remain identified as the same dataset by a unique `name`. The name could then be accessed via `dataset.name`.
**Describe alternatives you've considered**
For my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field.
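A minimal sketch of that alternative, written as a wrapper rather than a subclass (hypothetical code, not part of `datasets`):
```python
from dataclasses import dataclass
from datasets import Dataset
@dataclass
class NamedDataset:
    # Hypothetical wrapper pairing a Dataset with an explicit name, so pipelines
    # can log e.g. f"Evaluating on {named.name}" without a separate argument.
    name: str
    dataset: Dataset
```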
**Additional context**
My guess is that most use cases do not involve more than one dataset in a given pipeline, so a name is not really needed. This has surprised me, though, as one of the advantages of a standard dataset interface is being able to build pipelines which can be passed a dataset, separating the responsibility of dataset loading from the train or eval pipeline.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4403). All of your documentation changes will be reflected on that endpoint."
] | 1,653,497,175,000 | 1,653,497,463,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"merged_at": null
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4402/comments | https://api.github.com/repos/huggingface/datasets/issues/4402/events | https://github.com/huggingface/datasets/pull/4402 | 1,248,078,067 | PR_kwDODunzps44cdR5 | 4,402 | Skip identical files in `push_to_hub` instead of overwriting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,484,371,000 | 1,653,491,796,000 | 1,653,491,283,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4402",
"html_url": "https://github.com/huggingface/datasets/pull/4402",
"diff_url": "https://github.com/huggingface/datasets/pull/4402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4402.patch",
"merged_at": 1653491283000
} | Skip identical files instead of overwriting them, to save bandwidth and to circumvent (user-side/server-side) errors that can arise when working with large datasets due to long-lasting HTTP connections: repeated calls to `push_to_hub` can then resume an upload.
To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet
```
to:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet
```
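For illustration, a minimal sketch of how such a resume check could work on the client side (the helper below is hypothetical; `list_repo_files` comes from `huggingface_hub`):
```python
from huggingface_hub import HfApi
def shards_to_upload(repo_id, shard_names):
    # Skip shards whose fingerprinted name already exists on the Hub: with the
    # new naming scheme, an identical name implies identical content.
    existing = set(HfApi().list_repo_files(repo_id, repo_type="dataset"))
    return [name for name in shard_names if f"data/{name}" not in existing]
```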
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4402/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4401/comments | https://api.github.com/repos/huggingface/datasets/issues/4401/events | https://github.com/huggingface/datasets/issues/4401 | 1,247,695,921 | I_kwDODunzps5KXlQx | 4,401 | "NonMatchingChecksumError" when importing 'spider' dataset | {
"login": "OmarAlaaeldein",
"id": 81417777,
"node_id": "MDQ6VXNlcjgxNDE3Nzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OmarAlaaeldein",
"html_url": "https://github.com/OmarAlaaeldein",
"followers_url": "https://api.github.com/users/OmarAlaaeldein/followers",
"following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}",
"gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions",
"organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs",
"repos_url": "https://api.github.com/users/OmarAlaaeldein/repos",
"events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}",
"received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.",
"We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,653,464,707,000 | 1,653,466,999,000 | null | NONE | null | null | null | ## Describe the bug
When importing the 'spider' dataset [https://huggingface.co/datasets/spider], an error occurs.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('spider')
```
## Expected results
Dataset object
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Environment info
- `datasets` version: 2.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4401/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4400/comments | https://api.github.com/repos/huggingface/datasets/issues/4400/events | https://github.com/huggingface/datasets/issues/4400 | 1,247,404,237 | I_kwDODunzps5KWeDN | 4,400 | load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py. | {
"login": "cailun01",
"id": 20658907,
"node_id": "MDQ6VXNlcjIwNjU4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cailun01",
"html_url": "https://github.com/cailun01",
"followers_url": "https://api.github.com/users/cailun01/followers",
"following_url": "https://api.github.com/users/cailun01/following{/other_user}",
"gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cailun01/subscriptions",
"organizations_url": "https://api.github.com/users/cailun01/orgs",
"repos_url": "https://api.github.com/users/cailun01/repos",
"events_url": "https://api.github.com/users/cailun01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cailun01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,653,448,244,000 | 1,653,449,196,000 | 1,653,449,196,000 | NONE | null | null | null | ## Describe the bug
Could not reach wikitext-2-raw-v1.py
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikitext-2-raw-v1")
```
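Note that `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a standalone dataset, so, aside from the connection timeout, the call presumably needs to be:
```python
from datasets import load_dataset
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
```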
## Expected results
Download `wikitext-2-raw-v1` dataset successfully.
## Actual results
```
File "load_datasets.py", line 13, in <module>
load_dataset("wikitext-2-raw-v1")
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset
**config_kwargs,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder
data_files=data_files,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory
dynamic_modules_path=dynamic_modules_path,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module
local_path = self.download_loading_script(revision)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path
download_desc=download_config.download_desc,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),))
```
I tried to download wikitext-2-raw-v1.py with Chrome and got:
![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: CentOS 7
- Python version: 3.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4400/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4399/comments | https://api.github.com/repos/huggingface/datasets/issues/4399/events | https://github.com/huggingface/datasets/issues/4399 | 1,246,948,299 | I_kwDODunzps5KUuvL | 4,399 | LocalDatasetModuleFactoryWithoutScript extracts invalid builder name | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n",
"The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n```"
] | 1,653,415,381,000 | 1,653,416,036,000 | null | NONE | null | null | null | ## Describe the bug
Trying to load a local dataset raises an error indicating that the builder config has to have a name.
No error should be reported, since the call is completely valid.
## Steps to reproduce the bug
```python
load_dataset("./data/some-dataset/", name="some-name")
```
## Expected results
The dataset should be loaded.
## Actual results
```
Traceback (most recent call last):
File "train_lquad.py", line 19, in <module>
load(tokenize_target_function, tokenize_target_function, {}, tokenizer)
File "train_lquad.py", line 14, in load
dataset = load_dataset("./data/lquad/", name="lquad")
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset
builder_instance = load_dataset_builder(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__
self.config, self.config_id = self._create_builder_config(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config
raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}")
ValueError: BuilderConfig must have a name, got
```
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
The error is probably in line 795 in load.py:
```
builder_kwargs = {
"hash": hash,
"data_files": data_files,
"name": os.path.basename(self.path),
"base_path": self.path,
**builder_kwargs,
}
```
`os.path.basename` for a directory returns an empty string, rather than the name of the directory.
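A quick demonstration, plus one possible fix using `os.path.normpath` (an alternative to the slash-stripping fix suggested in the comments above):
```python
import os
os.path.basename("/home/user/data/lquad/")                    # '' (trailing slash)
os.path.basename("/home/user/data/lquad")                     # 'lquad'
os.path.basename(os.path.normpath("/home/user/data/lquad/"))  # 'lquad'
```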
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4399/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4398/comments | https://api.github.com/repos/huggingface/datasets/issues/4398/events | https://github.com/huggingface/datasets/issues/4398 | 1,246,666,749 | I_kwDODunzps5KTp_9 | 4,398 | Calling `cast_column` and a sequence of `map` operations ends up making `faiss` fail while looking for removed columns | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the issue, so the root cause of it may be around that. Sorry I cannot provide you more insights, in case I get to fix it I'll submit a PR, in the meanwhile the code that I'm using as a workaround is the following:\r\n\r\n```python\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\"))\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\nds.add_faiss_index(column=\"embeddings\")\r\n```",
"FYI the main reason I want to use `dataset.remove_columns` rather than the function inside `dataset.map` is because according to the 🤗 Datasets documentation, it's faster.\r\n\r\n\"🤗 Datasets also has a [Dataset.remove_columns()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.remove_columns) method that is functionally identical, but faster, because it doesn’t copy the data of the remaining columns.\"\r\n\r\nMore information at https://huggingface.co/docs/datasets/process#map",
"Here I'm presenting all the scenarios so that you can further investigate the issue:\r\n\r\n- ✅ `cast_column` -> `map` with `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` with `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `cast_column` -> `map` -> `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n 
torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```",
"So on, I've created #4411 so as to fix the bug with `remove_columns` under certain conditions before `add_faiss_index`, which means that the scenarios not working above are already working fine."
] | 1,653,403,294,000 | 1,653,408,554,000 | null | CONTRIBUTOR | null | null | null | First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue.
## Describe the bug
Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back on a previously removed column. But this only happens under certain conditions... I'll present some scenarios below!
## Steps to reproduce the bug
Assuming the following dataset named `sample.csv` with some IMDb data:
```csv
id,title,summary
1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement."
9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others."
11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder."
1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him."
```
We'll be able to reproduce the bug using the following piece of code:
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset, Value
ds = load_dataset("csv", data_files=["sample.csv"], split="train")
ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32`
ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])})
ds = ds.remove_columns(["title", "summary"])
def generate_embeddings(x):
return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()}
ds = ds.map(generate_embeddings)
ds = ds.remove_columns("inputs")
ds.add_faiss_index(column="embeddings") # It fails here!
```
The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example.
## Expected results
Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered.
## Actual results
But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']` is raised, which makes no sense, as that column was previously dropped.
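## Workaround
A change that reportedly avoids the failure (see the working scenarios quoted in the comments above): drop the intermediate column inside the same `map` call via its `remove_columns` argument, instead of calling `remove_columns` afterwards. A minimal sketch, reusing the script above:
```python
# replaces: ds = ds.map(generate_embeddings) followed by ds = ds.remove_columns("inputs")
ds = ds.map(generate_embeddings, remove_columns=["inputs"])
ds.add_faiss_index(column="embeddings")  # reportedly no longer raises
```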
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4398/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4397/comments | https://api.github.com/repos/huggingface/datasets/issues/4397/events | https://github.com/huggingface/datasets/pull/4397 | 1,246,597,632 | PR_kwDODunzps44XcG3 | 4,397 | Fix dependency on dill version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,400,463,000 | 1,653,487,372,000 | 1,653,486,848,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4397",
"html_url": "https://github.com/huggingface/datasets/pull/4397",
"diff_url": "https://github.com/huggingface/datasets/pull/4397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4397.patch",
"merged_at": 1653486848000
} | We had to make a hotfix by pinning dill:
- #4380
because from version 0.3.5, our custom `save_function` pickling function was raising an exception:
- #4379
This PR fixes this by implementing our custom `save_function` depending on the version of dill.
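A minimal sketch of the version-gating approach (the function bodies are placeholders, not the PR's actual implementations):
```python
# sketch only: select the custom pickling function based on the installed dill version
from packaging import version
import dill

if version.parse(dill.__version__) < version.parse("0.3.5"):

    def save_function(pickler, obj):
        ...  # legacy implementation, written against dill < 0.3.5 internals

else:

    def save_function(pickler, obj):
        ...  # implementation adapted to the dill >= 0.3.5 internals
```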
CC: @anivegesana
This PR needs the following merged first:
- [x] #4384
- so that a circular import is fixed
It is also convenient to merge first:
- [x] #4385 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4397/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4396/comments | https://api.github.com/repos/huggingface/datasets/issues/4396/events | https://github.com/huggingface/datasets/pull/4396 | 1,245,479,399 | PR_kwDODunzps44T0Di | 4,396 | Fix URL in gem dataset for totto config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,326,172,000 | 1,653,371,351,000 | 1,653,370,860,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4396",
"html_url": "https://github.com/huggingface/datasets/pull/4396",
"diff_url": "https://github.com/huggingface/datasets/pull/4396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4396.patch",
"merged_at": 1653370859000
} | As commented in:
- https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372
CC: @StevenTang1998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4396/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4395/comments | https://api.github.com/repos/huggingface/datasets/issues/4395/events | https://github.com/huggingface/datasets/pull/4395 | 1,245,436,486 | PR_kwDODunzps44TrBA | 4,395 | Add Pascal VOC dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4395). All of your documentation changes will be reflected on that endpoint."
] | 1,653,323,645,000 | 1,653,401,733,000 | null | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4395",
"html_url": "https://github.com/huggingface/datasets/pull/4395",
"diff_url": "https://github.com/huggingface/datasets/pull/4395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4395.patch",
"merged_at": null
} | This PR adds the Pascal VOC dataset in the same way TFDS has it added. I believe we can iterate on this dataset and in future versions include more data, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4395/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4394/comments | https://api.github.com/repos/huggingface/datasets/issues/4394/events | https://github.com/huggingface/datasets/issues/4394 | 1,245,221,657 | I_kwDODunzps5KOJMZ | 4,394 | trainer became extremely slow after reload dataset by `load_from_disk` | {
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!",
"Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.",
"I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. It's seems that the problem is on the os's speed limit."
] | 1,653,314,677,000 | 1,653,321,761,000 | null | NONE | null | null | null | ## Describe the bug
Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them on multiple GPUs to run the training script. However, after I reload the data with `load_from_disk` and start training, the speed is extremely slow: the estimate says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card.
Since I am trying to pre-train a BERT model, **my dataset is very large (29,058,165 rows)**
## Steps to reproduce the bug
```python
tokenized_datasets.save_to_disk(
"/pathto/dataset"
)
tokenized_datasets = load_from_disk(
"/pathto/dataset"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
eval_dataset=tokenized_datasets["validation"]
if training_args.do_eval
else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
## Expected results
Without the save and reload process, I only need about one day to run the whole script with one A100 card.
## Actual results
```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```
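## Workaround
Based on the reporter's follow-up comment, loading the saved dataset fully into memory restores normal speed; a minimal sketch:
```python
from datasets import load_from_disk

# keep_in_memory=True avoids the slow on-disk reads observed above
tokenized_datasets = load_from_disk("/pathto/dataset", keep_in_memory=True)
```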
## Environment info
```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4394/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4393/comments | https://api.github.com/repos/huggingface/datasets/issues/4393/events | https://github.com/huggingface/datasets/pull/4393 | 1,244,876,662 | PR_kwDODunzps44RxWN | 4,393 | Update CI deprecated legacy image | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,298,542,000 | 1,653,300,508,000 | 1,653,299,995,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4393",
"html_url": "https://github.com/huggingface/datasets/pull/4393",
"diff_url": "https://github.com/huggingface/datasets/pull/4393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4393.patch",
"merged_at": 1653299995000
} | Now our CI still uses a deprecated legacy image:
> You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image.
This PR updates to a next-generation convenience image.
Related to:
- #2955 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4393/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4392/comments | https://api.github.com/repos/huggingface/datasets/issues/4392/events | https://github.com/huggingface/datasets/pull/4392 | 1,244,859,971 | PR_kwDODunzps44RtsX | 4,392 | remove int documentation from logging docs | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,297,895,000 | 1,653,319,015,000 | 1,653,318,512,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4392",
"html_url": "https://github.com/huggingface/datasets/pull/4392",
"diff_url": "https://github.com/huggingface/datasets/pull/4392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4392.patch",
"merged_at": 1653318512000
} | Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4392/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4391/comments | https://api.github.com/repos/huggingface/datasets/issues/4391/events | https://github.com/huggingface/datasets/pull/4391 | 1,244,839,185 | PR_kwDODunzps44RpGv | 4,391 | Refactor column mappings for question answering datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)",
"Does that mean that we need to change the metadata?",
"> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)",
"Oh good! thanks for the heads up!"
] | 1,653,297,194,000 | 1,653,397,020,000 | 1,653,396,528,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4391",
"html_url": "https://github.com/huggingface/datasets/pull/4391",
"diff_url": "https://github.com/huggingface/datasets/pull/4391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4391.patch",
"merged_at": 1653396528000
} | This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.
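A minimal illustration of the kind of reconstruction this enables (the flat key names below are hypothetical, not necessarily the PR's actual metadata keys):
```python
# hypothetical flat, YAML-safe keys -> the flattened column names needed downstream
FLAT_TO_NESTED = {
    "answers_text": "answers.text",
    "answers_answer_start": "answers.answer_start",
}

def to_column_name(flat_key: str) -> str:
    # non-nested columns such as "context" map to themselves
    return FLAT_TO_NESTED.get(flat_key, flat_key)
```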
cc @sashavor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4391/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4390). All of your documentation changes will be reflected on that endpoint."
] | 1,653,297,080,000 | 1,653,298,473,000 | null | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"merged_at": null
} | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
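A minimal sketch of a backward-compatible shim along these lines (assumed shape; the PR's actual code may differ):
```python
import sys

if sys.version_info >= (3, 8):
    from typing import get_args
else:

    def get_args(tp):
        # pre-3.8 fallback: read __args__ directly, defaulting to an empty tuple
        return getattr(tp, "__args__", ())
```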
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4389/comments | https://api.github.com/repos/huggingface/datasets/issues/4389/events | https://github.com/huggingface/datasets/pull/4389 | 1,244,693,690 | PR_kwDODunzps44RKMn | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,290,389,000 | 1,653,302,306,000 | 1,653,301,795,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"merged_at": 1653301795000
} | This PR fixes some URLs.
Fix #4386. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4389/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4388/comments | https://api.github.com/repos/huggingface/datasets/issues/4388/events | https://github.com/huggingface/datasets/pull/4388 | 1,244,645,158 | PR_kwDODunzps44RAG1 | 4,388 | Set builder name from module instead of class | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,287,195,000 | 1,653,456,283,000 | 1,653,455,775,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"merged_at": 1653455775000
} | Now the builder name attribute is set from from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e., the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class, and indeed it can have a name completely different from its module/directory/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
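A hypothetical loading script illustrating the point (all names invented for the example):
```python
# my_dataset/my_dataset.py -- the module name must match its directory, "my_dataset"
import datasets


class SomeUnrelatedName(datasets.GeneratorBasedBuilder):
    # the class name is unconstrained: caching under "some_unrelated_name" would not
    # match the dataset ID, whereas the module name "my_dataset" would
    def _info(self):
        ...

    def _split_generators(self, dl_manager):
        ...

    def _generate_examples(self):
        ...
```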
Fix #4381. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4388/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4387/comments | https://api.github.com/repos/huggingface/datasets/issues/4387/events | https://github.com/huggingface/datasets/issues/4387 | 1,244,147,817 | I_kwDODunzps5KKDBp | 4,387 | device/google/accessory/adk2012 - Git at Google | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,195,439,000 | 1,653,287,787,000 | 1,653,287,787,000 | NONE | null | null | null | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4387/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4386/comments | https://api.github.com/repos/huggingface/datasets/issues/4386/events | https://github.com/huggingface/datasets/issues/4386 | 1,243,965,532 | I_kwDODunzps5KJWhc | 4,386 | Bug for wiki_auto_asset_turk from GEM | {
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).",
"Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```",
"Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n",
"@StevenTang1998 fixed in:\r\n- #4396",
"Thanks!!"
] | 1,653,136,290,000 | 1,653,371,752,000 | 1,653,301,795,000 | NONE | null | null | null | ## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare
self._download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators
dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download
downloaded_path_or_paths = map_nested(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested
mapped = [
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested
return function(data_struct)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path
output_path = get_from_cache(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4386/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4385/comments | https://api.github.com/repos/huggingface/datasets/issues/4385/events | https://github.com/huggingface/datasets/pull/4385 | 1,243,921,287 | PR_kwDODunzps44OwXF | 4,385 | Test dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.",
"Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_<filename>.py` for each `<filename>.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https://github.com/huggingface/datasets/issues/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.",
"Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)"
] | 1,653,123,463,000 | 1,653,467,413,000 | 1,653,466,908,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4385",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"merged_at": 1653466908000
} | Regression test for future releases of `dill`.
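A minimal sketch of what such a regression test could look like (assumed shape; per the review discussion, the actual test lives in `test_fingerprint.py`):
```python
from datasets.fingerprint import Hasher


def test_hash_function_with_recent_dill():
    # with dill > 0.3.4 and no pin, hashing a function used to raise
    # AttributeError: module 'dill._dill' has no attribute 'stack'
    def f(x):
        return x + 1

    assert Hasher.hash(f) == Hasher.hash(f)
```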
Related to #4379. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4385/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4384/comments | https://api.github.com/repos/huggingface/datasets/issues/4384/events | https://github.com/huggingface/datasets/pull/4384 | 1,243,919,748 | PR_kwDODunzps44OwFr | 4,384 | Refactor download | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?",
"The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might be useful:\n\nhttps://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING",
"> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. "
] | 1,653,122,964,000 | 1,653,475,922,000 | 1,653,475,423,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4384",
"html_url": "https://github.com/huggingface/datasets/pull/4384",
"diff_url": "https://github.com/huggingface/datasets/pull/4384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4384.patch",
"merged_at": 1653475423000
} | This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture
Also note that when a library is not architecturally designed following simple, neat, clean principles, extensibility suffers, making it more and more difficult to implement enhancements.
As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860
- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:
```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
if config.DILL_VERSION < version.parse("0.3.5"):
E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```
Imports:
- datasets
- Dataset: lower level than datasets
- config: lower level than Dataset
- logger: lower level than config
- DownloadManager: !!! HIGHER level of abstraction than logger!!
Why should importing the logger require importing DownloadManager?!?
- Logically, it does not make sense
- This is due to an error in the design/architecture of our library:
- To import the logger, we need to import it from `.utils.logging`
- To import `.utils.logging` we need to import `.utils`
- The import of `.utils` requires the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`! (see the sketch below)
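A minimal sketch of the coupling (the import below is the one shown in the traceback above) and how the refactor removes it:
```python
# datasets/utils/__init__.py (before the refactor): importing *any* utils submodule
# first executes this file, which pulls in the higher-level download manager
from .download_manager import DownloadConfig, DownloadManager, DownloadMode

# after moving the download functionalities to their own package, this line would
# instead live in e.g. datasets/download/__init__.py (layout assumed), so that
# datasets.utils.logging can be imported without touching the download manager
```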
When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we are forced to import a higher-level module). Additionally, it clearly makes no sense that importing `logging` should require importing `download_manager` first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4384/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4383/comments | https://api.github.com/repos/huggingface/datasets/issues/4383/events | https://github.com/huggingface/datasets/issues/4383 | 1,243,856,981 | I_kwDODunzps5KI8BV | 4,383 | L | {
"login": "AronCodes21",
"id": 99847861,
"node_id": "U_kgDOBfOOtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AronCodes21",
"html_url": "https://github.com/AronCodes21",
"followers_url": "https://api.github.com/users/AronCodes21/followers",
"following_url": "https://api.github.com/users/AronCodes21/following{/other_user}",
"gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions",
"organizations_url": "https://api.github.com/users/AronCodes21/orgs",
"repos_url": "https://api.github.com/users/AronCodes21/repos",
"events_url": "https://api.github.com/users/AronCodes21/events{/privacy}",
"received_events_url": "https://api.github.com/users/AronCodes21/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,653,104,878,000 | 1,653,160,813,000 | 1,653,160,813,000 | NONE | null | null | null | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4383/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4382/comments | https://api.github.com/repos/huggingface/datasets/issues/4382/events | https://github.com/huggingface/datasets/issues/4382 | 1,243,839,783 | I_kwDODunzps5KI30n | 4,382 | First time trying | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,653,099,318,000 | 1,653,160,844,000 | 1,653,160,844,000 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4382/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4381/comments | https://api.github.com/repos/huggingface/datasets/issues/4381/events | https://github.com/huggingface/datasets/issues/4381 | 1,243,478,863 | I_kwDODunzps5KHftP | 4,381 | Bug in caching 2 datasets both with the same builder class name | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`"
] | 1,653,070,683,000 | 1,653,455,775,000 | 1,653,455,775,000 | MEMBER | null | null | null | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datasets, you will get the opposite dataset loaded (the difference is in the `label` and `label_text` fields).
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("mteb/mtop_intent", "en")
print(dataset['train'][0])
dataset = datasets.load_dataset("mteb/mtop_domain", "en")
print(dataset['train'][0])
```
## Expected results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}
```
## Actual results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: macOS-12.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
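For illustration only, a minimal sketch of the renaming fix suggested in the comments — giving each loading script its own builder class name so the two datasets stop sharing a cache directory. The class names are the ones proposed in the comment; the stub bodies are assumptions, not the real scripts:
```python
import datasets

# In mteb/mtop_domain's loading script (illustrative stub):
class MtopDomain(datasets.GeneratorBasedBuilder):
    """Renamed from `MTOP` so its cache dir no longer collides with mtop_intent."""
    ...

# In mteb/mtop_intent's loading script (illustrative stub):
class MtopIntent(datasets.GeneratorBasedBuilder):
    """Renamed from `MTOP` so its cache dir no longer collides with mtop_domain."""
    ...
```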
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4381/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4380/comments | https://api.github.com/repos/huggingface/datasets/issues/4380/events | https://github.com/huggingface/datasets/pull/4380 | 1,243,183,054 | PR_kwDODunzps44MUz0 | 4,380 | Pin dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,054,859,000 | 1,653,064,887,000 | 1,653,064,384,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4380",
"html_url": "https://github.com/huggingface/datasets/pull/4380",
"diff_url": "https://github.com/huggingface/datasets/pull/4380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4380.patch",
"merged_at": 1653064384000
} | Hotfix #4379.
CC: @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4380/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4379/comments | https://api.github.com/repos/huggingface/datasets/issues/4379/events | https://github.com/huggingface/datasets/issues/4379 | 1,243,175,854 | I_kwDODunzps5KGVuu | 4,379 | Latest dill release raises exception | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed by:\r\n- #4380 ",
"Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```",
"@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?",
"Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.",
"Hi @anivegesana, that test is in `transformers` library:\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L449\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py ",
"@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue and it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Since datasets copies the `save_function` directly from the dill API, it stops working with the new dill version since `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps://github.com/huggingface/datasets/blob/95193ae61e92aa537d0c65d37a1fd9d2393aae89/src/datasets/utils/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\nif dill.__version__.split('.') < ['0', '3', '5']:\r\n # current save_function code inside here\r\nelse:\r\n # new save_function code inside here with the following line inserted after creating the globals\r\n globs = {k: globs[k] for k in sorted(globs.keys())} \r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).",
"Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copied-pasted the old `save_function` function from `dill`, just adding a line to make deterministic the order of global variables `globs`. \r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ",
"I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation/dill#19"
] | 1,653,054,516,000 | 1,653,148,406,000 | 1,653,066,387,000 | MEMBER | null | null | null | ## Describe the bug
As reported by @sgugger, latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
    def get(self, timeout=None):
        self.wait(timeout)
        if not self.ready():
            raise TimeoutError
        if self._success:
            return self._value
        else:
>           raise self._value
E           TypeError: '>' not supported between instances of 'NoneType' and 'float'
```
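The fix merged for this (#4380, per the comments) pins dill. Purely as an illustration, a pin of this kind in a `setup.py` could look as follows — the exact specifier and the surrounding metadata are assumptions, not quoted from that PR:
```python
from setuptools import setup

setup(
    name="example-package",  # placeholder project name
    install_requires=[
        "dill<0.3.5",  # assumed pin: stay below the release that broke things here
    ],
)
```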
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4379/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4378/comments | https://api.github.com/repos/huggingface/datasets/issues/4378/events | https://github.com/huggingface/datasets/pull/4378 | 1,242,935,373 | PR_kwDODunzps44Lf2R | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"& thank you!"
] | 1,653,041,772,000 | 1,653,400,223,000 | 1,653,397,827,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4378",
"html_url": "https://github.com/huggingface/datasets/pull/4378",
"diff_url": "https://github.com/huggingface/datasets/pull/4378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4378.patch",
"merged_at": 1653397827000
} | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4378/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4377/comments | https://api.github.com/repos/huggingface/datasets/issues/4377/events | https://github.com/huggingface/datasets/pull/4377 | 1,242,746,186 | PR_kwDODunzps44K4OY | 4,377 | Fix checksum and bug in irc_disentangle dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,031,768,000 | 1,653,039,276,000 | 1,653,038,792,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4377",
"html_url": "https://github.com/huggingface/datasets/pull/4377",
"diff_url": "https://github.com/huggingface/datasets/pull/4377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4377.patch",
"merged_at": 1653038792000
} | There was a bug in the filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Partially fix #4376.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4377/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4376/comments | https://api.github.com/repos/huggingface/datasets/issues/4376/events | https://github.com/huggingface/datasets/issues/4376 | 1,242,218,144 | I_kwDODunzps5KCr6g | 4,376 | irc_disentagle viewer error | {
"login": "labouz",
"id": 25671683,
"node_id": "MDQ6VXNlcjI1NjcxNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/labouz",
"html_url": "https://github.com/labouz",
"followers_url": "https://api.github.com/users/labouz/followers",
"following_url": "https://api.github.com/users/labouz/following{/other_user}",
"gists_url": "https://api.github.com/users/labouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/labouz/subscriptions",
"organizations_url": "https://api.github.com/users/labouz/orgs",
"repos_url": "https://api.github.com/users/labouz/repos",
"events_url": "https://api.github.com/users/labouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/labouz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks for reporting, @labouz. I'm addressing it. ",
"The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.",
"parfait!\r\nit works now, thank you 🙏 "
] | 1,652,987,716,000 | 1,653,045,130,000 | null | NONE | null | null | null | The dataset viewer shows this message for the "ubuntu" config's "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
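For anyone hitting the checksum error described next, the comments above suggest forcing re-generation from the (now fixed) download instead of the stale cache; a minimal sketch (the "ubuntu" config is the default named in the comment logs):
```python
from datasets import load_dataset

# Re-generate the dataset from the downloaded file rather than reusing the stale cache.
ds = load_dataset("irc_disentangle", "ubuntu", download_mode="reuse_cache_if_exists")
```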
I get a Checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4376/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4375/comments | https://api.github.com/repos/huggingface/datasets/issues/4375/events | https://github.com/huggingface/datasets/pull/4375 | 1,241,921,147 | PR_kwDODunzps44IMCS | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4375). All of your documentation changes will be reflected on that endpoint."
] | 1,652,972,431,000 | 1,652,979,451,000 | null | MEMBER | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4375",
"html_url": "https://github.com/huggingface/datasets/pull/4375",
"diff_url": "https://github.com/huggingface/datasets/pull/4375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4375.patch",
"merged_at": null
} | Previously it was not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` couldn't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension was failing: https://github.com/huggingface/datasets/issues/3951
- `fsspec` doesn't work out of the box in subprocesses
From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 :
> Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test!
> If any async instance has been created, the newly forked processes must:
> 1. discard references to locks, threads and event loops and make new ones
> 2. not use any async fsspec instances from the parent process
> 3. clear all class instance caches
Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use class instances from the parent process.
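A minimal sketch of that idea — it assumes fsspec keeps its event loop and IO thread in the module-level holders `fsspec.asyn.loop` and `fsspec.asyn.iothread`, and it is not the PR's actual code:
```python
import fsspec.asyn

def _reset_fsspec_in_worker() -> None:
    # Drop the parent process's references so the forked DataLoader worker
    # lazily creates a fresh event loop and IO thread of its own.
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```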
Fix https://github.com/huggingface/datasets/issues/3950
Fix https://github.com/huggingface/datasets/issues/3951
TODO: fix tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4375/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4374/comments | https://api.github.com/repos/huggingface/datasets/issues/4374/events | https://github.com/huggingface/datasets/issues/4374 | 1,241,860,535 | I_kwDODunzps5KBUm3 | 4,374 | extremely slow processing when using a custom dataset | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,652,969,885,000 | 1,652,978,716,000 | null | NONE | null | null | null | ## Processing a custom dataset loaded as a .txt file is extremely slow compared to a dataset of similar volume from the Hub
I have a large 22 GB .txt file which I load as a HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Further, I use a pre-processing function to clean the dataset:
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)`
This processing takes an astronomical amount of time, while hogging all the RAM.
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine; it runs the same processing function and has the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
The predicted preprocessing times are as follows:
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
Note: both datasets are essentially the same, just provided by different sources (+/- some samples); only one is hosted on the HF Hub, while the other is downloaded in a text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc
def remove_non_indic_sentences(example):
    tmp_ls = []
    eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*'
    for e in listify(example['text']):
        matches = re.findall(eng_regex, e)
        # Strip each match and skip empty or punctuation-only matches
        for match in (str(match).strip() for match in matches if match not in [""," ", "  ", ",", " ,", ", ", " , "]):
            if len(list(match.split(" "))) > 2:
                e = re.sub(match, " ", e, count=1)
        tmp_ls.append(e)  # append inside the loop so every example in the batch is kept
    gc.collect()
    example['clean_text'] = tmp_ls
    return example
lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
# the same thing runs much faster when loading a similar dataset from the hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)
```
## Actual results
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine; it runs the same processing function and has the same amount of data.
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
**The predicted preprocessing times are as follows:**
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
**I even tried the following (a sketch of the sharding attempt is shown after this list):**
- sharding the large 22 GB text file into smaller files and loading those
- saving the file to disk and then loading
- using a lower `num_proc`
- using a smaller batch size
- processing without batches, i.e. without `batched=True`
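Purely as an illustration of that sharding attempt (the shard paths are hypothetical):
```python
import glob
import datasets

# Load many smaller shards instead of one 22 GB file.
shard_files = sorted(glob.glob("hi_shards/hi_part_*.txt"))  # hypothetical paths
lang_dataset = datasets.load_dataset("text", data_files={"train": shard_files})
```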
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version: 8.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4374/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4373/comments | https://api.github.com/repos/huggingface/datasets/issues/4373/events | https://github.com/huggingface/datasets/pull/4373 | 1,241,769,310 | PR_kwDODunzps44HsaY | 4,373 | Remove links in docs to old dataset viewer | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,966,679,000 | 1,653,060,268,000 | 1,653,059,765,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4373",
"html_url": "https://github.com/huggingface/datasets/pull/4373",
"diff_url": "https://github.com/huggingface/datasets/pull/4373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4373.patch",
"merged_at": 1653059765000
} | Remove the links in the docs to the no longer maintained dataset viewer. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4373/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4372/comments | https://api.github.com/repos/huggingface/datasets/issues/4372/events | https://github.com/huggingface/datasets/pull/4372 | 1,241,703,826 | PR_kwDODunzps44HeYC | 4,372 | Check if dataset features match before push in `DatasetDict.push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,963,550,000 | 1,653,060,216,000 | 1,653,059,730,000 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4372",
"html_url": "https://github.com/huggingface/datasets/pull/4372",
"diff_url": "https://github.com/huggingface/datasets/pull/4372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4372.patch",
"merged_at": 1653059730000
} | Fix #4211 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4372/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4371/comments | https://api.github.com/repos/huggingface/datasets/issues/4371/events | https://github.com/huggingface/datasets/pull/4371 | 1,241,500,906 | PR_kwDODunzps44GzSZ | 4,371 | Add missing language tags for udhr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,952,850,000 | 1,653,040,272,000 | 1,653,039,790,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4371",
"html_url": "https://github.com/huggingface/datasets/pull/4371",
"diff_url": "https://github.com/huggingface/datasets/pull/4371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4371.patch",
"merged_at": 1653039790000
} | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4371/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4369/comments | https://api.github.com/repos/huggingface/datasets/issues/4369/events | https://github.com/huggingface/datasets/pull/4369 | 1,240,245,642 | PR_kwDODunzps44CpCe | 4,369 | Add redirect to dataset script in the repo structure page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,893,533,000 | 1,652,948,341,000 | 1,652,947,851,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"merged_at": 1652947851000
} | Following https://github.com/huggingface/hub-docs/pull/146, I added a redirect to the dataset scripts documentation in the repository structure page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4369/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4368/comments | https://api.github.com/repos/huggingface/datasets/issues/4368/events | https://github.com/huggingface/datasets/pull/4368 | 1,240,064,860 | PR_kwDODunzps44CDFk | 4,368 | Add long answer candidates to natural questions dataset | {
"login": "seirasto",
"id": 4257308,
"node_id": "MDQ6VXNlcjQyNTczMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4257308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seirasto",
"html_url": "https://github.com/seirasto",
"followers_url": "https://api.github.com/users/seirasto/followers",
"following_url": "https://api.github.com/users/seirasto/following{/other_user}",
"gists_url": "https://api.github.com/users/seirasto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seirasto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seirasto/subscriptions",
"organizations_url": "https://api.github.com/users/seirasto/orgs",
"repos_url": "https://api.github.com/users/seirasto/repos",
"events_url": "https://api.github.com/users/seirasto/events{/privacy}",
"received_events_url": "https://api.github.com/users/seirasto/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4368). All of your documentation changes will be reflected on that endpoint."
] | 1,652,884,542,000 | 1,653,436,148,000 | null | NONE | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4368",
"html_url": "https://github.com/huggingface/datasets/pull/4368",
"diff_url": "https://github.com/huggingface/datasets/pull/4368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4368.patch",
"merged_at": null
} | This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format. @lhoestq @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4368/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4367/comments | https://api.github.com/repos/huggingface/datasets/issues/4367/events | https://github.com/huggingface/datasets/pull/4367 | 1,240,011,602 | PR_kwDODunzps44B340 | 4,367 | Remove config names as yaml keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I included the change from https://github.com/huggingface/datasets/pull/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright it's ready now :)\r\n\r\nHere is an example for the `ade_corpus_v2` dataset card. Notice the new `configs` key:\r\n\r\nhttps://github.com/huggingface/datasets/blob/76d9a141740a03f6836feb251f6059894b8d8046/datasets/ade_corpus_v2/README.md#L1-L78\r\n\r\nCI failures are only related to dataset cards missing some content."
] | 1,652,882,364,000 | 1,653,039,326,000 | 1,653,038,839,000 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4367",
"html_url": "https://github.com/huggingface/datasets/pull/4367",
"diff_url": "https://github.com/huggingface/datasets/pull/4367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4367.patch",
"merged_at": 1653038839000
} | Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys.
To fix this, I removed the per-config-name tag separation completely and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also removing the dots in the YAML keys would allow us to do as in https://github.com/huggingface/datasets/pull/4302 which removes a hack that replaces all the dots by underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like languages or task_ids
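Purely for illustration, a hypothetical sketch of such a check — the helper name, the front-matter handling, and the allowed-tag set are all assumptions, not code from this PR:
```python
import yaml  # PyYAML

ALLOWED_TAGS = {"languages", "task_ids", "configs"}  # illustrative subset

def check_yaml_tags(readme_text: str) -> None:
    # Dataset card tags live in the YAML front matter between the first two '---' markers.
    front_matter = readme_text.split("---")[1]
    tags = yaml.safe_load(front_matter)  # raises yaml.YAMLError if it cannot be parsed
    unknown = set(tags) - ALLOWED_TAGS
    assert not unknown, f"invalid YAML tag keys: {unknown}"
```
 | {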
"url": "https://api.github.com/repos/huggingface/datasets/issues/4367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4367/timeline | null | null | true |