Dataset schema (column, dtype, and observed value range or number of classes):

| Column | Dtype | Range / classes |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.26B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–4.44k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,654B |
| updated_at | int64 | 1,587B–1,654B |
| closed_at | int64 | 1,587B–1,654B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 1 value |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
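The rows that follow are a flattened dump of GitHub issues and pull requests from the huggingface/datasets repository. As a minimal sketch, assuming the dump is published on the Hugging Face Hub under a placeholder identifier (`"user/github-issues"` below is not the real name), it could be loaded and queried with the `datasets` library like this:

```python
from datasets import load_dataset

# "user/github-issues" is a placeholder identifier; substitute whichever
# dataset actually carries the schema shown above.
issues = load_dataset("user/github-issues", split="train")

print(issues.features)   # column names and dtypes (url, title, body, comments, ...)
print(issues.num_rows)

# Example query: open issues that are not pull requests.
open_issues = issues.filter(
    lambda row: row["state"] == "open" and not row["is_pull_request"]
)
print(open_issues.num_rows)
```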
https://api.github.com/repos/huggingface/datasets/issues/1288
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1288/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1288/comments
https://api.github.com/repos/huggingface/datasets/issues/1288/events
https://github.com/huggingface/datasets/pull/1288
759,309,457
MDExOlB1bGxSZXF1ZXN0NTM0MzM2Mzgz
1,288
Add CodeSearchNet corpus dataset
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq ready for a second review" ]
1,607,422,070,000
1,607,533,528,000
1,607,533,528,000
CONTRIBUTOR
null
This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet

I have had a few issues, mentioned below, and would appreciate some help on how to solve them.

## Issues generating dataset card

Is there something wrong with my declaration of the dataset features?

```
features=datasets.Features(
    {
        "repository_name": datasets.Value("string"),
        "func_path_in_repository": datasets.Value("string"),
        "func_name": datasets.Value("string"),
        "whole_func_string": datasets.Value("string"),
        "language": datasets.Value("string"),
        "func_code_string": datasets.Value("string"),
        "func_code_tokens": datasets.Sequence(datasets.Value("string")),
        "func_documentation_string": datasets.Value("string"),
        "func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
        "split_name": datasets.Value("string"),
        "func_code_url": datasets.Value("string"),
        # TODO - add licensing info in the examples
    }
),
```

When running the streamlit app for tagging the dataset on my machine, I get the following error:

![image](https://user-images.githubusercontent.com/33657802/101469132-9ed12c80-3944-11eb-94ff-2d9c1d0ea080.png)

## Issues with dummy data

Due to the unusual structure of the data, I have been unable to generate dummy data automatically. I tried to generate it manually, but pytests fail when using the manually generated dummy data. Pytests work fine when using the real data.

```
============================== test session starts ==============================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
plugins: xdist-2.1.0, forked-1.3.0
collected 1 item

tests/test_dataset_common.py F                                            [100%]

=================================== FAILURES =====================================
_________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________

self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'

    @slow
    def test_load_dataset_all_configs(self, dataset_name):
        configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)
>       self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)

tests/test_dataset_common.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

tests/test_dataset_common.py:198: in check_load_dataset
    self.parent.assertTrue(len(dataset[split]) > 0)
E   AssertionError: False is not true
----------------------------- Captured stdout call ------------------------------
Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0...
Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data.
----------------------------- Captured stderr call ------------------------------
... (irrelevant info - Deprecation warnings)
============================ short test summary info ============================
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true
======================== 1 failed, 4 warnings in 3.00s ==========================
```

## Note: Data structure in S3

The data is stored on S3 and organized by programming language, with the following repository structure:

```
.
├── <language_name>                 # e.g. python
│   └── final
│       └── jsonl
│           ├── test
│           │   └── <language_name>_test_0.jsonl.gz
│           ├── train
│           │   ├── <language_name>_train_0.jsonl.gz
│           │   ├── <language_name>_train_1.jsonl.gz
│           │   ├── ...
│           │   └── <language_name>_train_n.jsonl.gz
│           └── valid
│               └── <language_name>_valid_0.jsonl.gz
├── <language_name>_dedupe_definitions_v2.pkl
└── <language_name>_licenses.pkl
```
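The layout above maps naturally onto a per-language, per-split reader. Below is a minimal sketch, not the loading script from this PR: it assumes the archives have already been downloaded and extracted to a local `root` directory, and the helper name and the record field names (`repo`, `code`, `docstring`, ...) are assumptions based on the CodeSearchNet jsonl format.

```python
import gzip
import json
import os
from glob import glob


def iter_codesearchnet_examples(root, language, split):
    """Yield raw records from <root>/<language>/final/jsonl/<split>/<language>_<split>_*.jsonl.gz.

    Hypothetical helper for illustration only; the actual dataset script may differ.
    """
    pattern = os.path.join(
        root, language, "final", "jsonl", split, f"{language}_{split}_*.jsonl.gz"
    )
    for path in sorted(glob(pattern)):
        with gzip.open(path, "rt", encoding="utf-8") as f:
            for line in f:
                record = json.loads(line)
                # Field names assumed from the CodeSearchNet jsonl format.
                yield {
                    "repository_name": record.get("repo"),
                    "language": record.get("language"),
                    "func_code_string": record.get("code"),
                    "func_documentation_string": record.get("docstring"),
                }


# Usage sketch:
# for example in iter_codesearchnet_examples("./data", "python", "train"):
#     print(example["repository_name"])
#     break
```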
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1288/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1288/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1288", "html_url": "https://github.com/huggingface/datasets/pull/1288", "diff_url": "https://github.com/huggingface/datasets/pull/1288.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1288.patch", "merged_at": 1607533527000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1287
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1287/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1287/comments
https://api.github.com/repos/huggingface/datasets/issues/1287/events
https://github.com/huggingface/datasets/issues/1287
759,300,992
MDU6SXNzdWU3NTkzMDA5OTI=
1,287
'iwslt2017-ro-nl', cannot be downloaded
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[ "the same issue with datasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=split), ..... ", "even with setting master like the following command, still remains \r\n\r\ndatasets.load_dataset(\"iwslt2017\", 'iwslt2017-en-nl', split=\"train\", script_version=\"master\")\r\n", "Looks like the data has been moved from its original location to google drive\r\n\r\nNew url: https://drive.google.com/u/0/uc?id=12ycYSzLIG253AFN35Y6qoyf9wtkOjakp&export=download" ]
1,607,421,415,000
1,608,056,694,000
null
CONTRIBUTOR
null
Hi, I am trying `>>> datasets.load_dataset("iwslt2017", 'iwslt2017-ro-nl', split="train")` and getting the error below. Thank you for your help.

```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset iwsl_t217/iwslt2017-ro-nl (download: 314.07 MiB, generated: 39.92 MiB, post-processed: Unknown size, total: 354.00 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/iwsl_t217/iwslt2017-ro-nl/1.0.0/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/iwslt2017/cca6935a0851a8ceac1202a62c958738bdfa23c57a51bc52ac1c5ebd2aa172cd/iwslt2017.py", line 118, in _split_generators
    dl_dir = dl_manager.download_and_extract(MULTI_URL)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download
    num_proc=download_config.num_proc,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 216, in map_nested
    return function(data_struct)
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
    use_etag=download_config.use_etag,
  File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 477, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
```
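As a minimal sketch of the workaround suggested in the comments above (loading the iwslt2017 script from the master branch, which may point at the relocated archive), hedged because the fix ultimately depends on an updated download URL being merged:

```python
import datasets

# Workaround sketch restating the suggestion from the comments: pin the loading
# script to the master branch. `script_version` is the datasets 1.x argument name
# (later versions renamed it to `revision`).
try:
    ds = datasets.load_dataset(
        "iwslt2017",
        "iwslt2017-ro-nl",
        split="train",
        script_version="master",
    )
    print(ds)
except ConnectionError as err:
    # The original host (wit3.fbk.eu) may still be unreachable; see the comments
    # above for the relocated Google Drive archive.
    print(f"Download failed: {err}")
```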
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1287/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1287/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1286
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1286/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1286/comments
https://api.github.com/repos/huggingface/datasets/issues/1286/events
https://github.com/huggingface/datasets/issues/1286
759,291,509
MDU6SXNzdWU3NTkyOTE1MDk=
1,286
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I remember also getting the same issue for several other translation datasets like all the iwslt2017 group, this is blokcing me and I really need to fix it and I was wondering if you have an idea on this. @lhoestq thanks,. ", "maybe there is an empty line or something inside these datasets? could you tell me why this is happening? thanks ", "I just checked and the wmt16 en-ro doesn't have empty lines\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"wmt16\", \"ro-en\", split=\"train\")\r\nlen(d) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"en\"].strip()) > 0)) # 610320\r\nlen(d.filter(lambda x: len(x[\"translation\"][\"ro\"].strip()) > 0)) # 610320\r\n# also tested for split=\"validation\" and \"test\"\r\n```\r\n\r\nCan you open an issue on the `transformers` repo ? also cc @sgugger ", "Hi @lhoestq \r\nI am not really sure which part is causing this, to me this is more related to dataset library as this is happening for some of the datassets below please find the information to reprodcue the bug, this is really blocking me and I appreciate your help\r\n\r\n\r\n## Environment info\r\n- `transformers` version: 3.5.1\r\n- Platform: GPU\r\n- Python version: 3.7 \r\n- PyTorch version (GPU?): 1.0.4\r\n- Tensorflow version (GPU?): - \r\n- Using GPU in script?: - \r\n- Using distributed or parallel set-up in script?: - \r\n\r\n### Who can help\r\n tokenizers: @mfuntowicz\r\n Trainer: @sgugger\r\n TextGeneration: @TevenLeScao \r\n nlp datasets: [different repo](https://github.com/huggingface/nlp)\r\n rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)\r\n examples/seq2seq: @patil-suraj\r\n\r\n## Information\r\nHi\r\nI am testing seq2seq model with T5 on different datasets and this is always getting the following bug, this is really blocking me as this fails for many datasets. could you have a look please? 
thanks \r\n\r\n```\r\n[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n\r\n```\r\n\r\nTo reproduce the error please run on 1 GPU:\r\n```\r\ngit clone git@github.com:rabeehk/debug-seq2seq.git\r\npython setup.py develop \r\ncd seq2seq \r\npython finetune_t5_trainer.py temp.json\r\n\r\n```\r\n\r\nFull output of the program:\r\n\r\n```\r\n(internship) rkarimi@vgnh008:/idiap/user/rkarimi/dev/debug-seq2seq/seq2seq$ python finetune_t5_trainer.py temp.json \r\n2020-12-12 15:38:16.234542: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n2020-12-12 15:38:16.234598: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n12/12/2020 15:38:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False\r\n12/12/2020 15:38:32 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='outputs/test', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=64, per_device_eval_batch_size=64, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.01, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=2, max_steps=-1, warmup_steps=500, logging_dir='runs/Dec12_15-38-32_vgnh008', logging_first_step=True, logging_steps=200, save_steps=200, save_total_limit=1, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=200, dataloader_num_workers=0, past_index=-1, run_name='outputs/test', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, label_smoothing=0.1, sortish_sampler=False, predict_with_generate=True, adafactor=False, encoder_layerdrop=None, decoder_layerdrop=None, dropout=None, attention_dropout=None, lr_scheduler='linear', fixed_length_emb=None, encoder_projection=None, encoder_pooling=None, projection_length=None, only_projection_bottleneck=False, concat_projection_token=False, gcs_bucket='ruse-xcloud-bucket', temperature=10, train_adapters=True, do_finetune=True, parametric_task_embedding=False, eval_output_dir='outputs/finetune-adapter/test-n-1-lr-1e-02-e-20')\r\nSome weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 
'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.0.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.0.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 
'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.1.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.1.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 
'encoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.2.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.2.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 
'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.3.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.3.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 
'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.4.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.4.layer.1.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.weight', 
'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'encoder.block.5.layer.1.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.weight', 'encoder.block.5.layer.1.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.0.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.0.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 
'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.1.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.1.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 
'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.2.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.2.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 
'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.3.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.3.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 
'decoder.block.4.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.4.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.4.layer.2.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.0.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.0.adapter_controller.post_layer_norm.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.weight_generator.1.bias', 
'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_up_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.weight_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.0.bias', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.weight', 'decoder.block.5.layer.2.adapter_controller.meta_down_sampler.bias_generator.1.bias', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.weight', 'decoder.block.5.layer.2.adapter_controller.post_layer_norm.bias']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140079090376272 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549312272 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:44 - INFO - filelock - Lock 140082549365648 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-6810ece2a440c3be.arrow\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on 
/idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549560848 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:45 - INFO - filelock - Lock 140082549365200 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-9a2822394a3a4e34.arrow\r\n12/12/2020 15:38:45 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b464cc20> for task boolq\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num examples = 10\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total train batch size (w. parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:45 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n{'loss': 529.79443359375, 'learning_rate': 2e-05, 'epoch': 1.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.37it/s]12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. 
Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2.43it/s]\r\n12/12/2020 15:38:46 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/test\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 acquired on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929680 released on /idiap/home/rkarimi/.cache/huggingface/datasets/4c7b1146606607c193d1ef601d8d0c134521b2ac59f61ee98c09119be925ee16.7ad892de9d7f1b4f9dfc598ef31e4a398a7224176bc9a3110e0e2075ff943e8f.py.lock\r\nUsing custom data configuration default\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079084929360 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 acquired on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nReusing dataset boolq (/idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534)\r\n12/12/2020 15:38:59 - INFO - filelock - Lock 140079085355216 released on /idiap/temp/rkarimi/cache_home_1/datasets/_idiap_temp_rkarimi_cache_home_1_datasets_boolq_default_0.1.0_1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534.lock\r\nLoading cached processed dataset at /idiap/temp/rkarimi/cache_home_1/datasets/boolq/default/0.1.0/1fcfdc6f36dc89a2245ffbbd5248ab33890594b50396731ebc78411bdd2ca534/cache-164dd1d57e9fa69a.arrow\r\n12/12/2020 15:38:59 - INFO - seq2seq.metrics.metrics - selected metric <function build_compute_metrics_fn.<locals>.classification_metrics at 0x7f66b40c67a0> for task boolq\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - ***** Running training *****\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num examples = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Num Epochs = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Instantaneous batch size per device = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total train batch size (w. 
parallel, distributed & accumulation) = 64\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Gradient Accumulation steps = 1\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Total optimization steps = 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from checkpoint, will skip to saved global_step\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from epoch 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Continuing training from global step 2\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Will skip the first 0 steps in the first epoch\r\n 0%| | 0/2 [00:00<?, ?it/s]12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - \r\n\r\nTraining completed. Do not forget to share your model on huggingface.co/models =)\r\n\r\n\r\n{'epoch': 2.0} \r\n 0%| | 0/2 [00:00<?, ?it/s]\r\n12/12/2020 15:38:59 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/finetune-adapter/test-n-1-lr-1e-02-e-20/boolq\r\n12/12/2020 15:39:07 - INFO - seq2seq.utils.utils - using task specific params for boolq: {'max_length': 3}\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation *****\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Num examples = 3269\r\n12/12/2020 15:39:07 - INFO - seq2seq.trainers.trainer - Batch size = 64\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52/52 [00:12<00:00, 4.86it/s][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): \r\nterminate called after throwing an instance of 'google::protobuf::FatalException'\r\n what(): CHECK failed: (index) >= (0): \r\nAborted\r\n```\r\n\r\n\r\n\r\n", "solved see https://github.com/huggingface/transformers/issues/9079?_pjax=%23js-repo-pjax-container ", "Hii please follow me" ]
1,607,420,655,000
1,607,801,782,000
1,607,790,156,000
CONTRIBUTOR
null
Hi I am getting this error when evaluating on wmt16-ro-en using finetune_trainer.py of huggingface repo. thank for your help {'epoch': 20.0} 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:16<00:00, 1.22it/s] 12/08/2020 10:41:19 - INFO - seq2seq.trainers.trainer - Saving model checkpoint to outputs/experiment/joint/finetune/lr-2e-5 12/08/2020 10:41:24 - INFO - __main__ - {'wmt16-en-ro': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1998), 'qnli': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 5462), 'scitail': Dataset(features: {'src_texts': Value(dtype='string', id=None), 'task': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 1303)} 12/08/2020 10:41:24 - INFO - __main__ - *** Evaluate *** 12/08/2020 10:41:24 - INFO - seq2seq.utils.utils - using task specific params for wmt16-en-ro: {'max_length': 300, 'num_beams': 4} 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - ***** Running Evaluation ***** 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Num examples = 1998 12/08/2020 10:41:24 - INFO - seq2seq.trainers.trainer - Batch size = 64 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 32/32 [00:37<00:00, 1.19s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): Aborted
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1286/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1286/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1285/comments
https://api.github.com/repos/huggingface/datasets/issues/1285/events
https://github.com/huggingface/datasets/issues/1285
759,278,758
MDU6SXNzdWU3NTkyNzg3NTg=
1,285
boolq does not work
{ "login": "rabeehk", "id": 6278280, "node_id": "MDQ6VXNlcjYyNzgyODA=", "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rabeehk", "html_url": "https://github.com/rabeehk", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "repos_url": "https://api.github.com/users/rabeehk/repos", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "here is the minimal code to reproduce\r\n\r\n`datasets>>> datasets.load_dataset(\"boolq\", \"train\")\r\n\r\nthe errors\r\n\r\n```\r\n`cahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nUsing custom data configuration train\r\nDownloading and preparing dataset boolq/train (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /idiap/temp/rkarimi/cache_home_1/datasets/boolq/train/0.1.0/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 531, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \" /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py\", line 74, in _split_generators\r\n downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py\", line 149, in download_custom\r\n custom_download(url, path)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py\", line 516, in copy_v2\r\n compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite)\r\n\r\n\r\n\r\n```", "This has been fixed by #881 \r\nthis fix will be available in the next release soon.\r\n\r\nIf you don't want to wait for the release you can actually load the latest version of boolq by specifying `script_version=\"master\"` in `load_dataset`", "thank you this solved this issue, for now seems to work, thanks " ]
1,607,419,727,000
1,607,420,830,000
1,607,420,830,000
CONTRIBUTOR
null
Hi I am getting this error when trying to load boolq, thanks for your help ts_boolq_default_0.1.0_2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11.lock Traceback (most recent call last): File "finetune_t5_trainer.py", line 274, in <module> main() File "finetune_t5_trainer.py", line 147, in main for task in data_args.tasks] File "finetune_t5_trainer.py", line 147, in <listcomp> for task in data_args.tasks] File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 58, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/ruse/seq2seq/tasks/tasks.py", line 54, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 149, in download_custom custom_download(url, path) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/tensorflow/python/lib/io/file_io.py", line 516, in copy_v2 compat.path_to_bytes(src), compat.path_to_bytes(dst), overwrite) tensorflow.python.framework.errors_impl.AlreadyExistsError: file already exists
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1285/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1285/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1284/comments
https://api.github.com/repos/huggingface/datasets/issues/1284/events
https://github.com/huggingface/datasets/pull/1284
759,269,920
MDExOlB1bGxSZXF1ZXN0NTM0MzAzNDk0
1,284
Update coqa dataset url
{ "login": "ojasaar", "id": 73708394, "node_id": "MDQ6VXNlcjczNzA4Mzk0", "avatar_url": "https://avatars.githubusercontent.com/u/73708394?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ojasaar", "html_url": "https://github.com/ojasaar", "followers_url": "https://api.github.com/users/ojasaar/followers", "following_url": "https://api.github.com/users/ojasaar/following{/other_user}", "gists_url": "https://api.github.com/users/ojasaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/ojasaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ojasaar/subscriptions", "organizations_url": "https://api.github.com/users/ojasaar/orgs", "repos_url": "https://api.github.com/users/ojasaar/repos", "events_url": "https://api.github.com/users/ojasaar/events{/privacy}", "received_events_url": "https://api.github.com/users/ojasaar/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,418,998,000
1,607,451,549,000
1,607,451,549,000
CONTRIBUTOR
null
`datasets.stanford.edu` is invalid.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1284/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1284/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1284", "html_url": "https://github.com/huggingface/datasets/pull/1284", "diff_url": "https://github.com/huggingface/datasets/pull/1284.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1284.patch", "merged_at": 1607451549000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1283/comments
https://api.github.com/repos/huggingface/datasets/issues/1283/events
https://github.com/huggingface/datasets/pull/1283
759,251,457
MDExOlB1bGxSZXF1ZXN0NTM0Mjg4MDg2
1,283
Add dutch book review dataset
{ "login": "benjaminvdb", "id": 8875786, "node_id": "MDQ6VXNlcjg4NzU3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminvdb", "html_url": "https://github.com/benjaminvdb", "followers_url": "https://api.github.com/users/benjaminvdb/followers", "following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}", "starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions", "organizations_url": "https://api.github.com/users/benjaminvdb/orgs", "repos_url": "https://api.github.com/users/benjaminvdb/repos", "events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}", "received_events_url": "https://api.github.com/users/benjaminvdb/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Really cool thanks !\r\n> \r\n> I left some (minor) comments\r\n\r\nThank you for your comments! 👍 I went ahead and improved the dataset card using your suggestions and some tweaks of my own. I hope you like it! 😄" ]
1,607,417,448,000
1,607,545,318,000
1,607,534,725,000
CONTRIBUTOR
null
- Name: Dutch Book Review Dataset (DBRD) - Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch. - Paper: https://arxiv.org/abs/1910.00896 - Data: https://github.com/benjaminvdb/DBRD - Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating. Checks - [x] Create the dataset script /datasets/dbrd/dbrd.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _info(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1283/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1283/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1283", "html_url": "https://github.com/huggingface/datasets/pull/1283", "diff_url": "https://github.com/huggingface/datasets/pull/1283.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1283.patch", "merged_at": 1607534725000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1282
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1282/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1282/comments
https://api.github.com/repos/huggingface/datasets/issues/1282/events
https://github.com/huggingface/datasets/pull/1282
759,208,335
MDExOlB1bGxSZXF1ZXN0NTM0MjQ4NzI5
1,282
add thaiqa_squad
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,415,278,000
1,607,452,578,000
1,607,452,578,000
CONTRIBUTOR
null
The example format is a little different from SQuAD since `thaiqa` always has one answer per question, so I added a check that converts answers to lists if they are not already one, to future-proof additional questions that might have multiple answers. `thaiqa_squad` is an open-domain, extractive question answering dataset (4,000 questions in `train` and 74 questions in `dev`) in [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, originally created by [NECTEC](https://www.nectec.or.th/en/) from Wikipedia articles and adapted to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format by [PyThaiNLP](https://github.com/PyThaiNLP/).
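A minimal sketch of the answer-normalization check described above (a hypothetical helper, not the actual loader code):

```python
def normalize_answers(answers):
    """Wrap a single SQuAD-style answer dict in a list so every example shares
    the same schema, even though thaiqa currently has one answer per question."""
    if not isinstance(answers, list):
        answers = [answers]
    return answers

# A single answer becomes a one-element list; an existing list is left untouched.
print(normalize_answers({"text": "example answer", "answer_start": 42}))
print(normalize_answers([{"text": "example answer", "answer_start": 42}]))
```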
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1282/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1282/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1282", "html_url": "https://github.com/huggingface/datasets/pull/1282", "diff_url": "https://github.com/huggingface/datasets/pull/1282.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1282.patch", "merged_at": 1607452578000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1281
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1281/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1281/comments
https://api.github.com/repos/huggingface/datasets/issues/1281/events
https://github.com/huggingface/datasets/pull/1281
759,203,317
MDExOlB1bGxSZXF1ZXN0NTM0MjQ0MTA1
1,281
adding hybrid_qa
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,415,019,000
1,607,450,968,000
1,607,450,820,000
MEMBER
null
Adding HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data https://github.com/wenhuchen/HybridQA
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1281/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1281/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1281", "html_url": "https://github.com/huggingface/datasets/pull/1281", "diff_url": "https://github.com/huggingface/datasets/pull/1281.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1281.patch", "merged_at": 1607450820000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1280/comments
https://api.github.com/repos/huggingface/datasets/issues/1280/events
https://github.com/huggingface/datasets/pull/1280
759,151,028
MDExOlB1bGxSZXF1ZXN0NTM0MTk2MDc0
1,280
disaster response messages dataset
{ "login": "darshan-gandhi", "id": 44197177, "node_id": "MDQ6VXNlcjQ0MTk3MTc3", "avatar_url": "https://avatars.githubusercontent.com/u/44197177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/darshan-gandhi", "html_url": "https://github.com/darshan-gandhi", "followers_url": "https://api.github.com/users/darshan-gandhi/followers", "following_url": "https://api.github.com/users/darshan-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/darshan-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/darshan-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/darshan-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/darshan-gandhi/orgs", "repos_url": "https://api.github.com/users/darshan-gandhi/repos", "events_url": "https://api.github.com/users/darshan-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/darshan-gandhi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I have added the Readme.md as well, the PR is ready for review. \r\n\r\nThank you ", "Hi @lhoestq I have updated the code and files. Please if you could check once.\r\n\r\nThank you" ]
1,607,412,436,000
1,607,530,917,000
1,607,530,917,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1280/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1280/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1280", "html_url": "https://github.com/huggingface/datasets/pull/1280", "diff_url": "https://github.com/huggingface/datasets/pull/1280.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1280.patch", "merged_at": 1607530917000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1279
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1279/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1279/comments
https://api.github.com/repos/huggingface/datasets/issues/1279/events
https://github.com/huggingface/datasets/pull/1279
759,108,726
MDExOlB1bGxSZXF1ZXN0NTM0MTU4OTY5
1,279
added para_pat
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Updated with Translation feature type. Working on dataset tags and README", "merging since the CI is fixed on master" ]
1,607,408,927,000
1,607,953,277,000
1,607,953,277,000
CONTRIBUTOR
null
Dataset link : https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632 Working on README.md currently
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1279/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1279/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1279", "html_url": "https://github.com/huggingface/datasets/pull/1279", "diff_url": "https://github.com/huggingface/datasets/pull/1279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1279.patch", "merged_at": 1607953277000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1278
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1278/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1278/comments
https://api.github.com/repos/huggingface/datasets/issues/1278/events
https://github.com/huggingface/datasets/pull/1278
758,988,465
MDExOlB1bGxSZXF1ZXN0NTM0MDYwNDY5
1,278
Craigslist bargains
{ "login": "ZacharySBrown", "id": 7950786, "node_id": "MDQ6VXNlcjc5NTA3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/7950786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZacharySBrown", "html_url": "https://github.com/ZacharySBrown", "followers_url": "https://api.github.com/users/ZacharySBrown/followers", "following_url": "https://api.github.com/users/ZacharySBrown/following{/other_user}", "gists_url": "https://api.github.com/users/ZacharySBrown/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZacharySBrown/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZacharySBrown/subscriptions", "organizations_url": "https://api.github.com/users/ZacharySBrown/orgs", "repos_url": "https://api.github.com/users/ZacharySBrown/repos", "events_url": "https://api.github.com/users/ZacharySBrown/events{/privacy}", "received_events_url": "https://api.github.com/users/ZacharySBrown/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpwvji917g/extracted/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14/dummy_data/train__VARIABLE_MISUSE__SStuB.txt-00001-of-00300'`\r\n\r\nCould this be because of the files in my `dummy_data.zip`? I had to manually create it, and it looked like the test was looking for the following files, so I created the `.zip` with this structure:\r\n\r\n```\r\nArchive: dummy_data.zip\r\n creating: dummy_data/\r\n inflating: dummy_data/blobtest \r\n inflating: dummy_data/parsed.jsontrain \r\n inflating: dummy_data/parsed.jsonvalidation \r\n```", "Going to close this out and link to a new (cleaner) PR" ]
1,607,391,955,000
1,607,474,775,000
1,607,474,775,000
CONTRIBUTOR
null
`craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1278/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1278/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1278", "html_url": "https://github.com/huggingface/datasets/pull/1278", "diff_url": "https://github.com/huggingface/datasets/pull/1278.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1278.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1276
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1276/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1276/comments
https://api.github.com/repos/huggingface/datasets/issues/1276/events
https://github.com/huggingface/datasets/pull/1276
758,965,936
MDExOlB1bGxSZXF1ZXN0NTM0MDQyODYy
1,276
add One Million Posts Corpus
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,388,608,000
1,607,711,298,000
1,607,711,298,000
CONTRIBUTOR
null
- **Name:** One Million Posts Corpus - **Description:** The “One Million Posts” corpus is an annotated data set consisting of user comments posted to an Austrian newspaper website (in German language). - **Paper:** https://dl.acm.org/doi/10.1145/3077136.3080711 - **Data:** https://github.com/OFAI/million-post-corpus - **Motivation:** Big German (real-life) dataset containing different annotations around forum moderation with expert annotations. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1276/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1276/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1276", "html_url": "https://github.com/huggingface/datasets/pull/1276", "diff_url": "https://github.com/huggingface/datasets/pull/1276.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1276.patch", "merged_at": 1607711298000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1275/comments
https://api.github.com/repos/huggingface/datasets/issues/1275/events
https://github.com/huggingface/datasets/pull/1275
758,958,066
MDExOlB1bGxSZXF1ZXN0NTM0MDM2NjIw
1,275
Yoruba GV NER added
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you. Okay, I will add the dataset card." ]
1,607,387,498,000
1,607,469,928,000
1,607,469,928,000
CONTRIBUTOR
null
I just added the Yoruba GV NER dataset from this paper: https://www.aclweb.org/anthology/2020.lrec-1.335/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1275/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1275/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1275", "html_url": "https://github.com/huggingface/datasets/pull/1275", "diff_url": "https://github.com/huggingface/datasets/pull/1275.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1275.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1274
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1274/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1274/comments
https://api.github.com/repos/huggingface/datasets/issues/1274/events
https://github.com/huggingface/datasets/pull/1274
758,943,174
MDExOlB1bGxSZXF1ZXN0NTM0MDI0MTQx
1,274
oclar-dataset
{ "login": "alaameloh", "id": 26907161, "node_id": "MDQ6VXNlcjI2OTA3MTYx", "avatar_url": "https://avatars.githubusercontent.com/u/26907161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alaameloh", "html_url": "https://github.com/alaameloh", "followers_url": "https://api.github.com/users/alaameloh/followers", "following_url": "https://api.github.com/users/alaameloh/following{/other_user}", "gists_url": "https://api.github.com/users/alaameloh/gists{/gist_id}", "starred_url": "https://api.github.com/users/alaameloh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alaameloh/subscriptions", "organizations_url": "https://api.github.com/users/alaameloh/orgs", "repos_url": "https://api.github.com/users/alaameloh/repos", "events_url": "https://api.github.com/users/alaameloh/events{/privacy}", "received_events_url": "https://api.github.com/users/alaameloh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,385,405,000
1,607,528,168,000
1,607,528,168,000
CONTRIBUTOR
null
The Opinion Corpus for Lebanese Arabic Reviews (OCLAR) is usable for Arabic sentiment classification on reviews, including hotels, restaurants, shops, and others: [homepage](http://archive.ics.uci.edu/ml/datasets/Opinion+Corpus+for+Lebanese+Arabic+Reviews+%28OCLAR%29#)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1274/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1274/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1274", "html_url": "https://github.com/huggingface/datasets/pull/1274", "diff_url": "https://github.com/huggingface/datasets/pull/1274.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1274.patch", "merged_at": 1607528168000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1273
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1273/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1273/comments
https://api.github.com/repos/huggingface/datasets/issues/1273/events
https://github.com/huggingface/datasets/pull/1273
758,935,768
MDExOlB1bGxSZXF1ZXN0NTM0MDE4MjQ2
1,273
Created wiki_movies dataset.
{ "login": "aclifton314", "id": 53267795, "node_id": "MDQ6VXNlcjUzMjY3Nzk1", "avatar_url": "https://avatars.githubusercontent.com/u/53267795?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aclifton314", "html_url": "https://github.com/aclifton314", "followers_url": "https://api.github.com/users/aclifton314/followers", "following_url": "https://api.github.com/users/aclifton314/following{/other_user}", "gists_url": "https://api.github.com/users/aclifton314/gists{/gist_id}", "starred_url": "https://api.github.com/users/aclifton314/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aclifton314/subscriptions", "organizations_url": "https://api.github.com/users/aclifton314/orgs", "repos_url": "https://api.github.com/users/aclifton314/repos", "events_url": "https://api.github.com/users/aclifton314/events{/privacy}", "received_events_url": "https://api.github.com/users/aclifton314/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like your PR includes changes about many other files than the ones for wiki_movies\r\n\r\nCan you create another branch and another PR please ?", "I'm happy to. What's the best way to do that (sorry, I'm new to PRs etc.)?", "Sure !\r\n\r\nFirst please save your new dataset files somewhere.\r\nThen you can do in this order:\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push\r\ngit checkout -b my-new-branch-name\r\n```\r\nThis will create a new branch from the updated master branch.\r\nThen you can re-add your files and commit + push them\r\n\r\nOnce it's done you should be able to create a new PR using your new branch :) ", "Done!", "closing in favor of #1485 " ]
1,607,384,334,000
1,607,954,209,000
1,607,954,209,000
CONTRIBUTOR
null
First PR (ever). Hopefully this movies dataset is useful to others!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1273/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1273/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1273", "html_url": "https://github.com/huggingface/datasets/pull/1273", "diff_url": "https://github.com/huggingface/datasets/pull/1273.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1273.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1272
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1272/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1272/comments
https://api.github.com/repos/huggingface/datasets/issues/1272/events
https://github.com/huggingface/datasets/pull/1272
758,924,960
MDExOlB1bGxSZXF1ZXN0NTM0MDA5MTk0
1,272
Psc
{ "login": "abecadel", "id": 1654113, "node_id": "MDQ6VXNlcjE2NTQxMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/1654113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abecadel", "html_url": "https://github.com/abecadel", "followers_url": "https://api.github.com/users/abecadel/followers", "following_url": "https://api.github.com/users/abecadel/following{/other_user}", "gists_url": "https://api.github.com/users/abecadel/gists{/gist_id}", "starred_url": "https://api.github.com/users/abecadel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abecadel/subscriptions", "organizations_url": "https://api.github.com/users/abecadel/orgs", "repos_url": "https://api.github.com/users/abecadel/repos", "events_url": "https://api.github.com/users/abecadel/events{/privacy}", "received_events_url": "https://api.github.com/users/abecadel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,383,176,000
1,607,384,885,000
1,607,384,868,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1272/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1272/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1272", "html_url": "https://github.com/huggingface/datasets/pull/1272", "diff_url": "https://github.com/huggingface/datasets/pull/1272.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1272.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1271
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1271/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1271/comments
https://api.github.com/repos/huggingface/datasets/issues/1271/events
https://github.com/huggingface/datasets/pull/1271
758,924,203
MDExOlB1bGxSZXF1ZXN0NTM0MDA4NTg4
1,271
SMS Spam Dataset
{ "login": "czabo", "id": 75574105, "node_id": "MDQ6VXNlcjc1NTc0MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/czabo", "html_url": "https://github.com/czabo", "followers_url": "https://api.github.com/users/czabo/followers", "following_url": "https://api.github.com/users/czabo/following{/other_user}", "gists_url": "https://api.github.com/users/czabo/gists{/gist_id}", "starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/czabo/subscriptions", "organizations_url": "https://api.github.com/users/czabo/orgs", "repos_url": "https://api.github.com/users/czabo/repos", "events_url": "https://api.github.com/users/czabo/events{/privacy}", "received_events_url": "https://api.github.com/users/czabo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,383,086,000
1,607,449,339,000
1,607,449,339,000
CONTRIBUTOR
null
Hi :) I added this [SMS Spam Dataset](http://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1271/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1271/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1271", "html_url": "https://github.com/huggingface/datasets/pull/1271", "diff_url": "https://github.com/huggingface/datasets/pull/1271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1271.patch", "merged_at": 1607449339000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1270
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1270/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1270/comments
https://api.github.com/repos/huggingface/datasets/issues/1270/events
https://github.com/huggingface/datasets/pull/1270
758,917,216
MDExOlB1bGxSZXF1ZXN0NTM0MDAyODIz
1,270
add DFKI SmartData Corpus
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,382,228,000
1,607,449,283,000
1,607,449,283,000
CONTRIBUTOR
null
- **Name:** DFKI SmartData Corpus - **Description:** DFKI SmartData Corpus is a dataset of 2598 German-language documents which has been annotated with fine-grained geo-entities, such as streets, stops and routes, as well as standard named entity types. - **Paper:** https://www.dfki.de/fileadmin/user_upload/import/9427_lrec_smartdata_corpus.pdf - **Data:** https://github.com/DFKI-NLP/smartdata-corpus - **Motivation:** Contains fine-grained NER labels for German. ### Checkbox - [X] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [X] Fill the `_DESCRIPTION` and `_CITATION` variables - [X] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [X] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [X] Generate the metadata file `dataset_infos.json` for all configurations - [X] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [X] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [X] Both tests for the real data and the dummy data pass.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1270/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1270/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1270", "html_url": "https://github.com/huggingface/datasets/pull/1270", "diff_url": "https://github.com/huggingface/datasets/pull/1270.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1270.patch", "merged_at": 1607449283000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1269/comments
https://api.github.com/repos/huggingface/datasets/issues/1269/events
https://github.com/huggingface/datasets/pull/1269
758,886,174
MDExOlB1bGxSZXF1ZXN0NTMzOTc3MTE2
1,269
Adding OneStopEnglish corpus dataset
{ "login": "purvimisal", "id": 22298787, "node_id": "MDQ6VXNlcjIyMjk4Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/22298787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/purvimisal", "html_url": "https://github.com/purvimisal", "followers_url": "https://api.github.com/users/purvimisal/followers", "following_url": "https://api.github.com/users/purvimisal/following{/other_user}", "gists_url": "https://api.github.com/users/purvimisal/gists{/gist_id}", "starred_url": "https://api.github.com/users/purvimisal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/purvimisal/subscriptions", "organizations_url": "https://api.github.com/users/purvimisal/orgs", "repos_url": "https://api.github.com/users/purvimisal/repos", "events_url": "https://api.github.com/users/purvimisal/events{/privacy}", "received_events_url": "https://api.github.com/users/purvimisal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, thanks for the review.\r\nI have made all the changes, PTAL! :) " ]
1,607,378,711,000
1,607,539,418,000
1,607,528,033,000
CONTRIBUTOR
null
This PR adds the OneStopEnglish Corpus, containing texts classified into reading levels (elementary, intermediate, advanced) for automatic readability assessment and text simplification. Link to the paper: https://www.aclweb.org/anthology/W18-0535.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1269/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1269/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1269", "html_url": "https://github.com/huggingface/datasets/pull/1269", "diff_url": "https://github.com/huggingface/datasets/pull/1269.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1269.patch", "merged_at": 1607528033000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1268
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1268/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1268/comments
https://api.github.com/repos/huggingface/datasets/issues/1268/events
https://github.com/huggingface/datasets/pull/1268
758,871,252
MDExOlB1bGxSZXF1ZXN0NTMzOTY0OTQ4
1,268
new pr for Turkish NER
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you run `make style` to fix the code format ?\r\n\r\nAlso it looks like the file `file_downloaded/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip/TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.DUMP` is missing inside the dummy_data.zip\r\n\r\n\r\n(note that `TWNERTC_TC_Coarse Grained NER_DomainIndependent_NoiseReduction.zip` is a directory name, not an actual zip file)", "Hi Quentin, thank you for your patience with me. I've fixed the preprocessing pipeline, got this very weird error that Yacine told me to push. I've pushed it and after I'll find out that it will work, I will have my final pr on styling.", "looks like you removed the dataset script file in your latest commit, is it expected ?" ]
1,607,377,226,000
1,607,521,505,000
1,607,521,505,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1268/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1268/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1268", "html_url": "https://github.com/huggingface/datasets/pull/1268", "diff_url": "https://github.com/huggingface/datasets/pull/1268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1268.patch", "merged_at": 1607521505000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1267
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1267/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1267/comments
https://api.github.com/repos/huggingface/datasets/issues/1267/events
https://github.com/huggingface/datasets/pull/1267
758,826,568
MDExOlB1bGxSZXF1ZXN0NTMzOTMwNzU2
1,267
Has part
{ "login": "jeromeku", "id": 2455711, "node_id": "MDQ6VXNlcjI0NTU3MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeromeku", "html_url": "https://github.com/jeromeku", "followers_url": "https://api.github.com/users/jeromeku/followers", "following_url": "https://api.github.com/users/jeromeku/following{/other_user}", "gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions", "organizations_url": "https://api.github.com/users/jeromeku/orgs", "repos_url": "https://api.github.com/users/jeromeku/repos", "events_url": "https://api.github.com/users/jeromeku/events{/privacy}", "received_events_url": "https://api.github.com/users/jeromeku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,373,123,000
1,607,711,142,000
1,607,711,142,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1267/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1267/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1267", "html_url": "https://github.com/huggingface/datasets/pull/1267", "diff_url": "https://github.com/huggingface/datasets/pull/1267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1267.patch", "merged_at": 1607711142000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1266/comments
https://api.github.com/repos/huggingface/datasets/issues/1266/events
https://github.com/huggingface/datasets/pull/1266
758,704,178
MDExOlB1bGxSZXF1ZXN0NTMzODMyNTQ1
1,266
removing unzipped hansards dummy data
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,362,276,000
1,607,362,349,000
1,607,362,349,000
MEMBER
null
which were added by mistake
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1266/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1266/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1266", "html_url": "https://github.com/huggingface/datasets/pull/1266", "diff_url": "https://github.com/huggingface/datasets/pull/1266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1266.patch", "merged_at": 1607362348000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1265
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1265/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1265/comments
https://api.github.com/repos/huggingface/datasets/issues/1265/events
https://github.com/huggingface/datasets/pull/1265
758,687,223
MDExOlB1bGxSZXF1ZXN0NTMzODE4NjY0
1,265
Add CovidQA dataset
{ "login": "olinguyen", "id": 4341867, "node_id": "MDQ6VXNlcjQzNDE4Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olinguyen", "html_url": "https://github.com/olinguyen", "followers_url": "https://api.github.com/users/olinguyen/followers", "following_url": "https://api.github.com/users/olinguyen/following{/other_user}", "gists_url": "https://api.github.com/users/olinguyen/gists{/gist_id}", "starred_url": "https://api.github.com/users/olinguyen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/olinguyen/subscriptions", "organizations_url": "https://api.github.com/users/olinguyen/orgs", "repos_url": "https://api.github.com/users/olinguyen/repos", "events_url": "https://api.github.com/users/olinguyen/events{/privacy}", "received_events_url": "https://api.github.com/users/olinguyen/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU", "> It seems to share the same name as this dataset: https://openreview.net/forum?id=JENSKEEzsoU\r\n\r\nyou're right it can be confusing. I'll add the organization/research group for clarity: `covid_qa_castorini`. I added the dataset you shared as `covid_qa_deepset` in another PR (#1182) ", "Thanks for avoiding the name collision !" ]
1,607,360,811,000
1,607,446,946,000
1,607,446,946,000
CONTRIBUTOR
null
This PR adds CovidQA, a question answering dataset specifically designed for COVID-19, built by hand from knowledge gathered from Kaggle’s COVID-19 Open Research Dataset Challenge. Link to the paper: https://arxiv.org/pdf/2004.11339.pdf Link to the homepage: https://covidqa.ai
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1265/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1265/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1265", "html_url": "https://github.com/huggingface/datasets/pull/1265", "diff_url": "https://github.com/huggingface/datasets/pull/1265.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1265.patch", "merged_at": 1607446946000 }
true
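A quick sketch of loading the CovidQA dataset from the PR above. The name `covid_qa_castorini` is the one proposed in the PR discussion to avoid the collision with the deepset dataset; the splits and columns printed are whatever the merged script actually defines.

```python
from datasets import load_dataset

# "covid_qa_castorini" comes from the discussion above; everything else is
# inspected at runtime rather than assumed.
covid_qa = load_dataset("covid_qa_castorini")

for split_name, split in covid_qa.items():
    print(split_name, split.num_rows, split.column_names)
```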
https://api.github.com/repos/huggingface/datasets/issues/1264
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1264/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1264/comments
https://api.github.com/repos/huggingface/datasets/issues/1264/events
https://github.com/huggingface/datasets/pull/1264
758,686,474
MDExOlB1bGxSZXF1ZXN0NTMzODE4MDM2
1,264
enriched webnlg dataset rebase
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I've removed the `en` within `de` and reciprocally; but I don't think I will be able to thin it more than this. (Edit: ignore the close, I missclicked !)" ]
1,607,360,745,000
1,607,533,229,000
1,607,533,227,000
MEMBER
null
Rebase of #1206 !
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1264/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1264/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1264", "html_url": "https://github.com/huggingface/datasets/pull/1264", "diff_url": "https://github.com/huggingface/datasets/pull/1264.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1264.patch", "merged_at": 1607533227000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1263
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1263/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1263/comments
https://api.github.com/repos/huggingface/datasets/issues/1263/events
https://github.com/huggingface/datasets/pull/1263
758,663,787
MDExOlB1bGxSZXF1ZXN0NTMzNzk5NzU5
1,263
Added kannada news headlines classification dataset.
{ "login": "vrindaprabhu", "id": 16264631, "node_id": "MDQ6VXNlcjE2MjY0NjMx", "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vrindaprabhu", "html_url": "https://github.com/vrindaprabhu", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! Let me know if any more comments! Will fix it! :-)" ]
1,607,358,937,000
1,607,610,655,000
1,607,536,891,000
CONTRIBUTOR
null
Manual download of a Kaggle dataset. Mostly followed the same process as ms_terms.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1263/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1263/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1263", "html_url": "https://github.com/huggingface/datasets/pull/1263", "diff_url": "https://github.com/huggingface/datasets/pull/1263.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1263.patch", "merged_at": 1607536891000 }
true
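Because the Kannada news headlines entry above is a manual-download dataset, loading it requires pointing the library at files fetched from Kaggle yourself. This is a minimal sketch of that pattern; the dataset name and local path are placeholders, not values taken from the PR.

```python
from datasets import load_dataset

# Manual-download datasets are loaded by passing `data_dir` with the files you
# downloaded from Kaggle. Name and path below are placeholders.
ds = load_dataset(
    "kannada_news",
    data_dir="/path/to/manually/downloaded/kaggle/files",
    split="train",
)
print(ds[0])
```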
https://api.github.com/repos/huggingface/datasets/issues/1262
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1262/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1262/comments
https://api.github.com/repos/huggingface/datasets/issues/1262/events
https://github.com/huggingface/datasets/pull/1262
758,637,124
MDExOlB1bGxSZXF1ZXN0NTMzNzc3OTcy
1,262
Adding msr_genomics_kbcomp dataset
{ "login": "manandey", "id": 6687858, "node_id": "MDQ6VXNlcjY2ODc4NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manandey", "html_url": "https://github.com/manandey", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "organizations_url": "https://api.github.com/users/manandey/orgs", "repos_url": "https://api.github.com/users/manandey/repos", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "received_events_url": "https://api.github.com/users/manandey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,356,890,000
1,607,450,935,000
1,607,450,927,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1262/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1262/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1262", "html_url": "https://github.com/huggingface/datasets/pull/1262", "diff_url": "https://github.com/huggingface/datasets/pull/1262.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1262.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1261
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1261/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1261/comments
https://api.github.com/repos/huggingface/datasets/issues/1261/events
https://github.com/huggingface/datasets/pull/1261
758,626,112
MDExOlB1bGxSZXF1ZXN0NTMzNzY4OTgy
1,261
Add Google Sentence Compression dataset
{ "login": "mattbui", "id": 46804938, "node_id": "MDQ6VXNlcjQ2ODA0OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/46804938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattbui", "html_url": "https://github.com/mattbui", "followers_url": "https://api.github.com/users/mattbui/followers", "following_url": "https://api.github.com/users/mattbui/following{/other_user}", "gists_url": "https://api.github.com/users/mattbui/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattbui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattbui/subscriptions", "organizations_url": "https://api.github.com/users/mattbui/orgs", "repos_url": "https://api.github.com/users/mattbui/repos", "events_url": "https://api.github.com/users/mattbui/events{/privacy}", "received_events_url": "https://api.github.com/users/mattbui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,356,063,000
1,607,446,919,000
1,607,446,919,000
CONTRIBUTOR
null
For more information: https://www.aclweb.org/anthology/D13-1155.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1261/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1261/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1261", "html_url": "https://github.com/huggingface/datasets/pull/1261", "diff_url": "https://github.com/huggingface/datasets/pull/1261.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1261.patch", "merged_at": 1607446919000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1260
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1260/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1260/comments
https://api.github.com/repos/huggingface/datasets/issues/1260/events
https://github.com/huggingface/datasets/pull/1260
758,601,828
MDExOlB1bGxSZXF1ZXN0NTMzNzQ4ODM3
1,260
Added NewsPH Raw Dataset
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR has changes to many files other than the ones for `NewsPH`\r\n\r\nCan you create another branch and another PR please ?" ]
1,607,354,273,000
1,607,444,835,000
1,607,444,835,000
NONE
null
Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. It is a collection of news articles in Filipino from mainstream Philippine news sites on the internet and can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset. Paper: https://arxiv.org/abs/2010.11574 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1260/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1260/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1260", "html_url": "https://github.com/huggingface/datasets/pull/1260", "diff_url": "https://github.com/huggingface/datasets/pull/1260.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1260.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1259
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1259/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1259/comments
https://api.github.com/repos/huggingface/datasets/issues/1259/events
https://github.com/huggingface/datasets/pull/1259
758,565,320
MDExOlB1bGxSZXF1ZXN0NTMzNzE4NjMz
1,259
Add KorQPair dataset
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "dummy data is missing", "Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. Just pushed a new commit with the dummy data." ]
1,607,351,637,000
1,640,738,980,000
1,607,440,301,000
MEMBER
null
This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1259/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1259/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1259", "html_url": "https://github.com/huggingface/datasets/pull/1259", "diff_url": "https://github.com/huggingface/datasets/pull/1259.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1259.patch", "merged_at": 1607440301000 }
true
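The KorQPair entry above describes pairs of Korean questions with a label marking whether the two are semantically identical. A minimal usage sketch follows; the dataset name `kor_qpair` and the `is_duplicate` column are assumptions, not taken from the PR.

```python
from datasets import load_dataset

# Dataset name and column names are assumptions for illustration only.
pairs = load_dataset("kor_qpair", split="train")

print(pairs.features)  # expect two question columns and a binary identity label
print(pairs[0])

# Example downstream use (column name assumed): keep only identical pairs.
# identical = pairs.filter(lambda ex: ex["is_duplicate"] == 1)
```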
https://api.github.com/repos/huggingface/datasets/issues/1258
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1258/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1258/comments
https://api.github.com/repos/huggingface/datasets/issues/1258/events
https://github.com/huggingface/datasets/pull/1258
758,557,169
MDExOlB1bGxSZXF1ZXN0NTMzNzExOTQz
1,258
arXiv dataset added
{ "login": "tanmoyio", "id": 33005287, "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmoyio", "html_url": "https://github.com/tanmoyio", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "repos_url": "https://api.github.com/users/tanmoyio/repos", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Need help" ]
1,607,351,013,000
1,607,436,435,000
1,607,436,435,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1258/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1258/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1258", "html_url": "https://github.com/huggingface/datasets/pull/1258", "diff_url": "https://github.com/huggingface/datasets/pull/1258.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1258.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1257
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1257/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1257/comments
https://api.github.com/repos/huggingface/datasets/issues/1257/events
https://github.com/huggingface/datasets/pull/1257
758,550,490
MDExOlB1bGxSZXF1ZXN0NTMzNzA2NDQy
1,257
Add Swahili news classification dataset
{ "login": "yvonnegitau", "id": 7923902, "node_id": "MDQ6VXNlcjc5MjM5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yvonnegitau", "html_url": "https://github.com/yvonnegitau", "followers_url": "https://api.github.com/users/yvonnegitau/followers", "following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}", "gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}", "starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions", "organizations_url": "https://api.github.com/users/yvonnegitau/orgs", "repos_url": "https://api.github.com/users/yvonnegitau/repos", "events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}", "received_events_url": "https://api.github.com/users/yvonnegitau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,350,513,000
1,607,438,659,000
1,607,438,659,000
CONTRIBUTOR
null
Add Swahili news classification dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1257/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1257/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1257", "html_url": "https://github.com/huggingface/datasets/pull/1257", "diff_url": "https://github.com/huggingface/datasets/pull/1257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1257.patch", "merged_at": 1607438659000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1256
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1256/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1256/comments
https://api.github.com/repos/huggingface/datasets/issues/1256/events
https://github.com/huggingface/datasets/pull/1256
758,531,980
MDExOlB1bGxSZXF1ZXN0NTMzNjkwMTQ2
1,256
adding LiMiT dataset
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,349,641,000
1,607,439,508,000
1,607,438,571,000
MEMBER
null
Adding LiMiT: The Literal Motion in Text Dataset https://github.com/ilmgut/limit_dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1256/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1256/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1256", "html_url": "https://github.com/huggingface/datasets/pull/1256", "diff_url": "https://github.com/huggingface/datasets/pull/1256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1256.patch", "merged_at": 1607438571000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1255
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1255/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1255/comments
https://api.github.com/repos/huggingface/datasets/issues/1255/events
https://github.com/huggingface/datasets/pull/1255
758,530,243
MDExOlB1bGxSZXF1ZXN0NTMzNjg4Njg2
1,255
[doc] nlp/viewer ➡️datasets/viewer
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,349,521,000
1,607,447,874,000
1,607,447,873,000
MEMBER
null
cc @srush
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1255/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1255/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1255", "html_url": "https://github.com/huggingface/datasets/pull/1255", "diff_url": "https://github.com/huggingface/datasets/pull/1255.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1255.patch", "merged_at": 1607447873000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1254
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1254/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1254/comments
https://api.github.com/repos/huggingface/datasets/issues/1254/events
https://github.com/huggingface/datasets/pull/1254
758,518,774
MDExOlB1bGxSZXF1ZXN0NTMzNjc5MTYy
1,254
Added WikiText-TL-39
{ "login": "jcblaisecruz02", "id": 24757547, "node_id": "MDQ6VXNlcjI0NzU3NTQ3", "avatar_url": "https://avatars.githubusercontent.com/u/24757547?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcblaisecruz02", "html_url": "https://github.com/jcblaisecruz02", "followers_url": "https://api.github.com/users/jcblaisecruz02/followers", "following_url": "https://api.github.com/users/jcblaisecruz02/following{/other_user}", "gists_url": "https://api.github.com/users/jcblaisecruz02/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcblaisecruz02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcblaisecruz02/subscriptions", "organizations_url": "https://api.github.com/users/jcblaisecruz02/orgs", "repos_url": "https://api.github.com/users/jcblaisecruz02/repos", "events_url": "https://api.github.com/users/jcblaisecruz02/events{/privacy}", "received_events_url": "https://api.github.com/users/jcblaisecruz02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR also includes changes about another dataset `covid_qa_deepset`\r\n\r\nCould you create another branch and another PR that only includes the changes for the wikitext-tl-39 dataset ?" ]
1,607,348,628,000
1,607,443,258,000
1,607,443,258,000
NONE
null
This PR adds the WikiText-TL-39 Filipino Language Modeling dataset. Paper: https://arxiv.org/abs/1907.00409 Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1254/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1254/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1254", "html_url": "https://github.com/huggingface/datasets/pull/1254", "diff_url": "https://github.com/huggingface/datasets/pull/1254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1254.patch", "merged_at": null }
true
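Since the WikiText-TL-39 entry above is a language modeling corpus, a typical first step is light text cleanup with `filter` and `map`. This is a sketch under the assumptions that the dataset is published as `wikitext_tl39` and exposes a single `text` column; neither is stated in the PR.

```python
from datasets import load_dataset

# Name and column are placeholders; raw LM corpora are usually exposed as one
# text column per example.
wiki_tl = load_dataset("wikitext_tl39", split="train")

# Drop empty lines and record a character count for each example.
wiki_tl = wiki_tl.filter(lambda ex: len(ex["text"].strip()) > 0)
wiki_tl = wiki_tl.map(lambda ex: {"n_chars": len(ex["text"])})
print(wiki_tl[0])
```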
https://api.github.com/repos/huggingface/datasets/issues/1253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1253/comments
https://api.github.com/repos/huggingface/datasets/issues/1253/events
https://github.com/huggingface/datasets/pull/1253
758,517,391
MDExOlB1bGxSZXF1ZXN0NTMzNjc4MDE1
1,253
add thainer
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,348,514,000
1,607,438,689,000
1,607,438,689,000
CONTRIBUTOR
null
ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created by expanding the 2,258-sentence [unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/). It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp). The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/) for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/). The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`. [@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1253/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1253/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1253", "html_url": "https://github.com/huggingface/datasets/pull/1253", "diff_url": "https://github.com/huggingface/datasets/pull/1253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1253.patch", "merged_at": 1607438689000 }
true
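The ThaiNER entry above describes token-level NER and POS annotations. A minimal inspection sketch follows, assuming the dataset is registered as `thainer`; the commented column names are typical for token-classification datasets, not confirmed by the PR.

```python
from datasets import load_dataset

# "thainer" and the split are assumptions based on the description above.
thainer = load_dataset("thainer", split="train")

first = thainer[0]
print(first.keys())  # check the actual column names shipped by the script
# For a token-classification dataset one usually expects something like:
# first["tokens"], first["ner_tags"], first["pos_tags"]
```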
https://api.github.com/repos/huggingface/datasets/issues/1252
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1252/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1252/comments
https://api.github.com/repos/huggingface/datasets/issues/1252/events
https://github.com/huggingface/datasets/pull/1252
758,511,388
MDExOlB1bGxSZXF1ZXN0NTMzNjczMDcx
1,252
Add Naver sentiment movie corpus
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,348,025,000
1,607,437,953,000
1,607,437,297,000
MEMBER
null
Supersedes #1168 > This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199.pdf).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1252/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1252/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1252", "html_url": "https://github.com/huggingface/datasets/pull/1252", "diff_url": "https://github.com/huggingface/datasets/pull/1252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1252.patch", "merged_at": 1607437297000 }
true
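The NSMC entry above is a binary sentiment benchmark, so a quick sanity check after loading is the label distribution per split. The hub name `nsmc` and the `label` column are assumptions for this sketch.

```python
from collections import Counter

from datasets import load_dataset

# Assumed dataset name and label column; adjust to the merged script if needed.
nsmc = load_dataset("nsmc")

print(nsmc)                             # expected: train and test splits
print(Counter(nsmc["train"]["label"]))  # rough class-balance check
```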
https://api.github.com/repos/huggingface/datasets/issues/1251
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1251/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1251/comments
https://api.github.com/repos/huggingface/datasets/issues/1251/events
https://github.com/huggingface/datasets/pull/1251
758,503,689
MDExOlB1bGxSZXF1ZXN0NTMzNjY2NTg2
1,251
Add Wiki Atomic Edits Dataset (43M edits)
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq fixed :)" ]
1,607,347,388,000
1,607,940,301,000
1,607,940,300,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1251/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1251/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1251", "html_url": "https://github.com/huggingface/datasets/pull/1251", "diff_url": "https://github.com/huggingface/datasets/pull/1251.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1251.patch", "merged_at": 1607940300000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1250
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1250/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1250/comments
https://api.github.com/repos/huggingface/datasets/issues/1250/events
https://github.com/huggingface/datasets/pull/1250
758,491,704
MDExOlB1bGxSZXF1ZXN0NTMzNjU2NTI4
1,250
added Nergrit dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,346,372,000
1,607,438,009,000
1,607,438,009,000
CONTRIBUTOR
null
The Nergrit Corpus is a dataset collection for Indonesian Named Entity Recognition, Statement Extraction, and Sentiment Analysis. This PR only covers the Named Entity Recognition part.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1250/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1250/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1250", "html_url": "https://github.com/huggingface/datasets/pull/1250", "diff_url": "https://github.com/huggingface/datasets/pull/1250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1250.patch", "merged_at": 1607438009000 }
true
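Since only the NER portion of the Nergrit corpus is added in the PR above, a single default configuration is assumed in this sketch. The dataset name below is a placeholder; inspecting `features` reveals the tag set the merged script actually ships.

```python
from datasets import load_dataset

# Placeholder name; only the NER subset is covered by the PR above.
nergrit = load_dataset("id_nergrit_ner", split="train")

print(nergrit.features)  # inspect the entity tag set defined by the script
print(nergrit[0])
```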
https://api.github.com/repos/huggingface/datasets/issues/1249
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1249/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1249/comments
https://api.github.com/repos/huggingface/datasets/issues/1249/events
https://github.com/huggingface/datasets/pull/1249
758,472,863
MDExOlB1bGxSZXF1ZXN0NTMzNjQwNjA1
1,249
Add doc2dial dataset
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/followers", "following_url": "https://api.github.com/users/KMFODA/following{/other_user}", "gists_url": "https://api.github.com/users/KMFODA/gists{/gist_id}", "starred_url": "https://api.github.com/users/KMFODA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/KMFODA/subscriptions", "organizations_url": "https://api.github.com/users/KMFODA/orgs", "repos_url": "https://api.github.com/users/KMFODA/repos", "events_url": "https://api.github.com/users/KMFODA/events{/privacy}", "received_events_url": "https://api.github.com/users/KMFODA/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It not always practical to use nested `Sequence`. If you have troubles with sequence you can use lists instead. \r\n\r\nFor example\r\n```python\r\n\r\nfeatures=datasets.Features(\r\n {\r\n \"dial_id\": datasets.Value(\"string\"),\r\n \"doc_id\": datasets.Value(\"string\"),\r\n \"domain\": datasets.Value(\"string\"),\r\n \"turns\": [\r\n {\r\n \"turn_id\": datasets.Value(\"int32\"),\r\n \"role\": datasets.Value(\"string\"),\r\n \"da\": datasets.Value(\"string\"),\r\n \"reference\": [\r\n {\r\n \"keys\" : datasets.Value(\"string\"),\r\n \"values\": datasets.Value(\"string\"), \r\n }\r\n\r\n ],\r\n \"utterance\": datasets.Value(\"string\"),\r\n }\r\n ],\r\n }\r\n),\r\n```\r\n\r\nthis way `turns` will be a list of dict, and the \"reference\" key of `turns` will be a list of dict as well", "No problem thanks for all your help getting this to the final stages! Added .gitignore, removed .lock and applied the changes you asked for." ]
1,607,344,749,000
1,607,962,634,000
1,607,962,634,000
CONTRIBUTOR
null
### Doc2dial: A Goal-Oriented Document-Grounded Dialogue Dataset v0.9 Once complete, this will add the [Doc2dial](https://doc2dial.github.io/data.html) dataset from the generic datasets list.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1249/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1249/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1249", "html_url": "https://github.com/huggingface/datasets/pull/1249", "diff_url": "https://github.com/huggingface/datasets/pull/1249.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1249.patch", "merged_at": 1607962634000 }
true
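The review comment in the Doc2dial record above suggests declaring nested dialogue turns as a plain list of feature dicts instead of nested `Sequence` objects. The following is a reduced, self-contained sketch of that pattern; field names are trimmed for brevity and the tiny in-memory dataset exists only to show how such a schema encodes.

```python
import datasets

# List-of-dicts feature declaration, as suggested in the review comment above.
features = datasets.Features(
    {
        "dial_id": datasets.Value("string"),
        "turns": [
            {
                "turn_id": datasets.Value("int32"),
                "utterance": datasets.Value("string"),
            }
        ],
    }
)

# Build a one-row dataset conforming to that schema (illustrative values only).
ds = datasets.Dataset.from_dict(
    {
        "dial_id": ["dial_0"],
        "turns": [[{"turn_id": 1, "utterance": "Hello, how can I help?"}]],
    },
    features=features,
)

# With a plain list feature, "turns" comes back as a list of dicts per example.
print(ds[0]["turns"])
```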
https://api.github.com/repos/huggingface/datasets/issues/1248
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1248/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1248/comments
https://api.github.com/repos/huggingface/datasets/issues/1248/events
https://github.com/huggingface/datasets/pull/1248
758,454,438
MDExOlB1bGxSZXF1ZXN0NTMzNjI0ODY5
1,248
Update step-by-step guide about the dataset cards
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,343,132,000
1,607,347,164,000
1,607,347,163,000
MEMBER
null
Small update to the step-by-step guide about the dataset cards to indicate that a card can be created and completed while exploring the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1248/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1248/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1248", "html_url": "https://github.com/huggingface/datasets/pull/1248", "diff_url": "https://github.com/huggingface/datasets/pull/1248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1248.patch", "merged_at": 1607347163000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1247
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1247/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1247/comments
https://api.github.com/repos/huggingface/datasets/issues/1247/events
https://github.com/huggingface/datasets/pull/1247
758,431,640
MDExOlB1bGxSZXF1ZXN0NTMzNjA1NzE2
1,247
Adding indonlu dataset
{ "login": "yasirabd", "id": 6518504, "node_id": "MDQ6VXNlcjY1MTg1MDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/6518504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yasirabd", "html_url": "https://github.com/yasirabd", "followers_url": "https://api.github.com/users/yasirabd/followers", "following_url": "https://api.github.com/users/yasirabd/following{/other_user}", "gists_url": "https://api.github.com/users/yasirabd/gists{/gist_id}", "starred_url": "https://api.github.com/users/yasirabd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yasirabd/subscriptions", "organizations_url": "https://api.github.com/users/yasirabd/orgs", "repos_url": "https://api.github.com/users/yasirabd/repos", "events_url": "https://api.github.com/users/yasirabd/events{/privacy}", "received_events_url": "https://api.github.com/users/yasirabd/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR includes changes about many files other than the ones for IndoNLU\r\nCould you create another branch and another PR please ?", "> looks like this PR includes changes about many files other than the ones for IndoNLU\r\n> Could you create another branch and another PR please ?\r\n\r\nOkay I'll make it" ]
1,607,341,125,000
1,607,436,710,000
1,607,436,710,000
CONTRIBUTOR
null
The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for Bahasa Indonesia. It contains 12 datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1247/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1247/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1247", "html_url": "https://github.com/huggingface/datasets/pull/1247", "diff_url": "https://github.com/huggingface/datasets/pull/1247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1247.patch", "merged_at": null }
true
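Because the IndoNLU entry above bundles 12 datasets, a benchmark like this is normally exposed through separate configurations chosen at load time. A minimal sketch follows; `"smsa"` is used purely as a hypothetical config name and is not taken from the PR.

```python
from datasets import load_dataset

# Second positional argument selects one task/configuration of the benchmark.
# "smsa" is a hypothetical config name for illustration.
indonlu_task = load_dataset("indonlu", "smsa")
print(indonlu_task)
```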
https://api.github.com/repos/huggingface/datasets/issues/1246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1246/comments
https://api.github.com/repos/huggingface/datasets/issues/1246/events
https://github.com/huggingface/datasets/pull/1246
758,418,652
MDExOlB1bGxSZXF1ZXN0NTMzNTk0NjIz
1,246
arXiv dataset added
{ "login": "tanmoyio", "id": 33005287, "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmoyio", "html_url": "https://github.com/tanmoyio", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "repos_url": "https://api.github.com/users/tanmoyio/repos", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,340,023,000
1,607,350,978,000
1,607,350,978,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1246/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1246/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1246", "html_url": "https://github.com/huggingface/datasets/pull/1246", "diff_url": "https://github.com/huggingface/datasets/pull/1246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1246.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1245/comments
https://api.github.com/repos/huggingface/datasets/issues/1245/events
https://github.com/huggingface/datasets/pull/1245
758,411,233
MDExOlB1bGxSZXF1ZXN0NTMzNTg4NDUw
1,245
Add Google Turkish Treebank Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,607,339,357,000
1,608,136,224,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1245/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1245/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1245", "html_url": "https://github.com/huggingface/datasets/pull/1245", "diff_url": "https://github.com/huggingface/datasets/pull/1245.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1245.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1244
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1244/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1244/comments
https://api.github.com/repos/huggingface/datasets/issues/1244/events
https://github.com/huggingface/datasets/pull/1244
758,384,417
MDExOlB1bGxSZXF1ZXN0NTMzNTY1ODMz
1,244
arxiv dataset added
{ "login": "tanmoyio", "id": 33005287, "node_id": "MDQ6VXNlcjMzMDA1Mjg3", "avatar_url": "https://avatars.githubusercontent.com/u/33005287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tanmoyio", "html_url": "https://github.com/tanmoyio", "followers_url": "https://api.github.com/users/tanmoyio/followers", "following_url": "https://api.github.com/users/tanmoyio/following{/other_user}", "gists_url": "https://api.github.com/users/tanmoyio/gists{/gist_id}", "starred_url": "https://api.github.com/users/tanmoyio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tanmoyio/subscriptions", "organizations_url": "https://api.github.com/users/tanmoyio/orgs", "repos_url": "https://api.github.com/users/tanmoyio/repos", "events_url": "https://api.github.com/users/tanmoyio/events{/privacy}", "received_events_url": "https://api.github.com/users/tanmoyio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,337,174,000
1,607,339,063,000
1,607,339,063,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1244/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1244/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1244", "html_url": "https://github.com/huggingface/datasets/pull/1244", "diff_url": "https://github.com/huggingface/datasets/pull/1244.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1244.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1243
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1243/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1243/comments
https://api.github.com/repos/huggingface/datasets/issues/1243/events
https://github.com/huggingface/datasets/pull/1243
758,378,904
MDExOlB1bGxSZXF1ZXN0NTMzNTYxNDAx
1,243
Add Google Noun Verb Dataset
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,607,336,765,000
1,608,641,236,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1243/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1243/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1243", "html_url": "https://github.com/huggingface/datasets/pull/1243", "diff_url": "https://github.com/huggingface/datasets/pull/1243.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1243.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1242
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1242/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1242/comments
https://api.github.com/repos/huggingface/datasets/issues/1242/events
https://github.com/huggingface/datasets/pull/1242
758,370,579
MDExOlB1bGxSZXF1ZXN0NTMzNTU0MzAx
1,242
adding bprec
{ "login": "kldarek", "id": 15803781, "node_id": "MDQ6VXNlcjE1ODAzNzgx", "avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kldarek", "html_url": "https://github.com/kldarek", "followers_url": "https://api.github.com/users/kldarek/followers", "following_url": "https://api.github.com/users/kldarek/following{/other_user}", "gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}", "starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kldarek/subscriptions", "organizations_url": "https://api.github.com/users/kldarek/orgs", "repos_url": "https://api.github.com/users/kldarek/repos", "events_url": "https://api.github.com/users/kldarek/events{/privacy}", "received_events_url": "https://api.github.com/users/kldarek/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR includes changes to many files other than the ones related to bprec\r\nCan you create another branch and another PR please ?", "> looks like this PR includes changes to many files other than the ones related to bprec\r\n> Can you create another branch and another PR please ?\r\n\r\nYes, I realized I messed this one up, learning my way :) I'll close this one and open another hopefully clean PR :) Thanks!" ]
1,607,336,149,000
1,607,438,029,000
1,607,438,028,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1242/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1242/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1242", "html_url": "https://github.com/huggingface/datasets/pull/1242", "diff_url": "https://github.com/huggingface/datasets/pull/1242.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1242.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1241
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1241/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1241/comments
https://api.github.com/repos/huggingface/datasets/issues/1241/events
https://github.com/huggingface/datasets/pull/1241
758,360,643
MDExOlB1bGxSZXF1ZXN0NTMzNTQ1OTQ0
1,241
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque
{ "login": "spatil6", "id": 6419011, "node_id": "MDQ6VXNlcjY0MTkwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spatil6", "html_url": "https://github.com/spatil6", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "organizations_url": "https://api.github.com/users/spatil6/orgs", "repos_url": "https://api.github.com/users/spatil6/repos", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "received_events_url": "https://api.github.com/users/spatil6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,335,414,000
1,608,389,712,000
1,607,526,768,000
CONTRIBUTOR
null
Opus elhuyar dataset for MT task having languages pair in Spanish to Basque More info : http://opus.nlpl.eu/Elhuyar.php
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1241/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1241/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1241", "html_url": "https://github.com/huggingface/datasets/pull/1241", "diff_url": "https://github.com/huggingface/datasets/pull/1241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1241.patch", "merged_at": 1607526768000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1240
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1240/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1240/comments
https://api.github.com/repos/huggingface/datasets/issues/1240/events
https://github.com/huggingface/datasets/pull/1240
758,355,523
MDExOlB1bGxSZXF1ZXN0NTMzNTQxNjk5
1,240
Multi Domain Sentiment Analysis Dataset (MDSA)
{ "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "can you also run `make style` to format the code ?", "I'll come back to this one in sometime :) @lhoestq ", "Also if you would use `xml.etree.ElementTree` to parse the XML it would be awesome, because right now you're using an external dependency `xmltodict `", "> Also if you would use xml.etree.ElementTree to parse the XML it would be awesome, because right now you're using an external dependency xmltodict\r\n\r\nIts pseudo xml so elementtree fails. xmltodict seems to be working quite good for this. do we have examples of pseudo xml datasets?", "for the other pseudo xml the text is parsed manually", "Can you add `xmltodict` to the test dependencies in setup.py please to fix the CI please ?", "Also can you add the dataset card with the tags and run `make style` ?", "Hi :) have you had a chance to fix the test dependency and apply `make style` ?\r\n\r\nFeel fee to ping me when it's ready for a review" ]
1,607,335,035,000
1,608,135,983,000
null
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1240/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1240/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1240", "html_url": "https://github.com/huggingface/datasets/pull/1240", "diff_url": "https://github.com/huggingface/datasets/pull/1240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1240.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1239
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1239/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1239/comments
https://api.github.com/repos/huggingface/datasets/issues/1239/events
https://github.com/huggingface/datasets/pull/1239
758,339,593
MDExOlB1bGxSZXF1ZXN0NTMzNTI4NTU5
1,239
add yelp_review_full dataset
{ "login": "hfawaz", "id": 29229602, "node_id": "MDQ6VXNlcjI5MjI5NjAy", "avatar_url": "https://avatars.githubusercontent.com/u/29229602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hfawaz", "html_url": "https://github.com/hfawaz", "followers_url": "https://api.github.com/users/hfawaz/followers", "following_url": "https://api.github.com/users/hfawaz/following{/other_user}", "gists_url": "https://api.github.com/users/hfawaz/gists{/gist_id}", "starred_url": "https://api.github.com/users/hfawaz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hfawaz/subscriptions", "organizations_url": "https://api.github.com/users/hfawaz/orgs", "repos_url": "https://api.github.com/users/hfawaz/repos", "events_url": "https://api.github.com/users/hfawaz/events{/privacy}", "received_events_url": "https://api.github.com/users/hfawaz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Moved to https://github.com/huggingface/datasets/pull/1315" ]
1,607,333,736,000
1,607,442,204,000
1,607,439,650,000
CONTRIBUTOR
null
This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1239/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1239/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1239", "html_url": "https://github.com/huggingface/datasets/pull/1239", "diff_url": "https://github.com/huggingface/datasets/pull/1239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1239.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1238
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1238/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1238/comments
https://api.github.com/repos/huggingface/datasets/issues/1238/events
https://github.com/huggingface/datasets/pull/1238
758,321,688
MDExOlB1bGxSZXF1ZXN0NTMzNTEzODUw
1,238
adding poem_sentiment
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,332,312,000
1,607,531,770,000
1,607,529,765,000
MEMBER
null
Adding poem_sentiment dataset. https://github.com/google-research-datasets/poem-sentiment
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1238/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1238/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1238", "html_url": "https://github.com/huggingface/datasets/pull/1238", "diff_url": "https://github.com/huggingface/datasets/pull/1238.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1238.patch", "merged_at": 1607529765000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1237
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1237/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1237/comments
https://api.github.com/repos/huggingface/datasets/issues/1237/events
https://github.com/huggingface/datasets/pull/1237
758,318,353
MDExOlB1bGxSZXF1ZXN0NTMzNTExMDky
1,237
Add AmbigQA dataset
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,332,039,000
1,607,434,732,000
1,607,434,732,000
CONTRIBUTOR
null
# AmbigQA: Answering Ambiguous Open-domain Questions Dataset

Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from Open dataset list for Dataset sprint)

Added both the light and full versions (as seen on the dataset homepage)

The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields

```py
train_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="train")
val_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="validation")
train_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="train")
val_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="validation")

for example in train_light_dataset:
    for i,t in enumerate(example['annotations']['type']):
        if t =='singleAnswer':
            # use the example['annotations']['answer'][i]
            # example['annotations']['qaPairs'][i] - > is []
            print(example['annotations']['answer'][i])
        else:
            # use the example['annotations']['qaPairs'][i]
            # example['annotations']['answer'][i] - > is []
            print(example['annotations']['qaPairs'][i])
```

- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1237/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1237/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1237", "html_url": "https://github.com/huggingface/datasets/pull/1237", "diff_url": "https://github.com/huggingface/datasets/pull/1237.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1237.patch", "merged_at": 1607434732000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1236
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1236/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1236/comments
https://api.github.com/repos/huggingface/datasets/issues/1236/events
https://github.com/huggingface/datasets/pull/1236
758,263,012
MDExOlB1bGxSZXF1ZXN0NTMzNDYzOTg2
1,236
Opus finlex dataset of language pair Finnish and Swedish
{ "login": "spatil6", "id": 6419011, "node_id": "MDQ6VXNlcjY0MTkwMTE=", "avatar_url": "https://avatars.githubusercontent.com/u/6419011?v=4", "gravatar_id": "", "url": "https://api.github.com/users/spatil6", "html_url": "https://github.com/spatil6", "followers_url": "https://api.github.com/users/spatil6/followers", "following_url": "https://api.github.com/users/spatil6/following{/other_user}", "gists_url": "https://api.github.com/users/spatil6/gists{/gist_id}", "starred_url": "https://api.github.com/users/spatil6/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/spatil6/subscriptions", "organizations_url": "https://api.github.com/users/spatil6/orgs", "repos_url": "https://api.github.com/users/spatil6/repos", "events_url": "https://api.github.com/users/spatil6/events{/privacy}", "received_events_url": "https://api.github.com/users/spatil6/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,327,637,000
1,607,434,233,000
1,607,434,233,000
CONTRIBUTOR
null
Added Opus_finlex dataset of language pair Finnish and Swedish More info : http://opus.nlpl.eu/Finlex.php
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1236/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1236/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1236", "html_url": "https://github.com/huggingface/datasets/pull/1236", "diff_url": "https://github.com/huggingface/datasets/pull/1236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1236.patch", "merged_at": 1607434233000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1235
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1235/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1235/comments
https://api.github.com/repos/huggingface/datasets/issues/1235/events
https://github.com/huggingface/datasets/pull/1235
758,234,511
MDExOlB1bGxSZXF1ZXN0NTMzNDM5NDk4
1,235
Wino bias
{ "login": "akshayb7", "id": 29649801, "node_id": "MDQ6VXNlcjI5NjQ5ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/29649801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akshayb7", "html_url": "https://github.com/akshayb7", "followers_url": "https://api.github.com/users/akshayb7/followers", "following_url": "https://api.github.com/users/akshayb7/following{/other_user}", "gists_url": "https://api.github.com/users/akshayb7/gists{/gist_id}", "starred_url": "https://api.github.com/users/akshayb7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akshayb7/subscriptions", "organizations_url": "https://api.github.com/users/akshayb7/orgs", "repos_url": "https://api.github.com/users/akshayb7/repos", "events_url": "https://api.github.com/users/akshayb7/events{/privacy}", "received_events_url": "https://api.github.com/users/akshayb7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing this PR because of messed up history and opening another one after discussion with Quentin Lhoest.\r\n" ]
1,607,325,162,000
1,607,633,292,000
1,607,633,281,000
CONTRIBUTOR
null
The PR will fail circleCi tests because of the requirement of manual loading of data. Fresh PR because of messed up history of the previous one.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1235/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1235/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1235", "html_url": "https://github.com/huggingface/datasets/pull/1235", "diff_url": "https://github.com/huggingface/datasets/pull/1235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1235.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1234
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1234/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1234/comments
https://api.github.com/repos/huggingface/datasets/issues/1234/events
https://github.com/huggingface/datasets/pull/1234
758,229,304
MDExOlB1bGxSZXF1ZXN0NTMzNDM0ODkz
1,234
Added ade_corpus_v2, with 3 configs for relation extraction and classification task
{ "login": "Nilanshrajput", "id": 28673745, "node_id": "MDQ6VXNlcjI4NjczNzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nilanshrajput", "html_url": "https://github.com/Nilanshrajput", "followers_url": "https://api.github.com/users/Nilanshrajput/followers", "following_url": "https://api.github.com/users/Nilanshrajput/following{/other_user}", "gists_url": "https://api.github.com/users/Nilanshrajput/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nilanshrajput/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nilanshrajput/subscriptions", "organizations_url": "https://api.github.com/users/Nilanshrajput/orgs", "repos_url": "https://api.github.com/users/Nilanshrajput/repos", "events_url": "https://api.github.com/users/Nilanshrajput/events{/privacy}", "received_events_url": "https://api.github.com/users/Nilanshrajput/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I have added the tags they are in separate files for 3 different configs", "@lhoestq thanks for the review I added your suggested changes.", "merging since the CI is fixed on master" ]
1,607,324,714,000
1,607,968,154,000
1,607,968,154,000
CONTRIBUTOR
null
Adverse Drug Reaction Data: ADE-Corpus-V2 dataset added configs for different tasks with given data
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1234/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1234/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1234", "html_url": "https://github.com/huggingface/datasets/pull/1234", "diff_url": "https://github.com/huggingface/datasets/pull/1234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1234.patch", "merged_at": 1607968154000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1233
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1233/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1233/comments
https://api.github.com/repos/huggingface/datasets/issues/1233/events
https://github.com/huggingface/datasets/pull/1233
758,188,699
MDExOlB1bGxSZXF1ZXN0NTMzMzk5NTY3
1,233
Add Curiosity Dialogs Dataset
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "repos_url": "https://api.github.com/users/vineeths96/repos", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I tried manually creating the dummy files. But unfortunately it was raising an error during testing the dummy data (regarding JSON parsing).\r\n\r\nThe JSONs are pretty big so I cannot actually open it without crashing the text editor.\r\n\r\n Do you have any suggestions?", "@lhoestq I have made all the changes you mentioned." ]
1,607,320,860,000
1,608,471,249,000
1,607,525,429,000
CONTRIBUTOR
null
Add Facebook [Curiosity Dialogs](https://github.com/facebookresearch/curiosity) Dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1233/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1233/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1233", "html_url": "https://github.com/huggingface/datasets/pull/1233", "diff_url": "https://github.com/huggingface/datasets/pull/1233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1233.patch", "merged_at": 1607525429000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1232
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1232/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1232/comments
https://api.github.com/repos/huggingface/datasets/issues/1232/events
https://github.com/huggingface/datasets/pull/1232
758,180,669
MDExOlB1bGxSZXF1ZXN0NTMzMzkyNTc0
1,232
Add Grail QA dataset
{ "login": "mattbui", "id": 46804938, "node_id": "MDQ6VXNlcjQ2ODA0OTM4", "avatar_url": "https://avatars.githubusercontent.com/u/46804938?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mattbui", "html_url": "https://github.com/mattbui", "followers_url": "https://api.github.com/users/mattbui/followers", "following_url": "https://api.github.com/users/mattbui/following{/other_user}", "gists_url": "https://api.github.com/users/mattbui/gists{/gist_id}", "starred_url": "https://api.github.com/users/mattbui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mattbui/subscriptions", "organizations_url": "https://api.github.com/users/mattbui/orgs", "repos_url": "https://api.github.com/users/mattbui/repos", "events_url": "https://api.github.com/users/mattbui/events{/privacy}", "received_events_url": "https://api.github.com/users/mattbui/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,320,005,000
1,607,432,599,000
1,607,432,599,000
CONTRIBUTOR
null
For more information: https://dki-lab.github.io/GrailQA/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1232/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1232/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1232", "html_url": "https://github.com/huggingface/datasets/pull/1232", "diff_url": "https://github.com/huggingface/datasets/pull/1232.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1232.patch", "merged_at": 1607432599000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1231
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1231/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1231/comments
https://api.github.com/repos/huggingface/datasets/issues/1231/events
https://github.com/huggingface/datasets/pull/1231
758,121,398
MDExOlB1bGxSZXF1ZXN0NTMzMzQzMzAz
1,231
Add Urdu Sentiment Corpus (USC)
{ "login": "chaitnayabasava", "id": 44389205, "node_id": "MDQ6VXNlcjQ0Mzg5MjA1", "avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chaitnayabasava", "html_url": "https://github.com/chaitnayabasava", "followers_url": "https://api.github.com/users/chaitnayabasava/followers", "following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}", "gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}", "starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions", "organizations_url": "https://api.github.com/users/chaitnayabasava/orgs", "repos_url": "https://api.github.com/users/chaitnayabasava/repos", "events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}", "received_events_url": "https://api.github.com/users/chaitnayabasava/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,311,520,000
1,607,364,316,000
1,607,359,403,000
CONTRIBUTOR
null
@lhoestq opened a clean PR containing only relevant files. old PR #1140
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1231/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1231/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1231", "html_url": "https://github.com/huggingface/datasets/pull/1231", "diff_url": "https://github.com/huggingface/datasets/pull/1231.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1231.patch", "merged_at": 1607359403000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1230/comments
https://api.github.com/repos/huggingface/datasets/issues/1230/events
https://github.com/huggingface/datasets/pull/1230
758,119,342
MDExOlB1bGxSZXF1ZXN0NTMzMzQxNTg0
1,230
Add Urdu fake news dataset
{ "login": "chaitnayabasava", "id": 44389205, "node_id": "MDQ6VXNlcjQ0Mzg5MjA1", "avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chaitnayabasava", "html_url": "https://github.com/chaitnayabasava", "followers_url": "https://api.github.com/users/chaitnayabasava/followers", "following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}", "gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}", "starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions", "organizations_url": "https://api.github.com/users/chaitnayabasava/orgs", "repos_url": "https://api.github.com/users/chaitnayabasava/repos", "events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}", "received_events_url": "https://api.github.com/users/chaitnayabasava/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,311,190,000
1,607,364,295,000
1,607,360,274,000
CONTRIBUTOR
null
@lhoestq opened a clean PR containing only relevant files. old PR #1125
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1230/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1230/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1230", "html_url": "https://github.com/huggingface/datasets/pull/1230", "diff_url": "https://github.com/huggingface/datasets/pull/1230.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1230.patch", "merged_at": 1607360274000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1229/comments
https://api.github.com/repos/huggingface/datasets/issues/1229/events
https://github.com/huggingface/datasets/pull/1229
758,100,707
MDExOlB1bGxSZXF1ZXN0NTMzMzI2OTgw
1,229
Muchocine - Spanish movie reviews dataset
{ "login": "mapmeld", "id": 643918, "node_id": "MDQ6VXNlcjY0MzkxOA==", "avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mapmeld", "html_url": "https://github.com/mapmeld", "followers_url": "https://api.github.com/users/mapmeld/followers", "following_url": "https://api.github.com/users/mapmeld/following{/other_user}", "gists_url": "https://api.github.com/users/mapmeld/gists{/gist_id}", "starred_url": "https://api.github.com/users/mapmeld/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mapmeld/subscriptions", "organizations_url": "https://api.github.com/users/mapmeld/orgs", "repos_url": "https://api.github.com/users/mapmeld/repos", "events_url": "https://api.github.com/users/mapmeld/events{/privacy}", "received_events_url": "https://api.github.com/users/mapmeld/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @mapmeld !\r\nhave you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping me if you have questions or when you're ready for a review", "@lhoestq unfortunately I don't have any more information about where the dataset comes from", "It's fine, you can just add the sections titles back and leave the content with `[More Information Needed]`\r\n\r\n", "added missing sections, updated the Python code ✅ " ]
1,607,307,809,000
1,608,545,349,000
1,608,545,349,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1229/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1229/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1229", "html_url": "https://github.com/huggingface/datasets/pull/1229", "diff_url": "https://github.com/huggingface/datasets/pull/1229.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1229.patch", "merged_at": 1608545349000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1228
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1228/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1228/comments
https://api.github.com/repos/huggingface/datasets/issues/1228/events
https://github.com/huggingface/datasets/pull/1228
758,049,068
MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODI2
1,228
add opus_100 dataset
{ "login": "vasudevgupta7", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vasudevgupta7", "html_url": "https://github.com/vasudevgupta7", "followers_url": "https://api.github.com/users/vasudevgupta7/followers", "following_url": "https://api.github.com/users/vasudevgupta7/following{/other_user}", "gists_url": "https://api.github.com/users/vasudevgupta7/gists{/gist_id}", "starred_url": "https://api.github.com/users/vasudevgupta7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vasudevgupta7/subscriptions", "organizations_url": "https://api.github.com/users/vasudevgupta7/orgs", "repos_url": "https://api.github.com/users/vasudevgupta7/repos", "events_url": "https://api.github.com/users/vasudevgupta7/events{/privacy}", "received_events_url": "https://api.github.com/users/vasudevgupta7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "done." ]
1,607,296,644,000
1,607,525,640,000
1,607,525,640,000
CONTRIBUTOR
null
This PR will add [opus100 dataset](http://opus.nlpl.eu/opus-100.php).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1228/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1228/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1228", "html_url": "https://github.com/huggingface/datasets/pull/1228", "diff_url": "https://github.com/huggingface/datasets/pull/1228.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1228.patch", "merged_at": 1607525639000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1227
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1227/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1227/comments
https://api.github.com/repos/huggingface/datasets/issues/1227/events
https://github.com/huggingface/datasets/pull/1227
758,049,060
MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODIx
1,227
readme: remove link to Google's responsible AI practices
{ "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/stefan-it/followers", "following_url": "https://api.github.com/users/stefan-it/following{/other_user}", "gists_url": "https://api.github.com/users/stefan-it/gists{/gist_id}", "starred_url": "https://api.github.com/users/stefan-it/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stefan-it/subscriptions", "organizations_url": "https://api.github.com/users/stefan-it/orgs", "repos_url": "https://api.github.com/users/stefan-it/repos", "events_url": "https://api.github.com/users/stefan-it/events{/privacy}", "received_events_url": "https://api.github.com/users/stefan-it/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,296,642,000
1,607,330,119,000
1,607,296,841,000
CONTRIBUTOR
null
...maybe we'll find a company that reallly stands behind responsible AI practices ;)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1227/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1227/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1227", "html_url": "https://github.com/huggingface/datasets/pull/1227", "diff_url": "https://github.com/huggingface/datasets/pull/1227.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1227.patch", "merged_at": 1607296841000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1226
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1226/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1226/comments
https://api.github.com/repos/huggingface/datasets/issues/1226/events
https://github.com/huggingface/datasets/pull/1226
758,036,979
MDExOlB1bGxSZXF1ZXN0NTMzMjc2OTU3
1,226
Add menyo_20k_mt dataset
{ "login": "yvonnegitau", "id": 7923902, "node_id": "MDQ6VXNlcjc5MjM5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yvonnegitau", "html_url": "https://github.com/yvonnegitau", "followers_url": "https://api.github.com/users/yvonnegitau/followers", "following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}", "gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}", "starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions", "organizations_url": "https://api.github.com/users/yvonnegitau/orgs", "repos_url": "https://api.github.com/users/yvonnegitau/repos", "events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}", "received_events_url": "https://api.github.com/users/yvonnegitau/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like your PR includes changes about many other files than the ones for menyo 20k mt\r\nCan you create another branch and another PR please ?", "Yes, I will" ]
1,607,292,975,000
1,607,628,134,000
1,607,628,134,000
CONTRIBUTOR
null
Add menyo_20k_mt dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1226/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1226/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1226", "html_url": "https://github.com/huggingface/datasets/pull/1226", "diff_url": "https://github.com/huggingface/datasets/pull/1226.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1226.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1225
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1225/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1225/comments
https://api.github.com/repos/huggingface/datasets/issues/1225/events
https://github.com/huggingface/datasets/pull/1225
758,035,501
MDExOlB1bGxSZXF1ZXN0NTMzMjc1ODcx
1,225
Add Winobias dataset
{ "login": "akshayb7", "id": 29649801, "node_id": "MDQ6VXNlcjI5NjQ5ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/29649801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akshayb7", "html_url": "https://github.com/akshayb7", "followers_url": "https://api.github.com/users/akshayb7/followers", "following_url": "https://api.github.com/users/akshayb7/following{/other_user}", "gists_url": "https://api.github.com/users/akshayb7/gists{/gist_id}", "starred_url": "https://api.github.com/users/akshayb7/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akshayb7/subscriptions", "organizations_url": "https://api.github.com/users/akshayb7/orgs", "repos_url": "https://api.github.com/users/akshayb7/repos", "events_url": "https://api.github.com/users/akshayb7/events{/privacy}", "received_events_url": "https://api.github.com/users/akshayb7/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Will make another pull request with cleaner history" ]
1,607,292,500,000
1,607,323,559,000
1,607,323,250,000
CONTRIBUTOR
null
Pardon me for different commits with same message. There were conflicts after I rebased master while simultaneously pushing my changes to local repo, hence the duplicate entries.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1225/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1225/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1225", "html_url": "https://github.com/huggingface/datasets/pull/1225", "diff_url": "https://github.com/huggingface/datasets/pull/1225.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1225.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1224
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1224/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1224/comments
https://api.github.com/repos/huggingface/datasets/issues/1224/events
https://github.com/huggingface/datasets/pull/1224
758,022,998
MDExOlB1bGxSZXF1ZXN0NTMzMjY2Njg1
1,224
adding conceptnet5
{ "login": "ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ontocord", "html_url": "https://github.com/ontocord", "followers_url": "https://api.github.com/users/ontocord/followers", "following_url": "https://api.github.com/users/ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ontocord/subscriptions", "organizations_url": "https://api.github.com/users/ontocord/orgs", "repos_url": "https://api.github.com/users/ontocord/repos", "events_url": "https://api.github.com/users/ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/ontocord/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you. I'll make those changes. but I'm having problems trying to push my changes to my fork\r\n", "Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n", "Also, what docstring are you recommending?\r\n", "> Hi, I've removed the TODO, and added a README.md. How do I push these changes?\r\n\r\nyou can just commit and push your changes to the same branch as your first commit.", "@ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do", "> @ghomasHudson I've tried it but still getting code quality error. I've removed all blank lines, etc. required by flake8. Don't know what else to do\r\n\r\nDid you run `make style` before committing? When I run it, it fixes some things (e.g. Splitting line 96 which is currently too long).", "I think @yjernite is looking into this. I did \"make style\" but nothing happens", "looks like your PR includes changes about many other files than the ones related to conceptnet5\r\n\r\ncould you create another branch and another PR please ?", "@lhoestq I'm not sure what I did wrong. What did I push that wasn't conceptnet5? How do I see this?\r\n\r\n did this\r\n\r\nmake style\r\nflake8 datasets\r\ngit add datasets/<your_dataset_name>\r\ngit commit\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit pull\r\ngit push -u origin conceptnet5", "Thanks for rebasing and force push :) ", "Yeah! Thank you @lhoestq, @ghomasHudson and @yjernite !" ]
1,607,288,813,000
1,607,531,896,000
1,607,524,637,000
CONTRIBUTOR
null
Adding the conceptnet5 and omcs txt files used to create the conceptnet5 dataset. Conceptnet5 is a common sense dataset. More info can be found here: https://github.com/commonsense/conceptnet5/wiki
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1224/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1224/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1224", "html_url": "https://github.com/huggingface/datasets/pull/1224", "diff_url": "https://github.com/huggingface/datasets/pull/1224.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1224.patch", "merged_at": 1607524637000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1223
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1223/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1223/comments
https://api.github.com/repos/huggingface/datasets/issues/1223/events
https://github.com/huggingface/datasets/pull/1223
758,022,208
MDExOlB1bGxSZXF1ZXN0NTMzMjY2MDc4
1,223
🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw…
{ "login": "timpal0l", "id": 6556710, "node_id": "MDQ6VXNlcjY1NTY3MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timpal0l", "html_url": "https://github.com/timpal0l", "followers_url": "https://api.github.com/users/timpal0l/followers", "following_url": "https://api.github.com/users/timpal0l/following{/other_user}", "gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}", "starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions", "organizations_url": "https://api.github.com/users/timpal0l/orgs", "repos_url": "https://api.github.com/users/timpal0l/repos", "events_url": "https://api.github.com/users/timpal0l/events{/privacy}", "received_events_url": "https://api.github.com/users/timpal0l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,288,574,000
1,607,424,896,000
1,607,424,896,000
CONTRIBUTOR
null
perhaps: @lhoestq 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1223/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1223/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1223", "html_url": "https://github.com/huggingface/datasets/pull/1223", "diff_url": "https://github.com/huggingface/datasets/pull/1223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1223.patch", "merged_at": 1607424896000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1222
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1222/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1222/comments
https://api.github.com/repos/huggingface/datasets/issues/1222/events
https://github.com/huggingface/datasets/pull/1222
758,018,953
MDExOlB1bGxSZXF1ZXN0NTMzMjYzODIx
1,222
Add numeric fused head dataset
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Thanks for adding this @ghomasHudson!\r\n> I added some comments for some of the fields.\r\n> \r\n> Also, I'm not sure about this since I haven't used the library yet, but maybe it's worth adding the identification and resolution as two separate datasets?\r\n\r\nThanks for replying @yanaiela - I hope this will make your dataset more accessible to a wider audience - I've added the changes to the model card you suggested.\r\n\r\nIn terms of the identification and resolution tasks, I've currently added them as separate `splits` in huggingface/datasets so you can load identification like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"identification\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"start_index\": 11, \"end_index\": 12, \"label\": 0}\r\n```\r\nAnd resolution like this:\r\n\r\n```\r\nimport datasets\r\ndataset = datasets.load_dataset(\"numeric_fused_head\", \"resolution\")\r\nprint(dataset[\"train\"][0])\r\n>> {\"tokens\": [\"The\", \"quick\", \"brown\", \"fox\",....], \"head\": [\"AGE\"], \"anchors_indices\": [12], ...}\r\n```", "I hope so too, thanks!\r\n\r\nRe the splits, that makes sense to me." ]
1,607,287,613,000
1,607,426,276,000
1,607,426,275,000
CONTRIBUTOR
null
Adding the [NFH: Numeric Fused Head](https://nlp.biu.ac.il/~lazary/fh/) dataset. Everything looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research so am unable to specify what the default configuration / supervised keys should be. I've filled out the basic info on the model card to the best of my knowledge but it's a little tricky to understand exactly what the fields represent. Dataset author: @yanaiela
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1222/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1222/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1222", "html_url": "https://github.com/huggingface/datasets/pull/1222", "diff_url": "https://github.com/huggingface/datasets/pull/1222.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1222.patch", "merged_at": 1607426275000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1221
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1221/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1221/comments
https://api.github.com/repos/huggingface/datasets/issues/1221/events
https://github.com/huggingface/datasets/pull/1221
758,016,032
MDExOlB1bGxSZXF1ZXN0NTMzMjYxNjkw
1,221
Add HKCanCor
{ "login": "j-chim", "id": 22435209, "node_id": "MDQ6VXNlcjIyNDM1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-chim", "html_url": "https://github.com/j-chim", "followers_url": "https://api.github.com/users/j-chim/followers", "following_url": "https://api.github.com/users/j-chim/following{/other_user}", "gists_url": "https://api.github.com/users/j-chim/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-chim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-chim/subscriptions", "organizations_url": "https://api.github.com/users/j-chim/orgs", "repos_url": "https://api.github.com/users/j-chim/repos", "events_url": "https://api.github.com/users/j-chim/events{/privacy}", "received_events_url": "https://api.github.com/users/j-chim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,286,727,000
1,607,531,658,000
1,607,531,658,000
CONTRIBUTOR
null
This PR adds the [Hong Kong Cantonese Corpus](http://compling.hss.ntu.edu.sg/hkcancor/), by [Luke and Wong 2015](http://compling.hss.ntu.edu.sg/hkcancor/data/LukeWong_Hong-Kong-Cantonese-Corpus.pdf). The dummy data included here was manually created, as the original dataset uses an XML-like format (see a copy hosted [here](https://github.com/fcbond/hkcancor/blob/master/sample/d1_v.txt) for example) that requires a few processing steps.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1221/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1221/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1221", "html_url": "https://github.com/huggingface/datasets/pull/1221", "diff_url": "https://github.com/huggingface/datasets/pull/1221.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1221.patch", "merged_at": 1607531658000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1220/comments
https://api.github.com/repos/huggingface/datasets/issues/1220/events
https://github.com/huggingface/datasets/pull/1220
758,015,894
MDExOlB1bGxSZXF1ZXN0NTMzMjYxNTgw
1,220
add Korean HateSpeech dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It looks like you forgot to `make style` (I forget it a lot too 🤦 )\r\n+ add dummy data", "hi @cceyda 👋, thanks for the hint! it looks like i've run into some other errors though in `_split_generators` or `_generate_examples`. do you have any idea of what's wrong here? 😅", "I get the same errors on another pr too, so it probably has something to do with circleci, waiting on help.", "the `RemoteDatasetTest ` error on the CI is fixed on master so it's fine", "merging since the CI is fixed on master" ]
1,607,286,689,000
1,607,440,869,000
1,607,425,542,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1220/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1220/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1220", "html_url": "https://github.com/huggingface/datasets/pull/1220", "diff_url": "https://github.com/huggingface/datasets/pull/1220.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1220.patch", "merged_at": 1607425542000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1219
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1219/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1219/comments
https://api.github.com/repos/huggingface/datasets/issues/1219/events
https://github.com/huggingface/datasets/pull/1219
758,013,368
MDExOlB1bGxSZXF1ZXN0NTMzMjU5NzMw
1,219
Add Korean NER dataset
{ "login": "jaketae", "id": 25360440, "node_id": "MDQ6VXNlcjI1MzYwNDQw", "avatar_url": "https://avatars.githubusercontent.com/u/25360440?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jaketae", "html_url": "https://github.com/jaketae", "followers_url": "https://api.github.com/users/jaketae/followers", "following_url": "https://api.github.com/users/jaketae/following{/other_user}", "gists_url": "https://api.github.com/users/jaketae/gists{/gist_id}", "starred_url": "https://api.github.com/users/jaketae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jaketae/subscriptions", "organizations_url": "https://api.github.com/users/jaketae/orgs", "repos_url": "https://api.github.com/users/jaketae/repos", "events_url": "https://api.github.com/users/jaketae/events{/privacy}", "received_events_url": "https://api.github.com/users/jaketae/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,285,946,000
1,640,739,059,000
1,607,423,133,000
MEMBER
null
Supersedes #1177 > This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1219/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1219/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1219", "html_url": "https://github.com/huggingface/datasets/pull/1219", "diff_url": "https://github.com/huggingface/datasets/pull/1219.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1219.patch", "merged_at": 1607423133000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1218
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1218/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1218/comments
https://api.github.com/repos/huggingface/datasets/issues/1218/events
https://github.com/huggingface/datasets/pull/1218
758,009,113
MDExOlB1bGxSZXF1ZXN0NTMzMjU2NzIz
1,218
Add WMT20 MLQE 3 shared tasks
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for the comments Quentin!\r\nI integrated them", "It should be ok now!\r\nSorry I wasn't attentive enough.\r\n(tests are currently failing, I understand it's from other datasets)", "merging since the CI is fixed on master" ]
1,607,284,752,000
1,608,046,050,000
1,608,046,049,000
MEMBER
null
3 tasks for the WMT 20 MLQE shared tasks -> 3 different datasets (I re-created #1137 because it was too messy). Note that in L199 `task3.py`, I used `logging.warning` to print some missing data in the train set.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1218/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1218/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1218", "html_url": "https://github.com/huggingface/datasets/pull/1218", "diff_url": "https://github.com/huggingface/datasets/pull/1218.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1218.patch", "merged_at": 1608046049000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1217
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1217/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1217/comments
https://api.github.com/repos/huggingface/datasets/issues/1217/events
https://github.com/huggingface/datasets/pull/1217
758,008,321
MDExOlB1bGxSZXF1ZXN0NTMzMjU2MjU4
1,217
adding DataCommons fact checking
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,284,572,000
1,608,135,768,000
1,608,135,768,000
MEMBER
null
Adding the data from: https://datacommons.org/factcheck/ Had to cheat a bit with the dummy data, as the test doesn't recognize `.txt.gz`: the uncompressed files had to be renamed manually with the `.gz` extension without actually compressing them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1217/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1217/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1217", "html_url": "https://github.com/huggingface/datasets/pull/1217", "diff_url": "https://github.com/huggingface/datasets/pull/1217.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1217.patch", "merged_at": 1608135768000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1216
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1216/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1216/comments
https://api.github.com/repos/huggingface/datasets/issues/1216/events
https://github.com/huggingface/datasets/pull/1216
758,005,982
MDExOlB1bGxSZXF1ZXN0NTMzMjU0ODE2
1,216
Add limit
{ "login": "j-chim", "id": 22435209, "node_id": "MDQ6VXNlcjIyNDM1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-chim", "html_url": "https://github.com/j-chim", "followers_url": "https://api.github.com/users/j-chim/followers", "following_url": "https://api.github.com/users/j-chim/following{/other_user}", "gists_url": "https://api.github.com/users/j-chim/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-chim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-chim/subscriptions", "organizations_url": "https://api.github.com/users/j-chim/orgs", "repos_url": "https://api.github.com/users/j-chim/repos", "events_url": "https://api.github.com/users/j-chim/events{/privacy}", "received_events_url": "https://api.github.com/users/j-chim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "My bad, didn't see this on the open dataset list. Closing this since it overlaps with PR#1256" ]
1,607,283,978,000
1,607,413,931,000
1,607,413,931,000
CONTRIBUTOR
null
This PR adds [LiMiT](https://github.com/ilmgut/limit_dataset), a dataset for literal motion classification/extraction by [Manotas et al., 2020](https://www.aclweb.org/anthology/2020.findings-emnlp.88.pdf).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1216/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1216/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1216", "html_url": "https://github.com/huggingface/datasets/pull/1216", "diff_url": "https://github.com/huggingface/datasets/pull/1216.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1216.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1215
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1215/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1215/comments
https://api.github.com/repos/huggingface/datasets/issues/1215/events
https://github.com/huggingface/datasets/pull/1215
758,002,885
MDExOlB1bGxSZXF1ZXN0NTMzMjUyNjUx
1,215
Add irc disentanglement
{ "login": "dhruvjoshi1998", "id": 32560035, "node_id": "MDQ6VXNlcjMyNTYwMDM1", "avatar_url": "https://avatars.githubusercontent.com/u/32560035?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dhruvjoshi1998", "html_url": "https://github.com/dhruvjoshi1998", "followers_url": "https://api.github.com/users/dhruvjoshi1998/followers", "following_url": "https://api.github.com/users/dhruvjoshi1998/following{/other_user}", "gists_url": "https://api.github.com/users/dhruvjoshi1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/dhruvjoshi1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dhruvjoshi1998/subscriptions", "organizations_url": "https://api.github.com/users/dhruvjoshi1998/orgs", "repos_url": "https://api.github.com/users/dhruvjoshi1998/repos", "events_url": "https://api.github.com/users/dhruvjoshi1998/events{/privacy}", "received_events_url": "https://api.github.com/users/dhruvjoshi1998/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "looks like this PR includes changes about many files other than the ones for irc_disentanglement\r\n\r\nCould you please create a new branch and create another PR please ?", "closing in favor of #1586 " ]
1,607,283,046,000
1,608,135,505,000
1,608,135,505,000
CONTRIBUTOR
null
Added files for the IRC disentanglement dataset. I was unable to test the dummy data due to VPN/proxy issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1215/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1215/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1215", "html_url": "https://github.com/huggingface/datasets/pull/1215", "diff_url": "https://github.com/huggingface/datasets/pull/1215.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1215.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1214
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1214/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1214/comments
https://api.github.com/repos/huggingface/datasets/issues/1214/events
https://github.com/huggingface/datasets/pull/1214
758,002,786
MDExOlB1bGxSZXF1ZXN0NTMzMjUyNTgx
1,214
adding medical-questions-pairs dataset
{ "login": "tuner007", "id": 46425391, "node_id": "MDQ6VXNlcjQ2NDI1Mzkx", "avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuner007", "html_url": "https://github.com/tuner007", "followers_url": "https://api.github.com/users/tuner007/followers", "following_url": "https://api.github.com/users/tuner007/following{/other_user}", "gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuner007/subscriptions", "organizations_url": "https://api.github.com/users/tuner007/orgs", "repos_url": "https://api.github.com/users/tuner007/repos", "events_url": "https://api.github.com/users/tuner007/events{/privacy}", "received_events_url": "https://api.github.com/users/tuner007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,283,012,000
1,607,524,973,000
1,607,524,973,000
CONTRIBUTOR
null
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset : https://github.com/curai/medical-question-pair-dataset Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view
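A minimal usage sketch for this dataset, assuming it is published under the id `medical_questions_pairs` (the id and field names are assumptions based on the PR description, not confirmed by it):

```python
from datasets import load_dataset

# The dataset id below is an assumption based on the PR title.
pairs = load_dataset("medical_questions_pairs")

# Each example pairs two medical questions with a similar/dissimilar label
# (exact field names may differ in the merged loader).
print(pairs["train"][0])
```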
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1214/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1214/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1214", "html_url": "https://github.com/huggingface/datasets/pull/1214", "diff_url": "https://github.com/huggingface/datasets/pull/1214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1214.patch", "merged_at": 1607524973000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1213
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1213/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1213/comments
https://api.github.com/repos/huggingface/datasets/issues/1213/events
https://github.com/huggingface/datasets/pull/1213
757,983,884
MDExOlB1bGxSZXF1ZXN0NTMzMjM4NzEz
1,213
add taskmaster3
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "(you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')", "> (you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')\r\n\r\nOops :(\r\n\r\nThanks for the suggestion, will reduce the size" ]
1,607,277,363,000
1,607,511,910,000
1,607,511,629,000
MEMBER
null
Adding Taskmaster-3 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020. The dataset structure is almost the same as the original dataset, with these two changes: 1. In the original dataset, each `apis` entry has an `args` field, which is a `dict` with variable keys representing the name and value of the args. Here it is converted to a `list` of `dict` with keys `arg_name` and `arg_value`. For example ```python args = {"name.movie": "Mulan", "name.theater": "Mountain AMC 16"} ``` becomes ```python [ { "arg_name": "name.movie", "arg_value": "Mulan" }, { "arg_name": "name.theater", "arg_value": "Mountain AMC 16" } ] ``` 2. Each `apis` entry has a `response`, which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict` with keys `response_name` and `response_value`.
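A minimal sketch of the `args` conversion described above, assuming plain Python dicts as input (the helper name is hypothetical):

```python
# Hypothetical helper illustrating the conversion described in point 1.
def flatten_args(args: dict) -> list:
    """Turn {"name.movie": "Mulan", ...} into [{"arg_name": ..., "arg_value": ...}, ...]."""
    return [{"arg_name": name, "arg_value": value} for name, value in args.items()]

print(flatten_args({"name.movie": "Mulan", "name.theater": "Mountain AMC 16"}))
# [{'arg_name': 'name.movie', 'arg_value': 'Mulan'},
#  {'arg_name': 'name.theater', 'arg_value': 'Mountain AMC 16'}]
```

The same pattern applies to point 2, with `response_name` and `response_value` as keys.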
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1213/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1213/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1213", "html_url": "https://github.com/huggingface/datasets/pull/1213", "diff_url": "https://github.com/huggingface/datasets/pull/1213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1213.patch", "merged_at": 1607511629000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1212
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1212/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1212/comments
https://api.github.com/repos/huggingface/datasets/issues/1212/events
https://github.com/huggingface/datasets/pull/1212
757,978,795
MDExOlB1bGxSZXF1ZXN0NTMzMjM1MTky
1,212
Add Sanskrit Classic texts in datasets
{ "login": "parmarsuraj99", "id": 9317265, "node_id": "MDQ6VXNlcjkzMTcyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/9317265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/parmarsuraj99", "html_url": "https://github.com/parmarsuraj99", "followers_url": "https://api.github.com/users/parmarsuraj99/followers", "following_url": "https://api.github.com/users/parmarsuraj99/following{/other_user}", "gists_url": "https://api.github.com/users/parmarsuraj99/gists{/gist_id}", "starred_url": "https://api.github.com/users/parmarsuraj99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/parmarsuraj99/subscriptions", "organizations_url": "https://api.github.com/users/parmarsuraj99/orgs", "repos_url": "https://api.github.com/users/parmarsuraj99/repos", "events_url": "https://api.github.com/users/parmarsuraj99/events{/privacy}", "received_events_url": "https://api.github.com/users/parmarsuraj99/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "merging since the CI is fixed on master" ]
1,607,275,891,000
1,607,367,848,000
1,607,367,848,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1212/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1212/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1212", "html_url": "https://github.com/huggingface/datasets/pull/1212", "diff_url": "https://github.com/huggingface/datasets/pull/1212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1212.patch", "merged_at": 1607367848000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1211
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1211/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1211/comments
https://api.github.com/repos/huggingface/datasets/issues/1211/events
https://github.com/huggingface/datasets/pull/1211
757,973,719
MDExOlB1bGxSZXF1ZXN0NTMzMjMxNDY3
1,211
Add large spanish corpus
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,274,410,000
1,607,520,996,000
1,607,520,996,000
MEMBER
null
Adds a collection of Spanish corpora that can be useful for pretraining language models. Following a nice suggestion from @yjernite we provide the user with three main ways to preprocess / load: * the whole corpus (17GB!) * one specific sub-corpus * the whole corpus, but returned as a single split. This is useful if you want to cache the whole preprocessing step once and interact with individual sub-corpora. See the dataset card for more details. Ready for review!
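A sketch of the three loading patterns described above; the dataset id `large_spanish_corpus` and the config names are assumptions, not confirmed by the PR:

```python
from datasets import load_dataset

# 1. The whole corpus (very large download); "combined" is an assumed config name.
whole = load_dataset("large_spanish_corpus", "combined")

# 2. One specific sub-corpus; the config name here is purely illustrative.
sub = load_dataset("large_spanish_corpus", "DGT")

# 3. Only one split of a config, so the cached preprocessing is reused.
train_only = load_dataset("large_spanish_corpus", "combined", split="train")
```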
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1211/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1211/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1211", "html_url": "https://github.com/huggingface/datasets/pull/1211", "diff_url": "https://github.com/huggingface/datasets/pull/1211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1211.patch", "merged_at": 1607520996000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1210
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1210/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1210/comments
https://api.github.com/repos/huggingface/datasets/issues/1210/events
https://github.com/huggingface/datasets/pull/1210
757,966,959
MDExOlB1bGxSZXF1ZXN0NTMzMjI2NDQ2
1,210
Add XSUM Hallucination Annotations Dataset
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/users/vineeths96/followers", "following_url": "https://api.github.com/users/vineeths96/following{/other_user}", "gists_url": "https://api.github.com/users/vineeths96/gists{/gist_id}", "starred_url": "https://api.github.com/users/vineeths96/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vineeths96/subscriptions", "organizations_url": "https://api.github.com/users/vineeths96/orgs", "repos_url": "https://api.github.com/users/vineeths96/repos", "events_url": "https://api.github.com/users/vineeths96/events{/privacy}", "received_events_url": "https://api.github.com/users/vineeths96/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq All necessary modifications have been done." ]
1,607,272,819,000
1,608,471,296,000
1,608,137,831,000
CONTRIBUTOR
null
Adding Google [XSum Hallucination Annotations](https://github.com/google-research-datasets/xsum_hallucination_annotations) dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1210/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1210/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1210", "html_url": "https://github.com/huggingface/datasets/pull/1210", "diff_url": "https://github.com/huggingface/datasets/pull/1210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1210.patch", "merged_at": 1608137831000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1209
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1209/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1209/comments
https://api.github.com/repos/huggingface/datasets/issues/1209/events
https://github.com/huggingface/datasets/pull/1209
757,965,934
MDExOlB1bGxSZXF1ZXN0NTMzMjI1NzMw
1,209
[AfriBooms] Dataset exists already
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "It's so cool seeing all these datasets fly by and see how they are still of interest. I did my internship at the research group of Liesbeth Augustinus et al. They're a very kind group of people!", "merging since the CI is fixed on master" ]
1,607,272,513,000
1,607,359,944,000
1,607,359,943,000
MEMBER
null
When trying to add "AfriBooms": https://docs.google.com/spreadsheets/d/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4/edit#gid=1386399609 I noticed that the dataset already exists as a config of Universal Dependencies (universal_dependencies.py). I checked, and the data matches exactly, so the new data link does not provide any new data. This PR improves the config's description a bit by linking to the paper.
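Under that setup, the AfriBooms data should be loadable through the Universal Dependencies loader; the config name below is an assumption:

```python
from datasets import load_dataset

# "af_afribooms" is an assumed config name for the Afrikaans AfriBooms treebank.
afribooms = load_dataset("universal_dependencies", "af_afribooms")
print(afribooms["train"][0])
```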
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1209/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1209/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1209", "html_url": "https://github.com/huggingface/datasets/pull/1209", "diff_url": "https://github.com/huggingface/datasets/pull/1209.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1209.patch", "merged_at": 1607359943000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1208
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1208/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1208/comments
https://api.github.com/repos/huggingface/datasets/issues/1208/events
https://github.com/huggingface/datasets/pull/1208
757,961,368
MDExOlB1bGxSZXF1ZXN0NTMzMjIyMzQ4
1,208
Add HKCanCor
{ "login": "j-chim", "id": 22435209, "node_id": "MDQ6VXNlcjIyNDM1MjA5", "avatar_url": "https://avatars.githubusercontent.com/u/22435209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-chim", "html_url": "https://github.com/j-chim", "followers_url": "https://api.github.com/users/j-chim/followers", "following_url": "https://api.github.com/users/j-chim/following{/other_user}", "gists_url": "https://api.github.com/users/j-chim/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-chim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-chim/subscriptions", "organizations_url": "https://api.github.com/users/j-chim/orgs", "repos_url": "https://api.github.com/users/j-chim/repos", "events_url": "https://api.github.com/users/j-chim/events{/privacy}", "received_events_url": "https://api.github.com/users/j-chim/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,271,283,000
1,607,286,197,000
1,607,286,114,000
CONTRIBUTOR
null
(Apologies, didn't manage the branches properly and the PR got too messy. Going to open a new PR with everything in order)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1208/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1208/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1208", "html_url": "https://github.com/huggingface/datasets/pull/1208", "diff_url": "https://github.com/huggingface/datasets/pull/1208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1208.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1207
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1207/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1207/comments
https://api.github.com/repos/huggingface/datasets/issues/1207/events
https://github.com/huggingface/datasets/pull/1207
757,953,830
MDExOlB1bGxSZXF1ZXN0NTMzMjE3MDA4
1,207
Add msr_genomics_kbcomp Dataset
{ "login": "manandey", "id": 6687858, "node_id": "MDQ6VXNlcjY2ODc4NTg=", "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/manandey", "html_url": "https://github.com/manandey", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "organizations_url": "https://api.github.com/users/manandey/orgs", "repos_url": "https://api.github.com/users/manandey/repos", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "received_events_url": "https://api.github.com/users/manandey/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,269,205,000
1,607,356,517,000
1,607,356,511,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1207/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1207/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1207", "html_url": "https://github.com/huggingface/datasets/pull/1207", "diff_url": "https://github.com/huggingface/datasets/pull/1207.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1207.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1206/comments
https://api.github.com/repos/huggingface/datasets/issues/1206/events
https://github.com/huggingface/datasets/pull/1206
757,952,992
MDExOlB1bGxSZXF1ZXN0NTMzMjE2NDYw
1,206
Adding Enriched WebNLG dataset
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
1,607,268,980,000
1,607,506,832,000
1,607,506,832,000
MEMBER
null
This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.
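A minimal loading sketch, assuming the dataset id `enriched_web_nlg` and the two config names mentioned above:

```python
from datasets import load_dataset

# The dataset id is an assumption; "en" and "de" are the two versions added in this PR.
webnlg_en = load_dataset("enriched_web_nlg", "en")
webnlg_de = load_dataset("enriched_web_nlg", "de")
```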
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1206/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1206/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206", "html_url": "https://github.com/huggingface/datasets/pull/1206", "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1205
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1205/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1205/comments
https://api.github.com/repos/huggingface/datasets/issues/1205/events
https://github.com/huggingface/datasets/pull/1205
757,942,403
MDExOlB1bGxSZXF1ZXN0NTMzMjA4NDI1
1,205
add lst20 with manual download
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "repos_url": "https://api.github.com/users/cstorm125/repos", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead", "@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review." ]
1,607,266,150,000
1,607,531,590,000
1,607,531,590,000
CONTRIBUTOR
null
passed on local: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20 ``` Not sure how to test: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20 ``` ``` LST20 Corpus is a dataset for Thai language processing developed by National Electronics and Computer Technology Center (NECTEC), Thailand. It offers five layers of linguistic annotation: word boundaries, POS tagging, named entities, clause boundaries, and sentence boundaries. At a large scale, it consists of 3,164,002 words, 288,020 named entities, 248,181 clauses, and 74,180 sentences, while it is annotated with 16 distinct POS tags. All 3,745 documents are also annotated with one of 15 news genres. Regarding its sheer size, this dataset is considered large enough for developing joint neural models for NLP. Manually download at https://aiforthai.in.th/corpus.php ```
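Since the corpus requires a manual download, loading presumably goes through `data_dir`; the local path below is hypothetical:

```python
from datasets import load_dataset

# Download the corpus manually from https://aiforthai.in.th/corpus.php first;
# the extraction path below is hypothetical.
lst20 = load_dataset("lst20", data_dir="/path/to/LST20_Corpus")
print(lst20["train"][0])
```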
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1205/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1205/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1205", "html_url": "https://github.com/huggingface/datasets/pull/1205", "diff_url": "https://github.com/huggingface/datasets/pull/1205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1205.patch", "merged_at": 1607531590000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1204
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1204/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1204/comments
https://api.github.com/repos/huggingface/datasets/issues/1204/events
https://github.com/huggingface/datasets/pull/1204
757,939,475
MDExOlB1bGxSZXF1ZXN0NTMzMjA2MzE3
1,204
adding meta_woz dataset
{ "login": "pacman100", "id": 13534540, "node_id": "MDQ6VXNlcjEzNTM0NTQw", "avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pacman100", "html_url": "https://github.com/pacman100", "followers_url": "https://api.github.com/users/pacman100/followers", "following_url": "https://api.github.com/users/pacman100/following{/other_user}", "gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}", "starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pacman100/subscriptions", "organizations_url": "https://api.github.com/users/pacman100/orgs", "repos_url": "https://api.github.com/users/pacman100/repos", "events_url": "https://api.github.com/users/pacman100/events{/privacy}", "received_events_url": "https://api.github.com/users/pacman100/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,265,253,000
1,608,131,125,000
1,608,131,124,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1204/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1204/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1204", "html_url": "https://github.com/huggingface/datasets/pull/1204", "diff_url": "https://github.com/huggingface/datasets/pull/1204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1204.patch", "merged_at": 1608131124000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1203
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1203/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1203/comments
https://api.github.com/repos/huggingface/datasets/issues/1203/events
https://github.com/huggingface/datasets/pull/1203
757,935,170
MDExOlB1bGxSZXF1ZXN0NTMzMjAzMTc0
1,203
Add Neural Code Search Dataset
{ "login": "vinaykudari", "id": 34424769, "node_id": "MDQ6VXNlcjM0NDI0NzY5", "avatar_url": "https://avatars.githubusercontent.com/u/34424769?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vinaykudari", "html_url": "https://github.com/vinaykudari", "followers_url": "https://api.github.com/users/vinaykudari/followers", "following_url": "https://api.github.com/users/vinaykudari/following{/other_user}", "gists_url": "https://api.github.com/users/vinaykudari/gists{/gist_id}", "starred_url": "https://api.github.com/users/vinaykudari/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vinaykudari/subscriptions", "organizations_url": "https://api.github.com/users/vinaykudari/orgs", "repos_url": "https://api.github.com/users/vinaykudari/repos", "events_url": "https://api.github.com/users/vinaykudari/events{/privacy}", "received_events_url": "https://api.github.com/users/vinaykudari/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ", "looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?", "> looks like this PR includes changes about many other files than the ones for Code Search\r\n> \r\n> can you create another branch and another PR please ?\r\n\r\nOkay sure" ]
1,607,263,959,000
1,607,532,015,000
1,607,532,015,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1203/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1203/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1203", "html_url": "https://github.com/huggingface/datasets/pull/1203", "diff_url": "https://github.com/huggingface/datasets/pull/1203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1203.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1202
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1202/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1202/comments
https://api.github.com/repos/huggingface/datasets/issues/1202/events
https://github.com/huggingface/datasets/pull/1202
757,934,408
MDExOlB1bGxSZXF1ZXN0NTMzMjAyNjE0
1,202
Medical question pairs
{ "login": "tuner007", "id": 46425391, "node_id": "MDQ6VXNlcjQ2NDI1Mzkx", "avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuner007", "html_url": "https://github.com/tuner007", "followers_url": "https://api.github.com/users/tuner007/followers", "following_url": "https://api.github.com/users/tuner007/following{/other_user}", "gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuner007/subscriptions", "organizations_url": "https://api.github.com/users/tuner007/orgs", "repos_url": "https://api.github.com/users/tuner007/repos", "events_url": "https://api.github.com/users/tuner007/events{/privacy}", "received_events_url": "https://api.github.com/users/tuner007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,263,747,000
1,607,276,488,000
1,607,276,488,000
CONTRIBUTOR
null
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset : https://github.com/curai/medical-question-pair-dataset Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view **No splits added**
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1202/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1202/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1202", "html_url": "https://github.com/huggingface/datasets/pull/1202", "diff_url": "https://github.com/huggingface/datasets/pull/1202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1202.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1201
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1201/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1201/comments
https://api.github.com/repos/huggingface/datasets/issues/1201/events
https://github.com/huggingface/datasets/pull/1201
757,927,941
MDExOlB1bGxSZXF1ZXN0NTMzMTk3OTI2
1,201
adding medical-questions-pairs
{ "login": "tuner007", "id": 46425391, "node_id": "MDQ6VXNlcjQ2NDI1Mzkx", "avatar_url": "https://avatars.githubusercontent.com/u/46425391?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tuner007", "html_url": "https://github.com/tuner007", "followers_url": "https://api.github.com/users/tuner007/followers", "following_url": "https://api.github.com/users/tuner007/following{/other_user}", "gists_url": "https://api.github.com/users/tuner007/gists{/gist_id}", "starred_url": "https://api.github.com/users/tuner007/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tuner007/subscriptions", "organizations_url": "https://api.github.com/users/tuner007/orgs", "repos_url": "https://api.github.com/users/tuner007/repos", "events_url": "https://api.github.com/users/tuner007/events{/privacy}", "received_events_url": "https://api.github.com/users/tuner007/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,261,812,000
1,607,261,984,000
1,607,261,972,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1201/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1201/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1201", "html_url": "https://github.com/huggingface/datasets/pull/1201", "diff_url": "https://github.com/huggingface/datasets/pull/1201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1201.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1200
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1200/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1200/comments
https://api.github.com/repos/huggingface/datasets/issues/1200/events
https://github.com/huggingface/datasets/pull/1200
757,926,823
MDExOlB1bGxSZXF1ZXN0NTMzMTk3MDk0
1,200
Update ADD_NEW_DATASET.md
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,261,492,000
1,607,329,959,000
1,607,329,959,000
CONTRIBUTOR
null
Windows needs special treatment again: unfortunately, adding `torch` to the requirements does not work well (it crashes the installation). Users should first install torch manually and then continue with the other commands. This issue arises all the time when adding torch as a dependency, but because so many novice users seem to participate in adding datasets, it may be useful to add an explicit note for Windows users to ensure that they do not run into issues.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1200/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1200/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1200", "html_url": "https://github.com/huggingface/datasets/pull/1200", "diff_url": "https://github.com/huggingface/datasets/pull/1200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1200.patch", "merged_at": 1607329959000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1199
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1199/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1199/comments
https://api.github.com/repos/huggingface/datasets/issues/1199/events
https://github.com/huggingface/datasets/pull/1199
757,909,237
MDExOlB1bGxSZXF1ZXN0NTMzMTg0Nzk3
1,199
Turkish NER dataset, script works fine, couldn't generate dummy data
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .", "We can close this PR since a new PR was open at #1268 " ]
1,607,256,003,000
1,608,135,204,000
1,608,135,204,000
CONTRIBUTOR
null
I've written the script (Turkish_NER.py) that includes the dataset. The dataset is a zip inside another zip, and it's extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After running the script with no error messages, I get the dataset's .arrow file, the LICENSE, and dataset_info.json.
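One way to handle a zip nested inside another zip in `_split_generators` is to extract twice with the download manager; the URL, archive name, and file name below are placeholders, not the PR's actual values:

```python
import os
import datasets

_URL = "https://example.org/turkish_ner.zip"  # placeholder URL

# Sketch of a _split_generators method on the dataset's GeneratorBasedBuilder.
def _split_generators(self, dl_manager):
    # Extract the outer archive, then the inner archive it contains.
    outer_dir = dl_manager.download_and_extract(_URL)
    inner_zip = os.path.join(outer_dir, "inner_archive.zip")  # placeholder name
    data_dir = dl_manager.extract(inner_zip)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"filepath": os.path.join(data_dir, "data.DUMP")},  # placeholder
        )
    ]
```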
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1199/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1199/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1199", "html_url": "https://github.com/huggingface/datasets/pull/1199", "diff_url": "https://github.com/huggingface/datasets/pull/1199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1199.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1198
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1198/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1198/comments
https://api.github.com/repos/huggingface/datasets/issues/1198/events
https://github.com/huggingface/datasets/pull/1198
757,903,453
MDExOlB1bGxSZXF1ZXN0NTMzMTgwNjAz
1,198
Add ALT
{ "login": "chameleonTK", "id": 6429850, "node_id": "MDQ6VXNlcjY0Mjk4NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chameleonTK", "html_url": "https://github.com/chameleonTK", "followers_url": "https://api.github.com/users/chameleonTK/followers", "following_url": "https://api.github.com/users/chameleonTK/following{/other_user}", "gists_url": "https://api.github.com/users/chameleonTK/gists{/gist_id}", "starred_url": "https://api.github.com/users/chameleonTK/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chameleonTK/subscriptions", "organizations_url": "https://api.github.com/users/chameleonTK/orgs", "repos_url": "https://api.github.com/users/chameleonTK/repos", "events_url": "https://api.github.com/users/chameleonTK/events{/privacy}", "received_events_url": "https://api.github.com/users/chameleonTK/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The `RemoteDatasetTest` errors in the CI are fixed on master, so it's fine.", "Used the `Translation` feature type and fixed a few typos as you suggested.", "Sorry, I made a mistake. Please see the new PR here: https://github.com/huggingface/datasets/pull/1436" ]
1,607,253,930,000
1,607,573,892,000
1,607,573,892,000
CONTRIBUTOR
null
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1198/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1198/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1198", "html_url": "https://github.com/huggingface/datasets/pull/1198", "diff_url": "https://github.com/huggingface/datasets/pull/1198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1198.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1197
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1197/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1197/comments
https://api.github.com/repos/huggingface/datasets/issues/1197/events
https://github.com/huggingface/datasets/pull/1197
757,900,160
MDExOlB1bGxSZXF1ZXN0NTMzMTc4MTIz
1,197
add taskmaster-2
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,252,718,000
1,607,354,563,000
1,607,354,563,000
MEMBER
null
Adding taskmaster-2 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1197/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1197/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1197", "html_url": "https://github.com/huggingface/datasets/pull/1197", "diff_url": "https://github.com/huggingface/datasets/pull/1197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1197.patch", "merged_at": 1607354563000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1196
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1196/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1196/comments
https://api.github.com/repos/huggingface/datasets/issues/1196/events
https://github.com/huggingface/datasets/pull/1196
757,894,920
MDExOlB1bGxSZXF1ZXN0NTMzMTc0NjU2
1,196
Add IWSLT'15 English-Vietnamese machine translation Data
{ "login": "Nilanshrajput", "id": 28673745, "node_id": "MDQ6VXNlcjI4NjczNzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/28673745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nilanshrajput", "html_url": "https://github.com/Nilanshrajput", "followers_url": "https://api.github.com/users/Nilanshrajput/followers", "following_url": "https://api.github.com/users/Nilanshrajput/following{/other_user}", "gists_url": "https://api.github.com/users/Nilanshrajput/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nilanshrajput/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nilanshrajput/subscriptions", "organizations_url": "https://api.github.com/users/Nilanshrajput/orgs", "repos_url": "https://api.github.com/users/Nilanshrajput/repos", "events_url": "https://api.github.com/users/Nilanshrajput/events{/privacy}", "received_events_url": "https://api.github.com/users/Nilanshrajput/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks! Feel free to ping me once you've added the tags in the dataset card :)", "Merging since the CI is fixed on master." ]
1,607,250,991,000
1,607,711,211,000
1,607,711,211,000
CONTRIBUTOR
null
Preprocessed dataset for IWSLT'15 English-Vietnamese machine translation, from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1196/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1196/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1196", "html_url": "https://github.com/huggingface/datasets/pull/1196", "diff_url": "https://github.com/huggingface/datasets/pull/1196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1196.patch", "merged_at": 1607711211000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1195/comments
https://api.github.com/repos/huggingface/datasets/issues/1195/events
https://github.com/huggingface/datasets/pull/1195
757,889,045
MDExOlB1bGxSZXF1ZXN0NTMzMTcwMjY2
1,195
addition of py_ast
{ "login": "reshinthadithyan", "id": 36307201, "node_id": "MDQ6VXNlcjM2MzA3MjAx", "avatar_url": "https://avatars.githubusercontent.com/u/36307201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/reshinthadithyan", "html_url": "https://github.com/reshinthadithyan", "followers_url": "https://api.github.com/users/reshinthadithyan/followers", "following_url": "https://api.github.com/users/reshinthadithyan/following{/other_user}", "gists_url": "https://api.github.com/users/reshinthadithyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/reshinthadithyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/reshinthadithyan/subscriptions", "organizations_url": "https://api.github.com/users/reshinthadithyan/orgs", "repos_url": "https://api.github.com/users/reshinthadithyan/repos", "events_url": "https://api.github.com/users/reshinthadithyan/events{/privacy}", "received_events_url": "https://api.github.com/users/reshinthadithyan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \"children\": datasets.Sequence(datasets.Value(\"int32\")),\r\n },\r\n)\r\n```\r\n\r\nHere are a few more things to fix before we can move forward:\r\n- the class name needs to be the CamelCase equivalent of the script name, so here it will have to be `PyAst`\r\n- the `README.md` needs to have the tags at the top\r\n- The homepage/info list at the top should be in the same format as the template (added a suggestion)\r\n- You should add the dataset tags and field description to the README as described here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nGood luck, let us know if you need any help!", "Hello @yjernite, changes have been made as we talked. Hope this would suffice. Thanks. Feel free to point out any room to improvement.", "Good progress! Here's what still needs to be done:\r\n- first, you need to rebase to master for the tests to pass :)\r\n- the information in your `Data Fields` paragraph should go into `Data Instances`. Data fields should describe the fields one by one, as in e.g. https://github.com/huggingface/datasets/tree/master/datasets/eli5#data-fields\r\n- you still need to add the YAML tags obtained with the tagging app\r\n\r\nShould be good to go after that!", "Hello @yjernite, changes as talked are being done.", "Looks like this PR includes changes about many other files than the ones for py_ast\r\n\r\nCould you create another branch and another PR please ?" ]
1,607,248,852,000
1,607,408,364,000
1,607,408,364,000
CONTRIBUTOR
null
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), keeping only programs that parse and have at most 30,000 nodes in the AST, and aiming to remove obfuscated files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1195/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1195/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1195", "html_url": "https://github.com/huggingface/datasets/pull/1195", "diff_url": "https://github.com/huggingface/datasets/pull/1195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1195.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1194/comments
https://api.github.com/repos/huggingface/datasets/issues/1194/events
https://github.com/huggingface/datasets/pull/1194
757,880,647
MDExOlB1bGxSZXF1ZXN0NTMzMTY0MDcz
1,194
Add msr_text_compression
{ "login": "jeromeku", "id": 2455711, "node_id": "MDQ6VXNlcjI0NTU3MTE=", "avatar_url": "https://avatars.githubusercontent.com/u/2455711?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeromeku", "html_url": "https://github.com/jeromeku", "followers_url": "https://api.github.com/users/jeromeku/followers", "following_url": "https://api.github.com/users/jeromeku/following{/other_user}", "gists_url": "https://api.github.com/users/jeromeku/gists{/gist_id}", "starred_url": "https://api.github.com/users/jeromeku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeromeku/subscriptions", "organizations_url": "https://api.github.com/users/jeromeku/orgs", "repos_url": "https://api.github.com/users/jeromeku/repos", "events_url": "https://api.github.com/users/jeromeku/events{/privacy}", "received_events_url": "https://api.github.com/users/jeromeku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The `RemoteDatasetTest` error in the CI is fixed on master, so it's fine." ]
1,607,245,571,000
1,607,511,225,000
1,607,511,225,000
CONTRIBUTOR
null
Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1194/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1194/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1194", "html_url": "https://github.com/huggingface/datasets/pull/1194", "diff_url": "https://github.com/huggingface/datasets/pull/1194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1194.patch", "merged_at": 1607511225000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1193/comments
https://api.github.com/repos/huggingface/datasets/issues/1193/events
https://github.com/huggingface/datasets/pull/1193
757,840,830
MDExOlB1bGxSZXF1ZXN0NTMzMTM1NDAy
1,193
add taskmaster-1
{ "login": "patil-suraj", "id": 27137566, "node_id": "MDQ6VXNlcjI3MTM3NTY2", "avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patil-suraj", "html_url": "https://github.com/patil-suraj", "followers_url": "https://api.github.com/users/patil-suraj/followers", "following_url": "https://api.github.com/users/patil-suraj/following{/other_user}", "gists_url": "https://api.github.com/users/patil-suraj/gists{/gist_id}", "starred_url": "https://api.github.com/users/patil-suraj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patil-suraj/subscriptions", "organizations_url": "https://api.github.com/users/patil-suraj/orgs", "repos_url": "https://api.github.com/users/patil-suraj/repos", "events_url": "https://api.github.com/users/patil-suraj/events{/privacy}", "received_events_url": "https://api.github.com/users/patil-suraj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,227,797,000
1,607,354,604,000
1,607,353,719,000
MEMBER
null
Adding Taskmaster-1 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1193/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1193/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1193", "html_url": "https://github.com/huggingface/datasets/pull/1193", "diff_url": "https://github.com/huggingface/datasets/pull/1193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1193.patch", "merged_at": 1607353719000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1192
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1192/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1192/comments
https://api.github.com/repos/huggingface/datasets/issues/1192/events
https://github.com/huggingface/datasets/pull/1192
757,839,671
MDExOlB1bGxSZXF1ZXN0NTMzMTM0NjI3
1,192
Add NewsPH_NLI dataset
{ "login": "anaerobeth", "id": 3663322, "node_id": "MDQ6VXNlcjM2NjMzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anaerobeth", "html_url": "https://github.com/anaerobeth", "followers_url": "https://api.github.com/users/anaerobeth/followers", "following_url": "https://api.github.com/users/anaerobeth/following{/other_user}", "gists_url": "https://api.github.com/users/anaerobeth/gists{/gist_id}", "starred_url": "https://api.github.com/users/anaerobeth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anaerobeth/subscriptions", "organizations_url": "https://api.github.com/users/anaerobeth/orgs", "repos_url": "https://api.github.com/users/anaerobeth/repos", "events_url": "https://api.github.com/users/anaerobeth/events{/privacy}", "received_events_url": "https://api.github.com/users/anaerobeth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,227,231,000
1,607,355,583,000
1,607,355,583,000
CONTRIBUTOR
null
This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. It was constructed by exploiting the structure of news articles and contains 600,000 premise-hypothesis pairs in a 70-15-15 split for training, validation, and testing. Link to the paper: https://arxiv.org/pdf/2010.11574.pdf Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1192/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1192/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1192", "html_url": "https://github.com/huggingface/datasets/pull/1192", "diff_url": "https://github.com/huggingface/datasets/pull/1192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1192.patch", "merged_at": 1607355583000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1191/comments
https://api.github.com/repos/huggingface/datasets/issues/1191/events
https://github.com/huggingface/datasets/pull/1191
757,836,654
MDExOlB1bGxSZXF1ZXN0NTMzMTMyNTg1
1,191
Added Translator Human Parity Data For a Chinese-English news transla…
{ "login": "leoxzhao", "id": 7915719, "node_id": "MDQ6VXNlcjc5MTU3MTk=", "avatar_url": "https://avatars.githubusercontent.com/u/7915719?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leoxzhao", "html_url": "https://github.com/leoxzhao", "followers_url": "https://api.github.com/users/leoxzhao/followers", "following_url": "https://api.github.com/users/leoxzhao/following{/other_user}", "gists_url": "https://api.github.com/users/leoxzhao/gists{/gist_id}", "starred_url": "https://api.github.com/users/leoxzhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leoxzhao/subscriptions", "organizations_url": "https://api.github.com/users/leoxzhao/orgs", "repos_url": "https://api.github.com/users/leoxzhao/repos", "events_url": "https://api.github.com/users/leoxzhao/events{/privacy}", "received_events_url": "https://api.github.com/users/leoxzhao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Can you run `make style` to format the code and fix the CI please ?", "> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.", "Also, I attempted to see if I can get the source Chinese sentences from `wmt17` dataset. But this call `data = load_dataset('wmt17', \"zh-en\")` failed with this error: `FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz`. I think it should be possible and fairly straightforward to get the pairing source sentences from it. I just can not test it right now.", "The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
1,607,225,653,000
1,607,520,165,000
1,607,520,165,000
CONTRIBUTOR
null
…tion system, from the open dataset list for the Dataset sprint, Microsoft Datasets tab.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1191/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1191/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1191", "html_url": "https://github.com/huggingface/datasets/pull/1191", "diff_url": "https://github.com/huggingface/datasets/pull/1191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1191.patch", "merged_at": 1607520165000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1190/comments
https://api.github.com/repos/huggingface/datasets/issues/1190/events
https://github.com/huggingface/datasets/pull/1190
757,833,698
MDExOlB1bGxSZXF1ZXN0NTMzMTMwNTM0
1,190
Add Fake News Detection in Filipino dataset
{ "login": "anaerobeth", "id": 3663322, "node_id": "MDQ6VXNlcjM2NjMzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anaerobeth", "html_url": "https://github.com/anaerobeth", "followers_url": "https://api.github.com/users/anaerobeth/followers", "following_url": "https://api.github.com/users/anaerobeth/following{/other_user}", "gists_url": "https://api.github.com/users/anaerobeth/gists{/gist_id}", "starred_url": "https://api.github.com/users/anaerobeth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anaerobeth/subscriptions", "organizations_url": "https://api.github.com/users/anaerobeth/orgs", "repos_url": "https://api.github.com/users/anaerobeth/repos", "events_url": "https://api.github.com/users/anaerobeth/events{/privacy}", "received_events_url": "https://api.github.com/users/anaerobeth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https://www.aclweb.org/anthology/2020.lrec-1.316/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change?", "Hi Jan, please go ahead and update. I see you are also in the sprint slack channel. Let me know if what else needs updating. Thanks.\r\n" ]
1,607,224,335,000
1,607,355,567,000
1,607,355,567,000
CONTRIBUTOR
null
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpus in Filipino. It contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html Link to the dataset/repo: https://github.com/jcblaisecruz02/Tagalog-fake-news
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1190/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1190/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1190", "html_url": "https://github.com/huggingface/datasets/pull/1190", "diff_url": "https://github.com/huggingface/datasets/pull/1190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1190.patch", "merged_at": 1607355567000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1189/comments
https://api.github.com/repos/huggingface/datasets/issues/1189/events
https://github.com/huggingface/datasets/pull/1189
757,831,035
MDExOlB1bGxSZXF1ZXN0NTMzMTI4NjY1
1,189
Add Dengue dataset in Filipino
{ "login": "anaerobeth", "id": 3663322, "node_id": "MDQ6VXNlcjM2NjMzMjI=", "avatar_url": "https://avatars.githubusercontent.com/u/3663322?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anaerobeth", "html_url": "https://github.com/anaerobeth", "followers_url": "https://api.github.com/users/anaerobeth/followers", "following_url": "https://api.github.com/users/anaerobeth/following{/other_user}", "gists_url": "https://api.github.com/users/anaerobeth/gists{/gist_id}", "starred_url": "https://api.github.com/users/anaerobeth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anaerobeth/subscriptions", "organizations_url": "https://api.github.com/users/anaerobeth/orgs", "repos_url": "https://api.github.com/users/anaerobeth/repos", "events_url": "https://api.github.com/users/anaerobeth/events{/privacy}", "received_events_url": "https://api.github.com/users/anaerobeth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,607,223,047,000
1,607,355,538,000
1,607,355,538,000
CONTRIBUTOR
null
This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled with one or more of five classes (a sample can belong to multiple classes). The examples were collected as tweets. Link to the paper: https://ieeexplore.ieee.org/document/8459963 Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1189/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1189/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1189", "html_url": "https://github.com/huggingface/datasets/pull/1189", "diff_url": "https://github.com/huggingface/datasets/pull/1189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1189.patch", "merged_at": 1607355538000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1188/comments
https://api.github.com/repos/huggingface/datasets/issues/1188/events
https://github.com/huggingface/datasets/pull/1188
757,827,407
MDExOlB1bGxSZXF1ZXN0NTMzMTI2MTcw
1,188
adding hind_encorp dataset
{ "login": "rahul-art", "id": 56379013, "node_id": "MDQ6VXNlcjU2Mzc5MDEz", "avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rahul-art", "html_url": "https://github.com/rahul-art", "followers_url": "https://api.github.com/users/rahul-art/followers", "following_url": "https://api.github.com/users/rahul-art/following{/other_user}", "gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}", "starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions", "organizations_url": "https://api.github.com/users/rahul-art/orgs", "repos_url": "https://api.github.com/users/rahul-art/repos", "events_url": "https://api.github.com/users/rahul-art/events{/privacy}", "received_events_url": "https://api.github.com/users/rahul-art/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "help needed in dummy data", "extension of the file is .plaintext so dummy data generation is failing\r\n", "you can add the `--match_text_file \"*.plaintext\"` flag when generating the dummy data\r\n\r\nalso it looks like the PR is empty, is this expected ?", "yes it is expected because I made all my changes in PR #1186 then I again run code and open PR #1188 to see if this time test passes or not only so there is no code change from #1186 to #1188 \r\ni tried --match_text_file \"*.plaintext\" this time it is also not generating dummy data don't know why", "well this PR includes no code change at all, can you make sure you added your changes in this one ?", "feel free to ping me when you have added the files so I can take a look and help you with the dummy data", "how to do that i dont know did i have to open new PR\r\n", " actually all my changes are visible in #1186 but don't know how to show same changes here", "these are a the which i did in #1186 and same in #1188 \r\n![1](https://user-images.githubusercontent.com/56379013/101646577-b4864500-3a5d-11eb-8a5a-91b1b441040a.png)\r\n![2](https://user-images.githubusercontent.com/56379013/101646965-32e2e700-3a5e-11eb-94d9-276e602c6ded.png)\r\n![4](https://user-images.githubusercontent.com/56379013/101646989-38d8c800-3a5e-11eb-92bb-d9c4cb2c3595.png)\r\n![5](https://user-images.githubusercontent.com/56379013/101647017-41c99980-3a5e-11eb-87cf-5268e79df19d.png)\r\n![6](https://user-images.githubusercontent.com/56379013/101647038-48581100-3a5e-11eb-8d05-f67834fcaa7b.png)\r\n\r\n![8](https://user-images.githubusercontent.com/56379013/101647080-55750000-3a5e-11eb-8455-8936a35b35c2.png)\r\n![9](https://user-images.githubusercontent.com/56379013/101647084-55750000-3a5e-11eb-988e-ae87f0b252a0.png)\r\n![10](https://user-images.githubusercontent.com/56379013/101647182-6f164780-3a5e-11eb-8af3-f0b0186483c9.png)\r\n![11](https://user-images.githubusercontent.com/56379013/101647230-7c333680-3a5e-11eb-9aeb-2b4ce65965e0.png)\r\n![13](https://user-images.githubusercontent.com/56379013/101647257-848b7180-3a5e-11eb-871c-2fd77b047320.png)\r\n![14](https://user-images.githubusercontent.com/56379013/101647268-89502580-3a5e-11eb-9e2a-b9f7ff1fc95e.png)\r\nthese same codes are in both #1186 and #1188 so because it is already present from PR #1186 because of that it is showing zeor code change in #1188 because it is already present from #1186 how i can show or highlight those changes\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n", "well for me https://github.com/huggingface/datasets/pull/1188/files is blank", "This PR tries to merge the master branch of you fork into this repo, however I can't find changes with your files inside your master branch.\r\n\r\nMaybe you can fork again the repo and try to create another PR ?", "@lhoestq i opened a new pr #1438 but this time it fails many circl ci tests", "Closing this one since a new PR was created" ]
1,607,221,125,000
1,607,708,441,000
1,607,708,441,000
CONTRIBUTOR
null
Adding the HindEnCorp 0.5 dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1188/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1188/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1188", "html_url": "https://github.com/huggingface/datasets/pull/1188", "diff_url": "https://github.com/huggingface/datasets/pull/1188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1188.patch", "merged_at": null }
true