holylovenia committed
Commit f27ea83
1 Parent(s): a9f4d11

Upload README.md with huggingface_hub

Files changed (1): README.md (+10 −10)
README.md CHANGED
---

The TyDIQA_ID-NLI dataset is derived from the TyDIQA_ID question answering dataset, using named entity recognition (NER), chunking tags, regular expressions, and embedding similarity to construct its contradiction sets. Beyond the premise, hypothesis, and label columns, the dataset includes additional properties aligned with the NER and chunking tags. It is designed for Natural Language Inference (NLI) tasks and draws on diverse sources for broad coverage; each data instance contains a premise, a hypothesis, a label, and additional properties relevant to NLI evaluation.
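As a rough illustration of the schema described above, an instance can be thought of as a premise/hypothesis/label record. This is a sketch only: the field names follow the description, but the values and the exact label set are assumptions, not taken from the dataset.

```python
# Hypothetical TyDIQA_ID-NLI-style instance; values are invented for illustration.
instance = {
    "premise": "Soekarno adalah presiden pertama Indonesia.",
    "hypothesis": "Presiden pertama Indonesia adalah Soekarno.",
    "label": "entailment",
}

# Typical NLI label set (assumed; check the dataset features for the exact labels).
NLI_LABELS = {"entailment", "neutral", "contradiction"}

def is_valid_instance(row):
    """Check that a row has non-empty premise/hypothesis text and a known label."""
    return bool(row.get("premise")) and bool(row.get("hypothesis")) \
        and row.get("label") in NLI_LABELS

print(is_valid_instance(instance))  # True
```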
  ## Languages
ind

## Supported Tasks

Textual Entailment
## Dataset Usage

### Using `datasets` library

```
from datasets import load_dataset

dset = load_dataset("SEACrowd/tydiqa_id_nli", trust_remote_code=True)
```
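Once loaded, each split behaves like a sequence of row dicts, so ordinary Python filtering applies. A minimal sketch over hypothetical rows (the real dataset is loaded as above; these values are invented):

```python
# Hypothetical rows mimicking the premise/hypothesis/label columns; values invented.
rows = [
    {"premise": "p1", "hypothesis": "h1", "label": "entailment"},
    {"premise": "p2", "hypothesis": "h2", "label": "contradiction"},
    {"premise": "p3", "hypothesis": "h3", "label": "neutral"},
]

# Keep only contradiction pairs, e.g. to inspect the NER/Regex-derived contradiction set.
contradictions = [r for r in rows if r["label"] == "contradiction"]
print(len(contradictions))  # 1
```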
### Using `seacrowd` library

```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("tydiqa_id_nli", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("tydiqa_id_nli"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```

More details on how to use the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).

## Dataset Homepage