harshit-gupta committed on
Commit 0aae9b8
1 Parent(s): d63bad1

Further Updated README


-> Added more details regarding the dataset
-> Added relevant links to the GitHub repo
-> Removed unnecessary fields

Files changed (1)
  1. README.md +47 -5
README.md CHANGED
@@ -62,26 +62,46 @@ ficle_data = load_dataset("tathagataraha/ficle")

## Dataset Description

- * **GitHub Repo:**
+ * **GitHub Repo:** https://github.com/blitzprecision/FICLE
* **Paper:**
* **Point of Contact:**

### Dataset Summary

+ The FICLE dataset is a derivative of the FEVER dataset, which is a collection of 185,445 claims generated by modifying sentences obtained from Wikipedia.
+ These claims were then verified without knowledge of the original sentences they were derived from. Each sample in the FEVER dataset consists of a claim sentence, a context sentence extracted from a Wikipedia URL as evidence, and a type label indicating whether the claim is supported, refuted, or lacks sufficient information.
+
### Languages

The FICLE Dataset contains only English.

## Dataset Structure

- ### Data Instances
-
### Data Fields

- * `content`:
+ * `Claim (string)`:
+ * `Context (string)`:
+ * `Source (string)`:
+ * `Source Indices (string)`:
+ * `Relation (string)`:
+ * `Relation Indices (string)`:
+ * `Target (string)`:
+ * `Target Indices (string)`:
+ * `Inconsistent Claim Component (string)`:
+ * `Inconsistent Context-Span (string)`:
+ * `Inconsistent Context-Span Indices (string)`:
+ * `Inconsistency Type (string)`:
+ * `Fine-grained Inconsistent Entity-Type (string)`:
+ * `Coarse Inconsistent Entity-Type (string)`:
+


### Data Splits
+ The FICLE dataset comprises a total of 8,055 samples in the English language, each representing a different instance of inconsistency.
+ These inconsistencies are categorized into five types: Taxonomic Relations (4,842 samples), Negation (1,630 samples), Set Based (642 samples), Gradable (526 samples), and Simple (415 samples).
+
+ Within the dataset, there are six possible components that contribute to the inconsistencies found in the claim sentences.
+ These components are distributed as follows: Target-Head (3,960 samples), Target-Modifier (1,529 samples), Relation-Head (951 samples), Relation-Modifier (1,534 samples), Source-Head (45 samples), and Source-Modifier (36 samples).

The dataset is split into `train`, `validation`, and `test`.
* `train`: 6.44k rows
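The hunk header above shows the dataset being loaded with `load_dataset("tathagataraha/ficle")`. Below is a minimal sketch of inspecting the splits and fields; the column names used (e.g. `Claim`, `Context`, `Inconsistency Type`) are assumptions taken from the Data Fields list in this diff, not something the commit confirms.

```python
# Minimal sketch: load FICLE from the Hugging Face Hub and inspect it.
# The repo id "tathagataraha/ficle" comes from the hunk header above;
# the column names are assumptions taken from the Data Fields list.
from collections import Counter

from datasets import load_dataset

ficle_data = load_dataset("tathagataraha/ficle")

# Print each split with its row count (the README lists train at ~6.44k rows).
for split_name, split in ficle_data.items():
    print(split_name, split.num_rows)

# Peek at one sample; adjust the keys if the Hub schema differs.
sample = ficle_data["train"][0]
print(sample["Claim"])
print(sample["Context"])
print(sample["Inconsistency Type"])

# Tally inconsistency types over all splits; the README reports five types,
# e.g. Taxonomic Relations at 4,842 of the 8,055 samples.
type_counts = Counter()
for split in ficle_data.values():
    type_counts.update(split["Inconsistency Type"])
print(type_counts)
```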
@@ -92,12 +112,34 @@ The dataset is split into `train`, `validation`, and `test`.

### Curation Rationale

- ### Source Data
+ We propose a linguistically enriched dataset to help detect inconsistencies and explain them.
+ To this end, the broad requirements are to locate where the inconsistency lies between a claim and a context, and to have a classification scheme for better explainability.

### Data Collection and Preprocessing

+ The FICLE dataset is derived from the FEVER dataset using the following processing steps.
+ FEVER (Fact Extraction and VERification) consists of 185,445 claims generated by altering sentences extracted from Wikipedia and subsequently verified without knowledge of the sentences they were derived from.
+ Every sample in the FEVER dataset contains the claim sentence, an evidence (or context) sentence from a Wikipedia URL, and a type label (‘supports’, ‘refutes’, or ‘not enough info’).
+ Of these, we leverage only the samples with the ‘refutes’ label to build our dataset.
+
### Annotations

+ The annotation guidelines are available [here](https://github.com/blitzprecision/FICLE/blob/main/ficle_annotation_guidelines.pdf).
+
+ To provide detailed explanations for inconsistencies, extensive annotations were conducted for each sample in the FICLE dataset. The annotation process involved two iterations, each focusing on different aspects of the data.
+ In the first iteration, the annotations were primarily syntactic-oriented: identifying the inconsistent claim fact triple, marking inconsistent context spans, and categorizing the six possible inconsistent claim components.
+ The second iteration concentrated on semantic-oriented aspects: annotators labeled semantic fields for each sample, such as the type of inconsistency, the coarse inconsistent entity type, and the fine-grained inconsistent entity type.
+ This stage aimed to capture semantic nuances and provide a deeper understanding of the inconsistencies present in the dataset.
+
+ The annotation process was carried out by a group of four annotators, two of whom are also authors of the dataset. The annotators possess a strong command of English, hold Bachelor's degrees in Computer Science with a specialization in computational linguistics, and range in age from 20 to 22 years; their expertise helped ensure accurate and reliable annotations.
+
### Personal and Sensitive Information

## Considerations for Using the Data
 
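The Data Collection and Preprocessing section above describes keeping only the FEVER samples labeled ‘refutes’. Here is a hedged sketch of that filtering step; the Hub dataset id `fever`, its `v1.0` config, and the `label` column with values such as `REFUTES` are assumptions about FEVER's Hub packaging, not part of this commit. The authors' actual pipeline lives in the linked GitHub repo.

```python
# Hedged sketch of the preprocessing described above: keep only FEVER samples
# whose verification label is 'REFUTES'. The dataset id, config name, and
# label values are assumptions about the Hub copy of FEVER.
from datasets import load_dataset

fever_train = load_dataset("fever", "v1.0", split="train")
refuted = fever_train.filter(lambda example: example["label"] == "REFUTES")
print(f"Kept {len(refuted)} of {len(fever_train)} claims labeled REFUTES")
```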