dipteshkanojia committed
Commit 9fe01f1
1 Parent(s): eed7af5
Files changed (1)
  README.md +31 -1
README.md CHANGED
@@ -70,7 +70,7 @@ Hindi
 
 ### Data Instances
 
- {'id': '0', 'tokens': ['*M0>@(', '8./', '.G', ' !<@8>', 'K', 'O', '2?', 'O', 'G', '(>.', '8G', '>(>', '>$>', '%>', 'd'], 'ner_tags': [0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0]}
+ {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'O', 'कलिंग', 'O', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', ''], 'ner_tags': [0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0]}
 
 ### Data Fields
 
@@ -85,6 +85,36 @@ Hindi
 | original | 76025 | 10861 | 21722|
 | collapsed | 76025 | 10861 | 21722|
 
 
 
70
 
71
  ### Data Instances
72
 
73
+ {'id': '0', 'tokens': ['प्राचीन', 'समय', 'में', 'उड़ीसा', 'को', 'O', 'कलिंग', 'O', 'के', 'नाम', 'से', 'जाना', 'जाता', 'था', ''], 'ner_tags': [0, 0, 0, 3, 0, 0, 3, 0, 0, 0, 0, 0, 0, 0, 0]}
74
 
75
  ### Data Fields
76
 
 
85
  | original | 76025 | 10861 | 21722|
86
  | collapsed | 76025 | 10861 | 21722|
87
 
+ ## About
+
+ This repository contains the Hindi Named Entity Recognition dataset (HiNER) published at the Language Resources and Evaluation Conference (LREC) in 2022.
+
+ ### Recent Updates
+ * Version 0.0.5: HiNER initial release
+
+ ## Usage
+
+ You should have the 'datasets' package installed to be able to use the :rocket: HuggingFace datasets repository. You can install it via pip:
+
+ ```bash
+ pip install datasets
+ ```
+
+ To use the original dataset with all the tags, please use:
+
+ ```python
+ from datasets import load_dataset
+ hiner = load_dataset('cfilt/HiNER-original')
+ ```
+
+ To use the collapsed dataset with only PER, LOC, and ORG tags, please use:
+
+ ```python
+ from datasets import load_dataset
+ hiner = load_dataset('cfilt/HiNER-collapsed')
+ ```
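+
+ As a quick sanity check after loading, you can inspect a single example; a minimal sketch, assuming the standard 'train' split name and the 'tokens'/'ner_tags' fields shown in the Data Instances section above:
+
+ ```python
+ from datasets import load_dataset
+
+ hiner = load_dataset('cfilt/HiNER-collapsed')
+
+ # Each example carries 'id', 'tokens', and 'ner_tags' (integer label ids)
+ sample = hiner['train'][0]
+ print(sample['tokens'])
+ print(sample['ner_tags'])
+
+ # Recover the string labels for the integer tag ids via the ClassLabel feature
+ label_names = hiner['train'].features['ner_tags'].feature.names
+ print([label_names[t] for t in sample['ner_tags']])
+ ```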
+
+ However, the CoNLL format dataset files can also be found in this Git repository under the [data](data/) folder. This dataset can be imported by executing:
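+
+ A minimal reader for such files, assuming the usual CoNLL layout (one whitespace-separated token/tag pair per line, blank lines between sentences; the exact file names under data/ may differ), could look like:
+
+ ```python
+ # Sketch of a CoNLL-style reader; the column layout is an assumption,
+ # not the repository's own loader.
+ def read_conll(path):
+     sentences, tokens, tags = [], [], []
+     with open(path, encoding='utf-8') as f:
+         for line in f:
+             line = line.strip()
+             if not line:  # a blank line marks a sentence boundary
+                 if tokens:
+                     sentences.append({'tokens': tokens, 'ner_tags': tags})
+                     tokens, tags = [], []
+                 continue
+             fields = line.split()
+             tokens.append(fields[0])   # first column: token
+             tags.append(fields[-1])    # last column: NER tag
+     if tokens:  # flush the final sentence if the file lacks a trailing blank line
+         sentences.append({'tokens': tokens, 'ner_tags': tags})
+     return sentences
+
+ sentences = read_conll('data/train.conll')  # hypothetical file name
+ ```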
 
 ## Dataset Creation