---
license: apache-2.0
configs:
- config_name: default
  data_files:
  - split: train
    path: "train.public.merged.json"
  - split: validation
    path: "valid.public.merged.json"
  - split: test
    path: "test.public.merged.json"
  - split: academic
    path: "Academic.public.merged.json"
  - split: ood
    path: "OOD.public.merged.json"
- config_name: paper
  data_files:
  - split: parts
    path:
    - "part_*.public.merged.json"
  - split: academic
    path: "Academic.public.merged.json"
  - split: ood
    path: "OOD.public.merged.json"
---

# TweetNERD - End to End Entity Linking Benchmark for Tweets

[![Dataset DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5013186.svg)](https://doi.org/10.5281/zenodo.5013186) [![arXiv](https://img.shields.io/badge/arXiv-2210.08129-b31b1b.svg)](https://arxiv.org/abs/2210.08129) [![Poster](https://img.shields.io/badge/Poster-Neurips2022-b31b1b.svg)](./Neurips_2022_Poster.pdf) [![Slides](https://img.shields.io/badge/Slides-Neurips2022-b31b1b.svg)](./Neurips_2022_Slides.pdf) [![YouTube Video Views](https://img.shields.io/youtube/views/H5ypIHterWQ?style=social)](https://www.youtube.com/watch?v=H5ypIHterWQ)


This is the *hydrated version* of the dataset described in the paper **TweetNERD - End to End Entity Linking Benchmark for Tweets** ([arXiv:2210.08129](https://arxiv.org/abs/2210.08129)). It includes the Tweet text retrieved via the Twitter API.

> Named Entity Recognition and Disambiguation (NERD) systems are foundational for information retrieval, question answering, event detection, and other natural language processing (NLP) applications. We introduce TweetNERD, a dataset of 340K+ Tweets across 2010-2021, for benchmarking NERD systems on Tweets. This is the largest and most temporally diverse open sourced dataset benchmark for NERD on Tweets and can be used to facilitate research in this area.


TweetNERD dataset is released under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) LICENSE.

The license only applies to the data files present in this dataset. See **Data usage policy** below. 


## Usage

We provide the dataset split across the following tab-separated files:

* **OOD.public.merged.tsv**: OOD split of the data described in the paper.
* **Academic.public.merged.tsv**: Academic split of the data described in the paper.
* `part_*.public.merged.tsv`: Remaining data split into parts in no particular order.

Official train, validation, and test splits:
* `train.public.merged.tsv`: Train split as described in paper based on `part_*` splits.
* `valid.public.merged.tsv`: Validation split as described in paper based on `part_*` splits.
* `test.public.merged.tsv`: Test split as described in paper based on `part_*` splits.



Each file is tab-separated and has the following format:

| tweet_id            | phrase      | start   | end   | entityId    |   score |
|---------------------|-------------|---------|-------|-------------|---------|
| 22                  | [twttr]     | [20]    | [25]  | [Q918]      | [3]     |
| 21                  | [twttr]     | [20]    | [25]  | [Q918]      | [3]     |
| 1457198399032287235 | [Diwali]    | [30]    | [38]  | [Q10244]    | [3]     |
| 1232456079247736833 | [NO_PHRASE] | [-1]    | [-1]  | [NO_ENTITY] | [-1]    |

For Tweets which don't have any entity, the column values for `phrase, start, end, entityId, score` are set to `NO_PHRASE, -1, -1, NO_ENTITY, -1` respectively.

The file columns are described as follows:


| Column   | Type   | Missing Value | Description                                                                                           |
|----------|--------|---------------|-------------------------------------------------------------------------------------------------------|
| tweet_id | string |               | ID of the Tweet                                                                                       |
| phrase   | string | NO_PHRASE     | entity phrase                                                                                         |
| start    | int    | -1            | start offset of the phrase in text using `UTF-16BE` encoding                                          |
| end      | int    | -1            | end offset of the phrase in the text using `UTF-16BE` encoding                                        |
| entityId | string | NO_ENTITY     | Entity ID. If not missing, it is `NOT FOUND`, `AMBIGUOUS`, or a Wikidata ID of the form Q{number}, e.g. Q918 |
| score    | int    | -1            | Number of annotators who agreed on the phrase, start, end, entityId information                       |

To use the dataset, take the `tweet_id` column and retrieve the Tweet text using the [Twitter API](https://developer.twitter.com/en/docs/twitter-api) (see the **Data usage policy** section below).
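As one possible workflow, the tab-separated files can be loaded with pandas and the `NO_PHRASE`/`NO_ENTITY` sentinel rows filtered out. A sketch using an inline sample that mirrors the documented columns (note the real merged files may store list-valued cells, as the bracketed values in the example table suggest):

```python
import io

import pandas as pd

# Inline sample mimicking the documented column layout; in practice you
# would pass a path to one of the *.public.merged.tsv files instead.
sample = (
    "tweet_id\tphrase\tstart\tend\tentityId\tscore\n"
    "22\ttwttr\t20\t25\tQ918\t3\n"
    "1232456079247736833\tNO_PHRASE\t-1\t-1\tNO_ENTITY\t-1\n"
)

# Keep tweet_id as a string so large IDs are not mangled by numeric
# conversion downstream (e.g. when round-tripping through JSON/floats).
df = pd.read_csv(io.StringIO(sample), sep="\t", dtype={"tweet_id": str})

# Rows whose entityId is the NO_ENTITY sentinel carry no annotation.
annotated = df[df["entityId"] != "NO_ENTITY"]
print(annotated[["tweet_id", "phrase", "entityId"]])
```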



## Data stats

| Split    |   Number of Rows |   Number unique tweets | Number hydrated tweets |
|:---------|-----------------:|-----------------------:|-----------------------:|
| OOD      |            34102 |                  25000 |                  20937 |
| Academic |            51685 |                  30119 |                  28694 |
| part_0   |            11830 |                  10000 |                   6633 |
| part_1   |            35681 |                  25799 |                  19181 |
| part_2   |            34256 |                  25000 |                  19876 |
| part_3   |            36478 |                  25000 |                  20611 |
| part_4   |            37518 |                  24999 |                  20567 |
| part_5   |            36626 |                  25000 |                  20667 |
| part_6   |            34001 |                  24984 |                  20948 |
| part_7   |            34125 |                  24981 |                  20612 |
| part_8   |            32556 |                  25000 |                  20610 |
| part_9   |            32657 |                  25000 |                  21000 |
| part_10  |            32442 |                  25000 |                  20597 |
| part_11  |            32033 |                  24972 |                  20583 |

| Split    |   Number of Rows |   Number unique tweets | Number hydrated tweets |
|:---------|-----------------:|-----------------------:|-----------------------:|
| train    |           349252 |                 255490 |                 207278 |
| valid    |             6822 |                   5000 |                   4128 |
| test     |            34129 |                  25000 |                  20274 |


File Stats are as follows:

| part     	| output_file                 	| orig_rows 	| unique_tweet_ids 	| final_rows 	|
|----------:|------------------------------:|------------:|------------------:|------------:|
| Academic 	| Academic.public.merged.json 	| 51685     	| 30119            	| 28694      	|
| OOD      	| OOD.public.merged.json      	| 34102     	| 25000            	| 20937      	|
| part_0   	| part_0.public.merged.json   	| 11830     	| 10000            	| 6633       	|
| part_1   	| part_1.public.merged.json   	| 35681     	| 25799            	| 19181      	|
| part_10  	| part_10.public.merged.json  	| 32442     	| 25000            	| 20597      	|
| part_11  	| part_11.public.merged.json  	| 32033     	| 24972            	| 20583      	|
| part_2   	| part_2.public.merged.json   	| 34256     	| 25000            	| 19876      	|
| part_3   	| part_3.public.merged.json   	| 36478     	| 25000            	| 20611      	|
| part_4   	| part_4.public.merged.json   	| 37518     	| 24999            	| 20567      	|
| part_5   	| part_5.public.merged.json   	| 36626     	| 25000            	| 20667      	|
| part_6   	| part_6.public.merged.json   	| 34001     	| 24984            	| 20948      	|
| part_7   	| part_7.public.merged.json   	| 34125     	| 24981            	| 20612      	|
| part_8   	| part_8.public.merged.json   	| 32556     	| 25000            	| 20610      	|
| part_9   	| part_9.public.merged.json   	| 32657     	| 25000            	| 21000      	|
| test     	| test.public.merged.json     	| 34129     	| 25000            	| 20274      	|
| train    	| train.public.merged.json    	| 349252    	| 255490           	| 207278     	|
| valid    	| valid.public.merged.json    	| 6822      	| 5000             	| 4128       	|

## Data usage policy

Use of this dataset is subject to you obtaining lawful access to the [Twitter API](https://developer.twitter.com/en/docs/twitter-api), which requires you to agree to the [Developer Terms Policies and Agreements](https://developer.twitter.com/en/developer-terms/).


Cite as:

> Mishra, Shubhanshu, Saini, Aman, Makki, Raheleh, Mehta, Sneha, Haghighi, Aria, & Mollahosseini, Ali. (2022). TweetNERD - End to End Entity Linking Benchmark for Tweets (0.0.0) [Data set]. Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks (Neurips), New Orleans, LA, USA. Zenodo. https://doi.org/10.5281/zenodo.6617192
> Mishra, S., Saini, A., Makki, R., Mehta, S., Haghighi, A., & Mollahosseini, A. (2022). TweetNERD -- End to End Entity Linking Benchmark for Tweets (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2210.08129


Bibtex:

```bibtex
@inproceedings{TweetNERD,
  doi       = {10.48550/ARXIV.2210.08129},
  url       = {https://arxiv.org/abs/2210.08129},
  author    = {Mishra, Shubhanshu and Saini, Aman and Makki, Raheleh and Mehta, Sneha and Haghighi, Aria and Mollahosseini, Ali},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Information Retrieval (cs.IR), Machine Learning (cs.LG), FOS: Computer and information sciences, I.2.7, 68T50, 68T07},
  title     = {{TweetNERD} -- {End to End Entity Linking Benchmark for Tweets}},
  booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 2 (NeurIPS Datasets and Benchmarks 2022)},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}

@dataset{mishra_shubhanshu_2022_6617192,
  author       = {Mishra, Shubhanshu and
                  Saini, Aman and
                  Makki, Raheleh and
                  Mehta, Sneha and
                  Haghighi, Aria and
                  Mollahosseini, Ali},
  title        = {{TweetNERD - End to End Entity Linking Benchmark
                   for Tweets}},
  month        = jun,
  year         = 2022,
  note         = {{Data usage policy: Use of this dataset is subject
                   to you obtaining lawful access to the [Twitter
                   API](https://developer.twitter.com/en/docs/twitter-api),
                   which requires you to agree to the [Developer Terms
                   Policies and
                   Agreements](https://developer.twitter.com/en/developer-terms/).}},
  publisher    = {Zenodo},
  version      = {0.0.0},
  doi          = {10.5281/zenodo.6617192},
  url          = {https://doi.org/10.5281/zenodo.6617192}
}
```