manueltonneau committed
Commit 4b748e1 • 1 Parent(s): db6b869
Create README.md

README.md ADDED
---
language:
- en
- es
- pt
size_categories:
- 10K<n<100K
task_categories:
- text-classification
---
# Dataset Card for Twitter Labor Market

<!-- Provide a quick summary of the dataset. -->

Twitter Labor Market is a dataset of tweets annotated for inferring the employment status of Twitter users as well as for detecting job offers in tweets. It contains 28,034 annotated tweets, including 8,876 tweets from US-based users, 12,202 tweets from Mexican users, and 7,156 tweets from Brazilian users. For a complete description of the data, please refer to the [reference paper](https://aclanthology.org/2022.acl-long.453/).
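A minimal loading sketch with the `datasets` library is shown below; the repository id and split name are assumptions and may differ from the actual Hub configuration.

```python
# Sketch only: the repository id and split name below are assumptions,
# not confirmed identifiers for this dataset.
from datasets import load_dataset

dataset = load_dataset("manueltonneau/twitter-labor-market")  # hypothetical repo id
print(dataset)              # lists the available splits and columns
print(dataset["train"][0])  # inspect one annotated tweet, assuming a "train" split exists
```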

## Source Data

This dataset was sourced from a large collection of tweets posted between January 2007 and December 2020. These tweets correspond to the timelines of users with at least one tweet in the Twitter Decahose and with an inferred profile location in the United States, Brazil, or Mexico.

## Annotation

We recruited a team of MTurk crowdworkers for each country. Each tweet was annotated as belonging or not to each of five binary classes:
- `is_unemployed`: whether the tweet indicates that its author is unemployed (`1` if yes and `0` if no)
- `is_hired_1mo`: whether the tweet indicates that its author was hired in the past month (`1` if yes and `0` if no)
- `lost_job_1mo`: whether the tweet indicates that its author lost their job in the past month (`1` if yes and `0` if no)
- `job_search`: whether the tweet indicates that its author is looking for a job (`1` if yes and `0` if no)
- `job_offer`: whether the tweet contains a job offer (`1` if yes and `0` if no)

Each tweet was labeled by two annotators. A final label was assigned to a tweet only when at least two annotators gave the same answer; tweets with disagreements were dropped.
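Since all five classes are binary, a per-class count of positive tweets can be computed directly once the dataset is loaded. The snippet below is a sketch that assumes the labels are stored as integer columns named exactly as above in a single split; the repository id, split name, and column types are assumptions.

```python
# Sketch: per-class positive counts and single-class filtering.
# Repository id, split name, and column names/types are assumptions.
from datasets import load_dataset

dataset = load_dataset("manueltonneau/twitter-labor-market", split="train")  # hypothetical

label_columns = ["is_unemployed", "is_hired_1mo", "lost_job_1mo", "job_search", "job_offer"]
for column in label_columns:
    positives = sum(int(value) == 1 for value in dataset[column])
    print(f"{column}: {positives} positive tweets out of {len(dataset)}")

# Keep only tweets annotated as job searches, e.g. for a single binary classifier.
job_search_tweets = dataset.filter(lambda example: int(example["job_search"]) == 1)
print(len(job_search_tweets))
```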

## BibTeX entry and citation information

Please cite the [reference paper](https://aclanthology.org/2022.acl-long.453/) if you use this dataset.
```bibtex
@inproceedings{tonneau-etal-2022-multilingual,
    title = "Multilingual Detection of Personal Employment Status on {T}witter",
    author = "Tonneau, Manuel and
      Adjodah, Dhaval and
      Palotti, Joao and
      Grinberg, Nir and
      Fraiberger, Samuel",
    editor = "Muresan, Smaranda and
      Nakov, Preslav and
      Villavicencio, Aline",
    booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.acl-long.453",
    doi = "10.18653/v1/2022.acl-long.453",
    pages = "6564--6587",
    abstract = "Detecting disclosures of individuals{'} employment status on social media can provide valuable information to match job seekers with suitable vacancies, offer social protection, or measure labor market flows. However, identifying such personal disclosures is a challenging task due to their rarity in a sea of social media content and the variety of linguistic forms used to describe them. Here, we examine three Active Learning (AL) strategies in real-world settings of extreme class imbalance, and identify five types of disclosures about individuals{'} employment status (e.g. job loss) in three languages using BERT-based classification models. Our findings show that, even under extreme imbalance settings, a small number of AL iterations is sufficient to obtain large and significant gains in precision, recall, and diversity of results compared to a supervised baseline with the same number of labels. We also find that no AL strategy consistently outperforms the rest. Qualitative analysis suggests that AL helps focus the attention mechanism of BERT on core terms and adjust the boundaries of semantic expansion, highlighting the importance of interpretable models to provide greater control and visibility into this dynamic learning process.",
}
```