---
language:
- en
paperswithcode_id: sentiment140
pretty_name: Sentiment140
train-eval-index:
- config: sentiment140
  task: text-classification
  task_id: multi_class_classification
  splits:
    train_split: train
    eval_split: test
  col_mapping:
    text: text
    sentiment: target
  metrics:
  - type: accuracy
    name: Accuracy
  - type: f1
    name: F1 macro
    args:
      average: macro
  - type: f1
    name: F1 micro
    args:
      average: micro
  - type: f1
    name: F1 weighted
    args:
      average: weighted
  - type: precision
    name: Precision macro
    args:
      average: macro
  - type: precision
    name: Precision micro
    args:
      average: micro
  - type: precision
    name: Precision weighted
    args:
      average: weighted
  - type: recall
    name: Recall macro
    args:
      average: macro
  - type: recall
    name: Recall micro
    args:
      average: micro
  - type: recall
    name: Recall weighted
    args:
      average: weighted
dataset_info:
  features:
  - name: text
    dtype: string
  - name: date
    dtype: string
  - name: user
    dtype: string
  - name: sentiment
    dtype: int32
  - name: query
    dtype: string
  config_name: sentiment140
  splits:
  - name: test
    num_bytes: 73365
    num_examples: 498
  - name: train
    num_bytes: 225742946
    num_examples: 1600000
  download_size: 81363704
  dataset_size: 225816311
---
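The `train-eval-index` metadata requests precision, recall, and F1 under macro, micro, and weighted averaging. As a rough illustration of what those averaging modes mean (a pure-Python sketch of precision only; actual Hub evaluation relies on standard metric implementations, and these helper names are made up):

```python
from collections import Counter

def precision_per_class(y_true, y_pred):
    """Per-class precision: TP / (TP + FP), 0.0 if the class is never predicted."""
    labels = sorted(set(y_true) | set(y_pred))
    out = {}
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if p == c and t != c)
        out[c] = tp / (tp + fp) if (tp + fp) else 0.0
    return out

def macro_precision(y_true, y_pred):
    """Unweighted mean over classes: every class counts equally."""
    per_class = precision_per_class(y_true, y_pred)
    return sum(per_class.values()) / len(per_class)

def weighted_precision(y_true, y_pred):
    """Mean over classes, weighted by each class's support in y_true."""
    per_class = precision_per_class(y_true, y_pred)
    support = Counter(y_true)
    n = len(y_true)
    return sum(p * support[c] / n for c, p in per_class.items())

# Micro-averaging pools all individual decisions instead; for single-label
# multiclass classification, micro precision equals plain accuracy.
y_true = [0, 0, 0, 4]
y_pred = [0, 0, 4, 4]
print(macro_precision(y_true, y_pred))     # 0.75
print(weighted_precision(y_true, y_pred))  # 0.875
```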
Dataset Card for "sentiment140"
Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: http://help.sentiment140.com/home
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 81.36 MB
- Size of the generated dataset: 225.82 MB
- Total amount of disk used: 307.18 MB
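The sizes above are quoted in base-10 megabytes (1 MB = 10^6 bytes). A quick sanity check against the raw byte counts in the card's `dataset_info` metadata (`to_mb` is just an illustrative helper):

```python
def to_mb(num_bytes: int) -> float:
    """Convert a byte count to base-10 megabytes, rounded to 2 decimals."""
    return round(num_bytes / 10**6, 2)

download_size = 81_363_704   # download_size from the metadata
dataset_size = 225_816_311   # dataset_size (73365 test + 225742946 train bytes)

print(to_mb(download_size))                 # 81.36
print(to_mb(dataset_size))                  # 225.82
print(to_mb(download_size + dataset_size))  # 307.18
```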
Dataset Summary
Sentiment140 consists of Twitter messages with emoticons, which are used as noisy labels for sentiment classification. For more detailed information please refer to the paper.
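The distant-supervision idea described above can be sketched as follows: an emoticon in a tweet serves as a noisy sentiment label and is stripped from the text before training. This is a simplified illustration, not the authors' actual pipeline; the emoticon lists are partial, the helper name is made up, and the 0 = negative / 4 = positive values follow the dataset's conventional `sentiment` encoding.

```python
# Illustrative (partial) emoticon sets; the paper's full lists are longer.
POSITIVE_EMOTICONS = {":)", ":-)", ": )", ":D", "=)"}
NEGATIVE_EMOTICONS = {":(", ":-(", ": ("}

def noisy_label(tweet: str):
    """Return (label, cleaned_text), or None if no emoticon is found.

    Label 4 = positive, 0 = negative, matching the dataset's
    conventional `sentiment` field encoding.
    """
    for emo in POSITIVE_EMOTICONS:
        if emo in tweet:
            return 4, tweet.replace(emo, "").strip()
    for emo in NEGATIVE_EMOTICONS:
        if emo in tweet:
            return 0, tweet.replace(emo, "").strip()
    return None  # no emoticon -> tweet is not auto-labeled

print(noisy_label("loving the new phone :)"))  # (4, 'loving the new phone')
```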
Supported Tasks and Leaderboards
Languages
Dataset Structure
Data Instances
sentiment140
- Size of downloaded dataset files: 81.36 MB
- Size of the generated dataset: 225.82 MB
- Total amount of disk used: 307.18 MB
An example of 'train' looks as follows.
{
    "date": "23-04-2010",
    "query": "NO_QUERY",
    "sentiment": 3,
    "text": "train message",
    "user": "train user"
}
Data Fields
The data fields are the same among all splits.
sentiment140
- text: a string feature.
- date: a string feature.
- user: a string feature.
- sentiment: an int32 feature.
- query: a string feature.
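The `sentiment` field is stored as a raw int32 rather than a named class label. Under the conventional Sentiment140 encoding (0 = negative, 2 = neutral, 4 = positive), decoding to names is left to the user; a minimal sketch (`sentiment_name` is an illustrative helper, not part of the `datasets` API):

```python
# Conventional Sentiment140 label encoding; the dataset itself stores raw ints.
SENTIMENT_NAMES = {0: "negative", 2: "neutral", 4: "positive"}

def sentiment_name(value: int) -> str:
    """Map a raw `sentiment` int to a human-readable name."""
    return SENTIMENT_NAMES.get(value, "unknown")

print(sentiment_name(0))  # negative
print(sentiment_name(4))  # positive
```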
Data Splits
| name | train | test |
|---|---|---|
| sentiment140 | 1600000 | 498 |
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@article{go2009twitter,
title={Twitter sentiment classification using distant supervision},
author={Go, Alec and Bhayani, Richa and Huang, Lei},
journal={CS224N project report, Stanford},
volume={1},
number={12},
pages={2009},
year={2009}
}
Contributions
Thanks to @patrickvonplaten, @thomwolf for adding this dataset.