---
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- en
licenses:
- cc-by-nc-sa-3-0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- multi-class-classification
- sentiment-classification
---
# Dataset Card for financial_phrasebank
## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news) [ResearchGate](https://www.researchgate.net/publication/251231364_FinancialPhraseBank-v10)
- **Repository:**
- **Paper:** [Arxiv](https://arxiv.org/abs/1307.5336)
- **Leaderboard:** [Kaggle](https://www.kaggle.com/ankurzing/sentiment-analysis-for-financial-news/code) [PapersWithCode](https://paperswithcode.com/sota/sentiment-analysis-on-financial-phrasebank)
- **Point of Contact:**
### Dataset Summary
Polar sentiment dataset of sentences from financial news. The dataset consists of 4840 sentences from English-language financial news, categorised by sentiment. The dataset is divided by the agreement rate of the 5-8 annotators who labelled each sentence.
### Supported Tasks and Leaderboards
Multi-class sentiment classification: each sentence is labelled `positive`, `negative`, or `neutral` from an investor's point of view (see the leaderboard links above).
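As a quick, hypothetical illustration (not from the paper), a bag-of-words baseline can be trained on one configuration. The sketch below assumes the `datasets` and `scikit-learn` packages and that the dataset is available on the Hugging Face Hub under the id `financial_phrasebank`:
```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Illustrative baseline, not the method from the paper: TF-IDF features
# plus logistic regression on the unanimous-agreement configuration.
ds = load_dataset("financial_phrasebank", "sentences_allagree")["train"]
X_train, X_test, y_train, y_test = train_test_split(
    ds["sentence"], ds["label"], test_size=0.2, random_state=42
)

vectorizer = TfidfVectorizer(ngram_range=(1, 2), min_df=2)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```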
### Languages
English
## Dataset Structure
### Data Instances
```
{ "sentence": "Pharmaceuticals group Orion Corp reported a fall in its third-quarter earnings that were hit by larger expenditures on R&D and marketing .",
"label": "negative"
}
```
### Data Fields
- `sentence`: a tokenized line from the dataset
- `label`: a label corresponding to the class as a string: `positive`, `negative`, or `neutral`
### Data Splits
There's no train/validation/test split.
However the dataset is available in four possible configurations depending on the percentage of agreement of annotators:
`sentences_50agree`; Number of instances with >=50% annotator agreement: 4846
`sentences_66agree`: Number of instances with >=66% annotator agreement: 4217
`sentences_75agree`: Number of instances with >=75% annotator agreement: 3453
`sentences_allagree`: Number of instances with 100% annotator agreement: 2264
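For example, the four configurations can be loaded and compared with the `datasets` library (a minimal sketch, assuming the dataset id `financial_phrasebank` on the Hugging Face Hub; since there is no official split, a held-out set has to be carved out by the user):
```python
from datasets import load_dataset

# Each configuration differs only in the annotator-agreement threshold;
# all of them expose a single "train" split.
for config in ("sentences_50agree", "sentences_66agree",
               "sentences_75agree", "sentences_allagree"):
    ds = load_dataset("financial_phrasebank", config)
    print(config, ds["train"].num_rows)

# No official split exists, so create a deterministic one yourself.
splits = load_dataset(
    "financial_phrasebank", "sentences_allagree"
)["train"].train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```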
## Dataset Creation
### Curation Rationale
The key arguments for the low utilization of statistical techniques in
financial sentiment analysis have been the difficulty of implementation for
practical applications and the lack of high quality training data for building
such models. Especially in the case of finance and economic texts, annotated
collections are a scarce resource and many are reserved for proprietary use
only. To resolve the missing training data problem, we present a collection of
∼5000 sentences to establish human-annotated standards for benchmarking
alternative modeling techniques.
The objective of the phrase level annotation task was to classify each example
sentence into a positive, negative or neutral category by considering only the
information explicitly available in the given sentence. Since the study is
focused only on financial and economic domains, the annotators were asked to
consider the sentences from the viewpoint of an investor only, i.e. whether
the news may have positive, negative or neutral influence on the stock price.
As a result, sentences which have a sentiment that is not relevant from an
economic or financial perspective are considered neutral.
### Source Data
#### Initial Data Collection and Normalization
The corpus consists of English news on all companies listed on OMX Helsinki. The news was downloaded from the LexisNexis database using an automated web scraper. Out of this news database, a random subset of 10,000 articles was selected to obtain good coverage across small and large companies, companies in different industries, as well as different news sources. Following the approach taken by Maks and Vossen (2010), all sentences which did not contain any of the lexicon entities were excluded. This reduced the overall sample to 53,400 sentences, each containing at least one recognized lexicon entity. The sentences were then classified according to the types of entity sequences detected. Finally, a random sample of ∼5000 sentences was chosen to represent the overall news database.
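A rough sketch of the kind of entity filtering described above; the lexicon entries and the simple substring matching are illustrative assumptions, not the authors' actual pipeline:
```python
# Hypothetical sketch of the lexicon-based sentence filtering described above.
# The lexicon entries and substring matching are illustrative assumptions.
LEXICON = {"Orion Corp", "Nokia", "Fortum"}  # assumed entity lexicon

def contains_lexicon_entity(sentence: str) -> bool:
    """Keep only sentences mentioning at least one recognized entity."""
    return any(entity in sentence for entity in LEXICON)

sentences = [
    "Pharmaceuticals group Orion Corp reported a fall in its earnings .",
    "The weather in Helsinki was unusually warm .",
]
filtered = [s for s in sentences if contains_lexicon_entity(s)]
print(filtered)  # only the Orion Corp sentence survives
```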
#### Who are the source language producers?
The source data was written by various financial journalists.
### Annotations
#### Annotation process
This release of the Financial PhraseBank covers a collection of 4840 sentences. The selected collection of phrases was annotated by 16 people with adequate background knowledge of financial markets.

Given the large number of overlapping annotations (5 to 8 annotations per sentence), there are several ways to define a majority-vote-based gold standard. To provide an objective comparison, four alternative reference datasets were formed based on the strength of majority agreement; these correspond to the four configurations listed under [Data Splits](#data-splits).
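The agreement-thresholded reference sets can be pictured with a small sketch; the example annotations below are hypothetical:
```python
from collections import Counter

def majority(labels):
    """Return the majority label and the fraction of annotators backing it."""
    label, count = Counter(labels).most_common(1)[0]
    return label, count / len(labels)

# Hypothetical annotations: 5-8 sentiment labels per sentence.
annotations = {
    "s1": ["positive"] * 6,                    # unanimous -> all four sets
    "s2": ["negative"] * 4 + ["neutral"] * 2,  # ~67% -> 50agree and 66agree only
}

for threshold, name in [(0.50, "sentences_50agree"), (0.66, "sentences_66agree"),
                        (0.75, "sentences_75agree"), (1.00, "sentences_allagree")]:
    gold = {sid: majority(labels)[0]
            for sid, labels in annotations.items()
            if majority(labels)[1] >= threshold}
    print(name, gold)
```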
#### Who are the annotators?
Three of the annotators were researchers and the remaining 13 annotators were
master's students at Aalto University School of Business with majors primarily
in finance, accounting, and economics.
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
All annotators were from the same institution, so inter-annotator agreement figures should be interpreted with that in mind.
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
License: Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License (CC BY-NC-SA 3.0)
### Citation Information
```
@article{Malo2014GoodDO,
title={Good debt or bad debt: Detecting semantic orientations in economic texts},
author={P. Malo and A. Sinha and P. Korhonen and J. Wallenius and P. Takala},
journal={Journal of the Association for Information Science and Technology},
year={2014},
volume={65}
}
```
### Contributions
Thanks to [@frankier](https://github.com/frankier) for adding this dataset.