---
language:
- en
license: mit
size_categories:
- n<1K
task_categories:
- question-answering
dataset_info:
features:
- name: answer
dtype: float64
- name: question
dtype: string
- name: context
dtype: string
splits:
- name: test
num_bytes: 341222
num_examples: 91
download_size: 155341
dataset_size: 341222
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
tags:
- finance
---
# FinQA: Financial Question Answering Dataset
## Description
The FinQA dataset is designed to support research and development in question answering (QA) over financial texts.
It is a subset of a larger dataset originally created through a collaboration between researchers from the University of Pennsylvania,
J.P. Morgan, and Amazon. The original dataset comprises 8,281 QA pairs built from publicly available earnings reports of S&P 500 companies from 1999 to 2019
([FinQA: A Dataset of Numerical Reasoning over Financial Data](https://arxiv.org/abs/2109.00122)).
This subset, curated by Aiera, consists of 91 QA pairs. Each entry includes a `context`, a `question`, and an `answer`, with each
component manually verified for accuracy and formatting consistency. A walkthrough of the curation process is available on Medium:
[Lessons in Benchmarking FinQA](https://medium.com/@jacqueline.garrahan/lessons-in-benchmarking-finqa-0a5e810b8d15).
## Dataset Structure
### Columns
- `context`: The segment of the financial text (earnings report) where the answer can be found.
- `question`: A question posed regarding the financial text, requiring numerical reasoning or specific information extraction.
- `answer`: The correct answer to the question, extracted or inferred from the context; answers are numeric and stored as `float64` values.
### Data Format
The dataset uses a simple tabular structure in which each row is a unique QA pair with `context`, `question`, and `answer` columns.
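Because the structure is tabular, the split converts cleanly to a pandas DataFrame for quick inspection. A minimal sketch, assuming the `datasets` and `pandas` libraries are installed:
```python
from datasets import load_dataset

# Load the single test split (91 rows) and convert it to a DataFrame
ds = load_dataset("Aiera/finqa-verified", split="test")
df = ds.to_pandas()  # columns: answer (float64), question (str), context (str)

print(df.shape)                           # (91, 3)
print(df[["question", "answer"]].head())
```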
## Use Cases
This dataset is particularly useful for:
- Evaluating models on financial document comprehension and numerical reasoning (a minimal scoring sketch follows this list).
- Developing AI applications aimed at automating financial analysis, such as automated report summarization or advisory tools.
- Enhancing understanding of financial documents through AI-driven interactive tools.
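For the evaluation use case, tolerance-based numeric matching is a natural fit, since every `answer` is stored as a float. A minimal sketch; the `rel_tol` threshold and the `predictions` list are illustrative assumptions, not part of the dataset or any official scoring procedure:
```python
import math

def numeric_match(pred: float, gold: float, rel_tol: float = 1e-2) -> bool:
    """Count a prediction as correct if it falls within 1% relative
    tolerance of the gold answer (threshold chosen for illustration)."""
    return math.isclose(pred, gold, rel_tol=rel_tol)

def accuracy(predictions, golds):
    """Fraction of predictions matching their gold answers.
    `predictions` is a hypothetical list of floats, one per test example."""
    hits = sum(numeric_match(p, g) for p, g in zip(predictions, golds))
    return hits / len(golds)
```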
## Accessing the Dataset
You can load this dataset with the Hugging Face `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("Aiera/finqa-verified")
```
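The call above returns a `DatasetDict` keyed by split; only a `test` split is published. A short usage sketch:
```python
# Access the test split and inspect one verified QA pair
test = dataset["test"]
print(test.num_rows)         # 91

row = test[0]
print(row["question"])
print(row["context"][:200])  # contexts are long excerpts from earnings reports
print(row["answer"])         # a float value
```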
A guide to evaluating with EleutherAI's [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) is available on [GitHub](https://github.com/aiera-inc/aiera-benchmark-tasks).
## Additional Resources
For more insights into the dataset and the methodologies used in its creation, refer to the blog post: [Lessons in Benchmarking FinQA](https://medium.com/@jacqueline.garrahan/lessons-in-benchmarking-finqa-0a5e810b8d15).
## Acknowledgments
This dataset is the result of collaborative efforts by researchers at the University of Pennsylvania, J.P. Morgan, and Amazon, with further contributions and verifications by Aiera. We express our gratitude to all parties involved in the creation and maintenance of this resource.