tags:
- finance
---

# FinQA: Financial Question Answering Dataset

## Description

The FinQA dataset is designed to facilitate research and development in question answering (QA) over financial texts. It consists of a subset of QA pairs from a larger dataset, originally created through a collaboration between researchers from the University of Pennsylvania, J.P. Morgan, and Amazon. The original dataset includes 8,281 QA pairs built against publicly available earnings reports of S&P 500 companies from 1999 to 2019 ([FinQA: A Dataset of Numerical Reasoning over Financial Data](https://arxiv.org/pdf/2305.05862)).

This subset, curated by Aiera, consists of 91 QA pairs. Each entry includes a `context`, a `question`, and an `answer`, with each component manually verified for accuracy and formatting consistency. A walkthrough of the curation process is available on Medium [here](https://medium.com/@jacqueline.garrahan/lessons-in-benchmarking-finqa-0a5e810b8d15).

## Dataset Structure

### Columns

- `context`: The segment of the financial text (earnings report) where the answer can be found.
- `question`: A question posed regarding the financial text, requiring numerical reasoning or specific information extraction.
- `answer`: The correct answer to the question, extracted or inferred from the context.

### Data Format

The dataset is formatted as a simple table in which each row represents a unique QA pair with `context`, `question`, and `answer` columns.
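
As a rough illustration, a single row can be pictured as a Python dict with three string fields. The values below are invented placeholders, not actual entries from the dataset:

```python
# Illustrative sketch of one row's schema; the values are invented
# placeholders, not real dataset entries.
row = {
    "context": "Revenue was $120.0 million in 2018 and $100.0 million in 2017.",
    "question": "What was the percentage change in revenue from 2017 to 2018?",
    "answer": "20%",
}

# Every row exposes the same three string-valued fields.
assert set(row) == {"context", "question", "answer"}
```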

## Use Cases

This dataset is particularly useful for:
- Evaluating models on financial document comprehension and numerical reasoning.
- Developing AI applications aimed at automating financial analysis, such as automated report summarization or advisory tools.
- Enhancing understanding of financial documents through AI-driven interactive tools.
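
For evaluation-style use cases, each row can be rendered into a model prompt. The sketch below is one possible approach, not part of the dataset itself: the `format_prompt` helper is hypothetical, and the example row uses invented values rather than a real entry:

```python
def format_prompt(row: dict) -> str:
    # Hypothetical helper: render one QA pair as an evaluation prompt,
    # leaving the answer for the model to produce.
    return (
        f"Context:\n{row['context']}\n\n"
        f"Question: {row['question']}\n"
        f"Answer:"
    )

# Stand-in row with invented values; real rows come from the dataset.
example = {
    "context": "Net income rose from $50 million in 2017 to $60 million in 2018.",
    "question": "By how much did net income increase?",
    "answer": "$10 million",
}

prompt = format_prompt(example)
```

The gold `answer` column is kept aside for scoring the model's completion.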

## Accessing the Dataset

You can load this dataset with the Hugging Face `datasets` library using the following Python code:

```python
from datasets import load_dataset
dataset = load_dataset("Aiera/finqa-verified")
```

## Additional Resources

For more insights into the dataset and the methodologies used in its creation, refer to the blog post: [Lessons in Benchmarking FinQA](https://medium.com/@jacqueline.garrahan/lessons-in-benchmarking-finqa-0a5e810b8d15).

## Acknowledgments

This dataset is the result of collaborative efforts by researchers at the University of Pennsylvania, J.P. Morgan, and Amazon, with further contributions and verification by Aiera. We express our gratitude to all parties involved in the creation and maintenance of this resource.