Update README.md
‡: Due to the restrictions of the LLaMA models, we can only publicly release the weight difference between LongForm-LLaMA-7B and the pretrained LLaMA-7B.

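For illustration only, here is a minimal sketch of how such a weight difference could be applied to the pretrained LLaMA-7B weights to recover LongForm-LLaMA-7B. The file paths and the assumption that the diff is a state dict of per-parameter delta tensors are hypothetical; this is not the repository's official recovery script.

```python
# Hypothetical sketch: recover LongForm-LLaMA-7B by adding the released
# per-parameter weight diff to the original pretrained LLaMA-7B weights.
# Paths and the diff file layout are assumptions, not the official tooling.
import torch
from transformers import AutoModelForCausalLM

# Load the original pretrained LLaMA-7B (obtained separately).
model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b")

# Load the released diff: assumed to map parameter names to delta tensors.
diff = torch.load("path/to/longform-llama-7b-diff.pt", map_location="cpu")

# Add each delta to the corresponding base parameter:
# base + (finetuned - base) = finetuned.
state = model.state_dict()
for name, delta in diff.items():
    state[name] = state[name] + delta

model.load_state_dict(state)
model.save_pretrained("longform-llama-7b")
```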
## Limitations

The LongForm dataset and models mainly focus on long text generation and therefore have limitations on structured prediction tasks in NLP. Additionally, we observe that LongForm models may exhibit hallucination problems similar to those found in LLMs.

## Citation

```
@misc{koksal2023longform,
      title={LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction},
      author={Abdullatif Köksal and Timo Schick and Anna Korhonen and Hinrich Schütze},
      year={2023},
      eprint={2304.08460},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```