danielpreotiuc committed
Commit b07d250
1 Parent(s): 1558abe
Update README.md

README.md CHANGED

@@ -3,7 +3,7 @@ license: apache-2.0
 ---
 
 # KeyBART
-KeyBART as described in Learning Rich Representations of Keyphrase from Text (https://
+KeyBART, described in "Learning Rich Representation of Keyphrases from Text" published in Findings of NAACL 2022 (https://aclanthology.org/2022.findings-naacl.67.pdf), pre-trains a BART-based architecture to produce a concatenated sequence of keyphrases in the CatSeqD format.
 
 We provide some examples of Downstream Evaluation setups and also show how it can be used for Text-to-Text Generation in a zero-shot setting.
 
@@ -102,5 +102,25 @@ Output: language model;keyphrase generation;new pre-training objective;pre-train
 
 ```
 
+## Citation
+
+Please cite this work using the following BibTeX entry:
+
+@inproceedings{kulkarni-etal-2022-learning,
+    title = "Learning Rich Representation of Keyphrases from Text",
+    author = "Kulkarni, Mayank and
+      Mahata, Debanjan and
+      Arora, Ravneet and
+      Bhowmik, Rajarshi",
+    booktitle = "Findings of the Association for Computational Linguistics: NAACL 2022",
+    month = jul,
+    year = "2022",
+    address = "Seattle, United States",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2022.findings-naacl.67",
+    doi = "10.18653/v1/2022.findings-naacl.67",
+    pages = "891--906",
+    abstract = "In this work, we explore how to train task-specific language models aimed towards learning rich representation of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in discriminative as well as generative settings. In the discriminative setting, we introduce a new pre-training objective - Keyphrase Boundary Infilling with Replacement (KBIR), showing large gains in performance (upto 8.16 points in F1) over SOTA, when the LM pre-trained using KBIR is fine-tuned for the task of keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART - KeyBART, that reproduces the keyphrases related to the input text in the CatSeq format, instead of the denoised original input. This also led to gains in performance (upto 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we also fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), abstractive summarization and achieve comparable performance with that of the SOTA, showing that learning rich representation of keyphrases is indeed beneficial for many other fundamental NLP tasks.",
+}
 
 Please direct all questions to dpreotiucpie@bloomberg.net
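
Beyond the diff itself, the zero-shot Text-to-Text Generation usage that the README refers to can be sketched with the standard `transformers` seq2seq API. The snippet below is a minimal, illustrative sketch rather than the README's own example: the model identifier `bloomberg/KeyBART`, the input text, and the generation settings are assumptions, while the semicolon-separated output format follows the `Output: language model;keyphrase generation;...` sample visible in the second hunk header.

```python
# Minimal zero-shot keyphrase generation sketch (illustrative, not the README's own example).
# Assumption: the checkpoint is published as "bloomberg/KeyBART" on the Hugging Face Hub;
# adjust the identifier if the weights live elsewhere.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "bloomberg/KeyBART"  # assumed Hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Any input document works; this reuses a sentence from the paper abstract.
text = (
    "In this work, we explore how to train task-specific language models "
    "aimed towards learning rich representation of keyphrases from text documents."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
# Generation settings are illustrative defaults, not values taken from the README.
outputs = model.generate(**inputs, max_length=64, num_beams=5, early_stopping=True)

# KeyBART is pre-trained to emit one concatenated sequence of keyphrases
# (CatSeqD style), e.g. "language model;keyphrase generation;...",
# so the decoded string is split on ";" to recover individual keyphrases.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print([kp.strip() for kp in decoded.split(";") if kp.strip()])
```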