newsbart-base

Paper: [Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study](https://arxiv.org/abs/2212.10233)

```bibtex
@article{wu2022pretrained,
  title   = {Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study},
  author  = {Wu, Di and Ahmad, Wasi Uddin and Chang, Kai-Wei},
  journal = {arXiv preprint arXiv:2212.10233},
  year    = {2022},
  url     = {https://arxiv.org/abs/2212.10233},
  doi     = {10.48550/arXiv.2212.10233}
}
```

Pre-training Corpus: RealNews

Pre-training Details:

- Initialized from `facebook/bart-base`
- Batch size: 2048
- Total steps: 250k
- Learning rate: 3e-4
- LR schedule: polynomial decay with 10k warmup steps
- Masking: 30% of tokens, with span lengths drawn from Poisson(λ = 3.5) (illustrated in the sketch below)
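For illustration, here is a minimal Python sketch of the text-infilling noising described by the last bullet. It is a simplified version of BART's scheme (the original fairseq implementation handles span boundaries, adjacent masks, and subword units more carefully); the function name and token choice are ours:

```python
import random
import numpy as np

def text_infilling(tokens, mask_ratio=0.30, poisson_lambda=3.5, mask_token="<mask>"):
    """Simplified BART-style text infilling.

    Spans with Poisson-distributed lengths are replaced by a single
    mask token until roughly `mask_ratio` of the input is masked.
    """
    tokens = list(tokens)
    num_to_mask = max(1, int(round(len(tokens) * mask_ratio)))
    masked = 0
    while masked < num_to_mask and len(tokens) > 1:
        span_len = int(np.random.poisson(poisson_lambda))
        # Clip so at least one original token survives each step.
        span_len = min(span_len, num_to_mask - masked, len(tokens) - 1)
        start = random.randrange(len(tokens) - span_len + 1)
        # Length-0 spans insert a mask between tokens, as in BART.
        tokens[start:start + span_len] = [mask_token]
        masked += max(span_len, 1)  # count 0-length spans to guarantee progress
    return tokens

print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```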
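The resulting checkpoint can be loaded like any BART model with Hugging Face transformers. A minimal sketch, assuming the hub id `xiaowu0162/newsbart-base` matches this repository's path:

```python
from transformers import AutoTokenizer, BartForConditionalGeneration

# Hub id assumed from this page's path; adjust if it differs.
model_name = "xiaowu0162/newsbart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Denoising example: the model was pre-trained to fill in masked spans.
text = "CNN reports that <mask> won the championship last night."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```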