---
license: llama3
datasets:
  - princeton-nlp/prolong-data-64K
  - princeton-nlp/prolong-data-512K
  - HuggingFaceH4/ultrachat_200k
base_model:
  - princeton-nlp/Llama-3-8B-ProLong-512k-Base
library_name: transformers
---

# princeton-nlp/Llama-3-8B-ProLong-512k-Instruct

[Paper] [HF Collection] [Code]

ProLong (Princeton long-context language models) is a family of long-context models obtained by continued training and supervised fine-tuning of Llama-3-8B, with a maximum context window of 512K tokens. Our main ProLong model is one of the best-performing long-context models at the 10B scale, as evaluated by HELMET.

To train this strong long-context model, we conduct thorough ablations on the long-context pre-training data, the SFT data, and numerous other design choices. We present our findings in our paper, How to Train Long-Context Language Models (Effectively).

Authors: Tianyu Gao*, Alexander Wettig*, Howard Yen, Danqi Chen (* equal contribution)

Contact: {tianyug, awettig}@princeton.edu

## The ProLong Models

## Model card

Here are some quick facts about our main ProLong model: princeton-nlp/Llama-3-8B-ProLong-512k-Instruct.

Figure: ProLong performance on HELMET, averaged over the 32K, 64K, and 128K evaluation lengths. All models shown are instruct models.

Figure: The ProLong training recipe.
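
The card itself does not include a usage snippet; below is a minimal sketch of how this instruct model could be loaded and queried with the standard Hugging Face transformers API. The dtype, device placement, prompt, and generation settings are illustrative assumptions, not the authors' recommended configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "princeton-nlp/Llama-3-8B-ProLong-512k-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep the 8B model's memory footprint manageable
    device_map="auto",
)

# The model is instruction-tuned (SFT data includes UltraChat), so format the
# prompt with the tokenizer's chat template before generation.
messages = [{"role": "user", "content": "Summarize the following document: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For genuinely long inputs (up to the 512K-token context window), serving frameworks or attention optimizations such as FlashAttention are typically needed to keep memory usage practical; the plain generate call above is only meant to show the basic interface.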

## Citation

@article{gao2024prolong,
    title={How to Train Long-Context Language Models (Effectively)},
    author={Gao, Tianyu and Wettig, Alexander and Yen, Howard and Chen, Danqi},
    year={2024},
}