
Model Card: Nous-Yarn-Llama-2-13b-128k

Preprint (arXiv)
GitHub

Model Description

Nous-Yarn-Llama-2-13b-128k is a state-of-the-art language model for long context, further pretrained on long-context data for 600 steps.
This model is the Flash Attention 2-patched version of the original model: https://huggingface.co/conceptofmind/Yarn-Llama-2-13b-128k

Note that this model requires the Flash Attention 2 library in order to function correctly; see the Usage and Prompt Format section for installation instructions.

Model Training

Starting from the base Llama 2 models, this model was further pretrained on a subset of the PG19 dataset, allowing it to effectively utilize up to 128k tokens of context.
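As a quick illustration (not part of the original card), the extended context window can be read directly from the model configuration. This is a sketch; the 128k figure in the comment is an expectation based on the description above:

from transformers import AutoConfig

# trust_remote_code is needed because the repo ships custom modeling code.
config = AutoConfig.from_pretrained(
    "NousResearch/Yarn-Llama-2-13b-128k", trust_remote_code=True
)
# Should report a context window on the order of 128k (131072) tokens.
print(config.max_position_embeddings)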

Collaborators

The authors would like to thank Stability AI, Carper AI, and Eleuther AI for their generous support of the significant computing resources that enabled the training of these models and the completion of this research. We would also like to thank Jonathan Tow and Dakota Mahan directly for their help in advising on the use of the Stability AI compute cluster. Additionally, we would like to thank a16z and PygmalionAI for providing resources to run evaluations and experiments on the models.

Usage and Prompt Format

Install FA2 and Rotary Extensions:

pip install flash-attn --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git#subdirectory=csrc/rotary
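
To sanity-check the installation, both extensions should import cleanly (this assumes a CUDA-capable GPU and a matching PyTorch build; the rotary extension's module name is taken from the csrc/rotary build):

python -c "import flash_attn; print(flash_attn.__version__)"
python -c "import rotary_emb"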

There are no specific prompt formats, as this is a pretrained base model; it simply continues the input text.
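
A minimal text-completion sketch follows; the dtype, device placement, and sampling settings are illustrative assumptions rather than recommended settings:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Yarn-Llama-2-13b-128k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code loads the Flash Attention 2 patched modeling code
# shipped with the repository.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# As a base model, it continues whatever raw text it is given.
prompt = "The old lighthouse keeper had seen many storms, but"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))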

Benchmark Results

TODO

Future Plans

We plan to continue training when we have more compute, and to improve the dataset and/or instruct-tune the models to further improve long-context performance.

Model Usage

The model is available for download on the Hugging Face Hub.
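
For example, the weights can be fetched ahead of time with the huggingface_hub client (a sketch; files land in the standard Hugging Face cache by default):

from huggingface_hub import snapshot_download

# Download all files in the model repo and return the local path.
local_path = snapshot_download("NousResearch/Yarn-Llama-2-13b-128k")
print(local_path)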
