Update README.md
README.md (CHANGED)
@@ -15,8 +15,8 @@ fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pn
 ## Overview

 This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:

-1. It is first trained on a long-context (>7000 to 8192 token range, GPT4 only) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset. This amounts to roughly 110mm tokens, seen twice over two epochs. Airoboros-like training prompt was used. This took ~45 hours.
-2. The model was then finetuned on [Jon Durbin's Airoboros
+1. It is first trained on a long-context (>7000 to 8192 token range, GPT4 only) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset. This amounts to roughly 110mm tokens, seen twice over two epochs. Airoboros-like training prompt was used, with partial NTK scaling applied. This took ~45 hours.
+2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) for 3 epochs. This took ~17 hours.

 **This is a QLoRA fine-tune**.
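The updated item 1 attributes the extended 16384-token window to partial NTK scaling of the rotary position embeddings. As a rough illustration of the underlying idea only (not the exact partial-NTK patch used for this model), the sketch below shows plain NTK-aware scaling, where the rotary base is raised by `alpha ** (dim / (dim - 2))` so that long-range frequency components are stretched while local ones are nearly unchanged. The function name and the `alpha` value are illustrative assumptions.

```python
import torch

def ntk_scaled_rope_frequencies(dim: int = 128,
                                base: float = 10000.0,
                                alpha: float = 4.0) -> torch.Tensor:
    """Illustrative NTK-aware RoPE frequency computation (not the exact
    partial-NTK scaling used for this fine-tune).

    Standard RoPE uses inv_freq = 1 / base**(2i/dim). NTK-aware scaling
    raises the base by alpha**(dim / (dim - 2)) so high-frequency (local)
    components barely change while low-frequency (long-range) components
    are stretched, extending the usable context window.
    """
    scaled_base = base * alpha ** (dim / (dim - 2))
    i = torch.arange(0, dim, 2, dtype=torch.float32)
    return 1.0 / (scaled_base ** (i / dim))

# Compare the lowest-frequency component before and after scaling.
orig = 1.0 / (10000.0 ** (torch.arange(0, 128, 2, dtype=torch.float32) / 128))
scaled = ntk_scaled_rope_frequencies()
print(orig[-1].item(), scaled[-1].item())  # long-range frequency is reduced
```

Recent `transformers` releases expose a related knob on Llama configs via `rope_scaling={"type": "dynamic", "factor": 4.0}`, though reproducing this model's exact partial scaling may still require the author's original patch.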
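Since the README stresses that this is a QLoRA fine-tune, here is a minimal sketch of how such a run is typically set up with `bitsandbytes` 4-bit quantization plus a PEFT LoRA adapter. The base-model id, LoRA rank, and target modules below are illustrative assumptions, not the settings actually used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: the frozen base model is loaded in 4-bit NF4; only small LoRA
# adapter matrices are trained on top of it.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",   # assumed base model id
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Hyperparameters below are placeholders, not the airophin-13b settings.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```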