bhenrym14 committed
Commit 15219c9
Parent: 1d4a561

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -15,8 +15,8 @@ fp16 weights can be found here: https://huggingface.co/bhenrym14/airophin-13b-pn
## Overview

This is a finetune of Llama-2-13b, intended to extend the useful context window to 16384 tokens. There are two training phases:
- 1. It is first trained on a long-context (>7000 to 8192 token range, GPT4 only) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset. This amounts to roughly 110mm tokens, seen twice over two epochs. Airoboros-like training prompt was used. This took ~45 hours.
- 2. The model was then finetuned on [Jon Durbin's Airoboros 13B GPT4 1.4](https://huggingface.co/jondurbin/airoboros-13b-gpt4-1.4) for 3 epochs. This took ~17 hours.
+ 1. It is first trained on a long-context (>7000 to 8192 token range, GPT4 only) subset of [dolphin](https://huggingface.co/datasets/ehartford/dolphin), an orca-like dataset. This amounts to roughly 110mm tokens, seen twice over two epochs. Airoboros-like training prompt was used, with partial NTK scaling applied. This took ~45 hours.
+ 2. The model was then finetuned on [Jon Durbin's Airoboros GPT4 1.4.1](https://huggingface.co/datasets/jondurbin/airoboros-gpt4-1.4.1) for 3 epochs. This took ~17 hours.

**This is a QLoRA fine-tune**.
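For context on the "partial NTK scaling" referenced in the updated step 1: NTK-aware methods stretch the RoPE base frequency so a model pretrained at 4096 tokens can attend over a longer window (16384 here) without learning position embeddings from scratch. The sketch below shows the simple, non-partial NTK-aware base adjustment; the function name and the scale factor `alpha` are illustrative assumptions, and the "partial" variant used for this model applies the scaling per frequency band rather than uniformly.

```python
import torch

def ntk_scaled_rope_inv_freq(dim: int, base: float = 10000.0,
                             alpha: float = 4.0) -> torch.Tensor:
    """Inverse RoPE frequencies with NTK-aware base scaling (a sketch).

    alpha ~= target_context / pretrained_context (16384 / 4096 = 4 here)
    is an illustrative choice, not the model's exact schedule.
    """
    # Stretching the base by alpha**(dim / (dim - 2)) interpolates the
    # low-frequency (long-range) dimensions while leaving the
    # high-frequency (local) dimensions nearly untouched.
    scaled_base = base * alpha ** (dim / (dim - 2))
    return 1.0 / (scaled_base ** (torch.arange(0, dim, 2).float() / dim))
```

At inference time, recent transformers releases expose a related knob via `rope_scaling={"type": "dynamic", "factor": 4.0}` on `LlamaConfig`; note this is not identical to the partial NTK scheme trained into these weights.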
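Likewise, since the card stresses that this is a QLoRA fine-tune, here is a minimal sketch of a standard QLoRA setup (4-bit NF4 base model via bitsandbytes, trainable LoRA adapters via peft). The LoRA rank, alpha, dropout, and target modules are assumptions for illustration, not the configuration actually used for this model.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# The "Q" in QLoRA: freeze the base model in 4-bit NF4 quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model matches the card (Llama-2-13b); the LoRA hyperparameters
# below are illustrative assumptions.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# The "LoRA" half: small trainable adapters on the attention projections.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights update
```

Training then proceeds as an ordinary causal-LM loop over the dolphin subset and the Airoboros data, with gradients flowing only into the adapter weights.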