bhenrym14 committed on
Commit 147af85 · 1 Parent(s): 5db64ea

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -16,7 +16,7 @@ This is [Jon Durbin's Airoboros 13B GPT4 1.4](https://huggingface.co/jondurbin/a
 - Used airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4
 - **This is a QLoRA fine-tune**. The original 13b model is a full fine-tune.
 
- Otherwise, I emulated the training process as closely as possible (rank 64 QLoRA). It was trained on 1x RTX 6000 Ada for ~18 hours.
+ It was trained on 1x RTX 6000 Ada for ~18 hours.
 
 ## How to Use
 The easiest way is to use [oobabooga text-generation-webui](https://github.com/oobabooga/text-generation-webui) with ExLlama. You'll need to set max_seq_len to 8192 and compress_pos_emb to 4.
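For reference, the max_seq_len and compress_pos_emb settings mentioned in the README's "How to Use" section map onto the standalone exllama library (turboderp/exllama), which text-generation-webui's ExLlama loader builds on. Below is a minimal sketch only: it assumes the script lives in a checkout of that repo (hence the top-level `model`, `tokenizer`, and `generator` imports from its example scripts), and the model directory path is hypothetical.

```python
# Sketch only: loading a GPTQ model with an extended context window via
# exllama's positional-embedding compression. Assumes a checkout of
# turboderp/exllama; the model directory below is hypothetical.
import glob
import os

from model import ExLlama, ExLlamaCache, ExLlamaConfig
from tokenizer import ExLlamaTokenizer
from generator import ExLlamaGenerator

model_directory = "/models/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ/"  # hypothetical path

config = ExLlamaConfig(os.path.join(model_directory, "config.json"))
config.model_path = glob.glob(os.path.join(model_directory, "*.safetensors"))[0]
config.max_seq_len = 8192       # extended context window, per the README
config.compress_pos_emb = 4.0   # interpolation factor: 8192 / 2048 = 4

model = ExLlama(config)                               # load the quantized weights
tokenizer = ExLlamaTokenizer(os.path.join(model_directory, "tokenizer.model"))
cache = ExLlamaCache(model)                           # KV cache sized to max_seq_len
generator = ExLlamaGenerator(model, tokenizer, cache)

print(generator.generate_simple("USER: Hello!\nASSISTANT:", max_new_tokens=128))
```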