---
license: other
inference: false
---

# gpt4-x-vicuna-13B-GGML

These files are GGML format model files for [NousResearch's gpt4-x-vicuna-13b](https://huggingface.co/NousResearch/gpt4-x-vicuna-13b).

GGML files are for CPU inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).

## Repositories available

* [4bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GPTQ).
* [4bit and 5bit GGML models for CPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML).
* [float16 HF model for unquantised and 8bit GPU inference](https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-HF).

## THE FILES IN THE MAIN BRANCH REQUIRE THE LATEST LLAMA.CPP (May 19th 2023 - commit 2d5db48)!

llama.cpp recently made another breaking change to its quantisation methods - https://github.com/ggerganov/llama.cpp/pull/1508

I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 19th or later (commit `2d5db48` or later) to use them.
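If you need to build a recent enough llama.cpp yourself, a minimal sketch (assuming a Linux/macOS system with `git` and `make` available):

```
# Fetch and build llama.cpp; commit 2d5db48 (May 19th 2023) or later is required for these files
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```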

For files compatible with the previous version of llama.cpp, please see branch `previous_llama_ggmlv2`.
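For example, to fetch only that branch (a sketch, assuming `git` and `git-lfs` are installed):

```
# Clone only the previous_llama_ggmlv2 branch, which holds the older-format files
git lfs install
git clone --single-branch --branch previous_llama_ggmlv2 https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML
```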

## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `gpt4-x-vicuna-13B.ggmlv3.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10GB | 4-bit. |
| `gpt4-x-vicuna-13B.ggmlv3.q4_1.bin` | q4_1 | 4bit | 8.95GB | 10GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
| `gpt4-x-vicuna-13B.ggmlv3.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
| `gpt4-x-vicuna-13B.ggmlv3.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12GB | 5-bit. Even higher accuracy, higher resource usage and slower inference. |
| `gpt4-x-vicuna-13B.ggmlv3.q8_0.bin` | q8_0 | 8bit | 16GB | 18GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
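To download a single file rather than cloning the whole repo, you can fetch it directly. A sketch using the q4_0 file from the table above (the `resolve/main` URL pattern is standard for Hugging Face):

```
# Download just the 4-bit q4_0 file (~8.14GB)
wget https://huggingface.co/TheBloke/gpt4-x-vicuna-13B-GGML/resolve/main/gpt4-x-vicuna-13B.ggmlv3.q4_0.bin
```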

## How to run in `llama.cpp`

I use the following command line; adjust for your tastes and needs:

```
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Write a story about llamas
### Response:"
```
Change `-t 12` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
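For example, the same command in interactive instruct mode (flags as above, with the prompt removed):

```
./main -t 12 -m gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -i -ins
```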

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
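As a rough sketch (flag names may vary between text-generation-webui versions, so check the linked docs): place the GGML file in the `models` folder and launch the server pointing at it:

```
# Put the model where text-generation-webui can find it, then start the server
cp gpt4-x-vicuna-13B.ggmlv3.q4_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model gpt4-x-vicuna-13B.ggmlv3.q4_0.bin --threads 8
```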

# Original model card

The base model used was https://huggingface.co/eachadea/vicuna-13b-1.1

Finetuned on Teknium's GPTeacher dataset, the unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.

Approximately 180k instructions, all from GPT-4, all cleaned of any OpenAI censorship ("As an AI Language Model", etc.).

The base model still has OpenAI censorship. Soon, a new version will be released with a cleaned Vicuna from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered

Trained on 8x A100-80GB GPUs for 5 epochs, following the Alpaca DeepSpeed training code.

Nous Research Instruct Dataset will be released soon.

GPTeacher, Roleplay v2 by https://huggingface.co/teknium

WizardLM by https://github.com/nlpxucan

Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin

Compute provided by our project sponsor https://redmond.ai/