---
license: other
library_name: transformers
pipeline_tag: text-generation
datasets:
- RyokoAI/ShareGPT52K
- Hello-SimpleAI/HC3
tags:
- koala
- ShareGPT
- llama
- gptq
inference: false
---
# Koala: A Dialogue Model for Academic Research
This repo contains the weights of the Koala 7B model produced at Berkeley. It is the result of combining the diffs from https://huggingface.co/young-geng/koala with the original Llama 7B model.
This version has then been quantized to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).
## My Koala repos
I have the following Koala model repositories available:
**13B models:**
* [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
* [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML)
**7B models:**
* [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
* [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)
* [GPTQ quantized 4bit 7B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g)
* [GPTQ quantized 4bit 7B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g-GGML)
## GETTING GIBBERISH OUTPUT?
Please read the sections below carefully. Gibberish output is expected if you are using the `safetensors` file without the latest GPTQ-for-LLaMa code.
Your options are either to update GPTQ-for-LLaMa under `text-generation-webui/repositories` to a more recent version, or to use the other file provided, `koala-7B-4bit-128g.no-act-order.ooba.pt`, which will work immediately.
Unfortunately, updating GPTQ-for-LLaMa is currently a bit more complicated, because the most recent code has breaking changes that are not yet supported by `text-generation-webui`.
Therefore it's currently recommended to use `koala-7B-4bit-128g.no-act-order.ooba.pt`.
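If you do want to try option one, the copy of GPTQ-for-LLaMa that the UI uses lives under `text-generation-webui/repositories`. A minimal sketch of updating it is below; note that, as mentioned above, the very newest commits may not work with `text-generation-webui`, so you may need to check out an older, compatible commit:
```
cd text-generation-webui/repositories/GPTQ-for-LLaMa
git pull                      # fetch newer GPTQ-for-LLaMa code
# if the newest commit breaks the UI, pin a known-good commit instead:
# git checkout <known-good-commit>
```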
## Provided files
Two model files are provided. You don't need both - choose the one that suits your needs best!
Details of the files provided:
* `koala-7B-4bit-128g.safetensors`
* newer `safetensors` format, with improved file security, created with the latest [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) code.
* Command to create:
* `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors koala-7B-4bit-128g.safetensors`
* `koala-7B-4bit-128g.no-act-order.ooba.pt`
* `pt` format file, created with [oobabooga's older CUDA fork of GPTQ-for-LLaMa](https://github.com/oobabooga/GPTQ-for-LLaMa).
* This file is included primarily for Windows users, as it can be used without needing to compile the latest GPTQ-for-LLaMa code.
* Hopefully it will therefore work with the one-click installers on Windows, which include the older GPTQ-for-LLaMa code.
* The older GPTQ code does not support all the latest features, so the quality may be fractionally lower.
* Command to create:
* `python3 llama.py koala-7B-HF c4 --wbits 4 --true-sequential --groupsize 128 --save koala-7B-4bit-128g.no-act-order.ooba.pt`
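If you want to reproduce either file yourself, the commands above are run from a GPTQ-for-LLaMa checkout and expect a local `koala-7B-HF` directory containing the unquantized HF model. A rough sketch of fetching it, assuming `git-lfs` is installed:
```
# fetch the unquantized HF model that the quantization commands read from
git lfs install
git clone https://huggingface.co/TheBloke/koala-7B-HF
# then run the chosen `python3 llama.py ...` command from inside GPTQ-for-LLaMa
```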
## How to run in `text-generation-webui`
File `koala-7B-4bit-128g.no-act-order.ooba.pt` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
The other file, `koala-7B-4bit-128g.safetensors`, was created with the latest GPTQ code and requires that the latest GPTQ-for-LLaMa is used inside the UI.
Here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
git clone https://github.com/oobabooga/text-generation-webui
mkdir -p text-generation-webui/repositories
ln -s "$(pwd)/GPTQ-for-LLaMa" text-generation-webui/repositories/GPTQ-for-LLaMa  # absolute target so the symlink resolves correctly
```
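Next, place this model under `text-generation-webui/models`. A minimal sketch, assuming `git-lfs` is installed (you can equally download just the single model file you need through the Hugging Face web interface):
```
cd text-generation-webui/models
git lfs install
git clone https://huggingface.co/TheBloke/koala-7B-GPTQ-4bit-128g
cd ../..
```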
Then launch the UI as follows:
```
cd text-generation-webui
python server.py --model koala-7B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
```
The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can instead use the CUDA branch:
```
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install
```
Then link that into `text-generation-webui/repositories` as described above.
Or just use `koala-7B-4bit-128g.no-act-order.ooba.pt` as mentioned above.
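If you want to sanity-check the quantized weights outside the UI, GPTQ-for-LLaMa also includes a standalone inference script. A rough sketch follows; exact flag names can differ between branches and commits, so treat it as illustrative rather than definitive:
```
cd GPTQ-for-LLaMa
python llama_inference.py /path/to/koala-7B-HF --wbits 4 --groupsize 128 \
    --load /path/to/koala-7B-4bit-128g.safetensors \
    --text "BEGINNING OF CONVERSATION: USER: What is a koala? GPT:"  # Koala expects a conversation-style prompt
```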
## How the Koala delta weights were merged
The Koala delta weights were originally merged using the following commands, producing [koala-7B-HF](https://huggingface.co/TheBloke/koala-7B-HF):
```
git clone https://github.com/young-geng/EasyLM
git clone https://huggingface.co/nyanko7/LLaMA-7B
mkdir koala_diffs && cd koala_diffs && wget https://huggingface.co/young-geng/koala/resolve/main/koala_7b_diff_v2 && cd ..
cd EasyLM
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_torch_to_easylm \
--checkpoint_dir=/content/LLaMA-7B \
--output_file=/content/llama-7B-LM \
--streaming=True
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.scripts.diff_checkpoint --recover_diff=True \
--load_base_checkpoint='params::/content/llama-7B-LM' \
--load_target_checkpoint='params::/content/koala_diffs/koala_7b_diff_v2' \
--output_file=/content/koala_7b.diff.weights \
--streaming=True
PYTHONPATH="${PWD}:$PYTHONPATH" python \
-m EasyLM.models.llama.convert_easylm_to_hf --model_size=7b \
--output_dir=/content/koala-7B-HF \
--load_checkpoint='params::/content/koala_7b.diff.weights' \
--tokenizer_path=/content/LLaMA-7B/tokenizer.model
```
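If everything worked, `/content/koala-7B-HF` should be a standard Hugging Face Llama checkpoint (config, tokenizer files and weight shards), which is what the quantization commands earlier in this README consume:
```
# quick sanity check of the converted output
ls /content/koala-7B-HF
```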
## Further info
Check out the following links to learn more about the Berkeley Koala model.
* [Blog post](https://bair.berkeley.edu/blog/2023/04/03/koala/)
* [Online demo](https://koala.lmsys.org/)
* [EasyLM: training and serving framework on GitHub](https://github.com/young-geng/EasyLM)
* [Documentation for running Koala locally](https://github.com/young-geng/EasyLM/blob/main/docs/koala.md)
## License
The model weights are intended for academic research only, subject to the
[model License of LLaMA](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md),
[Terms of Use of the data generated by OpenAI](https://openai.com/policies/terms-of-use),
and [Privacy Practices of ShareGPT](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb).
Any other usage of the model weights, including but not limited to commercial usage, is strictly prohibited.