---
license: other
inference: false
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---

# VicUnlocked-30B-LoRA GPTQ

This is a GPTQ-format quantised 4-bit model of [Neko Institute of Science's VicUnLocked 30B LoRA](https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA).

The files in this repo are the result of merging the above LoRA with the original LLaMA 30B, then quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit, 5-bit and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GGML).
* [4-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-GPTQ).
* [float16 HF format model for GPU inference and further conversions](https://huggingface.co/TheBloke/VicUnlocked-30B-LoRA-HF).

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/VicUnlocked-30B-LoRA-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `VicUnlocked-30B-LoRA-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!

## Provided files

**Compatible file - VicUnlocked-30B-LoRA-GPTQ-4bit.act-order.safetensors**

In the `main` branch - the default one - you will find `VicUnlocked-30B-LoRA-GPTQ-4bit.act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa.
It has maximum compatibility. It was created without groupsize to minimise VRAM requirements, and with the `--act-order` parameter to improve inference quality.

* `VicUnlocked-30B-LoRA-GPTQ-4bit.act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with AutoGPTQ
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = None. act-order.
  * Command used to create the GPTQ:
    ```
    llama.py /workspace/vicunlocked-30b/HF wikitext2 --wbits 4 --true-sequential --act-order --save_safetensors /workspace/vicunlocked-30b/gptq/VicUnlocked-30B-GPTQ-4bit.act-order.safetensors
    ```

# Original model card

# Convert tools
https://github.com/practicaldreamer/vicuna_to_alpaca

# Training tool
https://github.com/oobabooga/text-generation-webui

ATM I'm using the 2023.05.04v0 version of the dataset and training at full context.

# Notes:

I will only be training for 1 epoch, as full-context 30B takes so long to train. This 1 epoch will take me 8 days, but luckily this LoRA feels fully functional at epoch 1, as shown by my 13B one. I will also be uploading checkpoints almost every day. I could train another epoch if there's enough demand for it.

Update: Since I will not be training past 1 epoch, @Aeala is training for the full 3 epochs: https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA (but at half context, if you care about that). @Aeala is also just about done.

Update: Training finished at epoch 1. These 8 days sure felt long. I only have one A6000, lads - there's only so much I can do. Also RIP gozfarb, IDK what happened to him.

# How to test?
1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
2. Make a folder called VicUnLocked-30b-LoRA in the loras folder.
3. Download adapter_config.json and adapter_model.bin into VicUnLocked-30b-LoRA.
4. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora VicUnLocked-30b-LoRA```
5. Select instruct and choose the Vicuna-v1.1 template.

# Training Log

https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7
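If you are calling the model outside text-generation-webui, you need to build the Vicuna-v1.1 prompt yourself. Below is a minimal sketch of what that template produces. It assumes the standard Vicuna v1.1 format (system message, `USER:`/`ASSISTANT:` turns, `</s>` closing each completed reply); the exact system message and spacing are assumptions, so verify them against the template your frontend ships.

```python
# Sketch of the Vicuna v1.1 prompt format (assumed standard template;
# verify against the Vicuna-v1.1 template shipped with your frontend).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions."
)

def build_prompt(history):
    """history: list of (user_message, assistant_reply_or_None) turns."""
    parts = [SYSTEM]
    for user_msg, assistant_msg in history:
        parts.append(f"USER: {user_msg}")
        if assistant_msg is None:
            # Leave the final turn open for the model to complete.
            parts.append("ASSISTANT:")
        else:
            parts.append(f"ASSISTANT: {assistant_msg}</s>")
    return " ".join(parts)

prompt = build_prompt([("What is a LoRA?", None)])
print(prompt)
```

The generated text that follows the trailing `ASSISTANT:` is the model's reply; stop generation at `</s>` (or `USER:`) to end the turn.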