Information

Alpaca 30B quantized to 4-bit, working with the GPTQ loaders used in Oobabooga's Text Generation Webui and KoboldAI.

Quantized using --true-sequential and --act-order optimizations.
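For reference, below is a minimal sketch of an equivalent quantization setup using the AutoGPTQ library. The original files were produced with GPTQ-for-LLaMa scripts, so the library choice, paths, and calibration text here are illustrative assumptions rather than the exact command used for this release.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "alpaca-lora-30b-merged"  # hypothetical path to the merged model

quantize_config = BaseQuantizeConfig(
    bits=4,                # 4-bit weights
    group_size=-1,         # no --groupsize 128 (see the Benchmarks note below)
    desc_act=True,         # --act-order: quantize columns in activation order
    true_sequential=True,  # --true-sequential: quantize layers one at a time
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# Calibration samples (one shown; real runs typically use ~128 C4 excerpts)
examples = [tokenizer(
    "Below is an instruction that describes a task.", return_tensors="pt"
)]
model.quantize(examples)
model.save_quantized("alpaca-30b-4bit", use_safetensors=True)
```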

This was made using Chansung's 30B Alpaca LoRA: https://huggingface.co/chansung/alpaca-lora-30b
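The merge itself follows the usual PEFT pattern of loading the LoRA adapter onto the base model and folding the deltas into the weights. A hedged sketch is below; the base checkpoint name is an assumption, not necessarily the one used for this release.

```python
import torch
from transformers import LlamaForCausalLM
from peft import PeftModel

# Assumed 30B LLaMA base weights; fp16 loading needs roughly 65 GB of memory
base = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-30b", torch_dtype=torch.float16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "chansung/alpaca-lora-30b")
merged = merged.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("alpaca-lora-30b-merged")
```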

Update 04.06.2023

This is a more recent merge of Chansung's Alpaca LoRA, which was updated using the cleaned Alpaca dataset as of 04/06/2023, with refined training parameters.

Training Parameters

Benchmarks

Wikitext2: 4.608365058898926
Ptb-New: 8.69663143157959
C4-New: 6.624773979187012

Note: This version does not use --groupsize 128, so its perplexity scores are slightly higher. However, it allows fitting the whole model at full context using only 24GB of VRAM.
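For completeness, here is a loading sketch illustrating single-GPU inference with these files; it uses the AutoGPTQ API, and the local path and prompt format are assumptions.

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "alpaca-30b-4bit"  # hypothetical local path to this repo's files
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir, device="cuda:0", use_safetensors=True
)

prompt = "### Instruction:\nExplain GPTQ quantization briefly.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```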