---
license: apache-2.0
tags:
- 2bit
- llama
- XVERSE
---
You can run this model on a GPU with 4 GB of memory. It is quantized with QuIP#, a weights-only quantization method that achieves near-fp16 performance using only 2 bits per weight.
URL: https://github.com/Cornell-RelaxML/quip-sharp/tree/release20231203
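
Below is a minimal usage sketch, assuming the checkpoint can be loaded through the Hugging Face `transformers` interface with `trust_remote_code=True` (the custom QuIP# layers need extra code). The `model_id` is a hypothetical placeholder; the authoritative loading path is the quip-sharp code linked above.

```python
# Minimal sketch: loading and running a 2-bit QuIP#-quantized checkpoint.
# Assumes the repo ships custom loading code reachable via trust_remote_code;
# otherwise use the scripts from the quip-sharp repository linked above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-quip-2bit-checkpoint"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # activations in fp16; weights stay 2-bit
    device_map="auto",          # lets the model fit on a small (~4 GB) GPU
    trust_remote_code=True,     # QuIP# layers require custom model code
)

prompt = "Hello, my name is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```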