---
license: apache-2.0
tags:
  - 2bit
  - llama
  - XVERSE
---

You can run this model on a GPU with about 4 GB of memory. It is quantized with the QuIP# method, a weights-only quantization method that achieves near-fp16 performance using only 2 bits per weight.

QuIP# repository: https://github.com/Cornell-RelaxML/quip-sharp/tree/release20231203
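
Below is a minimal usage sketch. It assumes the quip-sharp repository linked above has been cloned and its dependencies installed, and that its model-loading helper (`model_from_hf_path` here) is importable from that codebase; the helper name, its signature, and the Hub id `Minami-su/XVERSE-7B_2bit` are assumptions, so check the linked repository for the exact loading API.

```python
# Minimal sketch, not the official loading code.
# Assumes the quip-sharp repo is on PYTHONPATH; the import path, helper name,
# and return values below are assumptions based on that repo's layout.
import torch
from transformers import AutoTokenizer
from lib.utils.unsafe_import import model_from_hf_path  # assumed quip-sharp helper

model_id = "Minami-su/XVERSE-7B_2bit"  # illustrative Hub id for this card

# Load the 2-bit quantized model; the helper is assumed to also return the
# base model name used to fetch the matching tokenizer.
model, base_model_str = model_from_hf_path(model_id)
tokenizer = AutoTokenizer.from_pretrained(base_model_str)

prompt = "Explain 2-bit weight quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```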