Create README.md

---
license: apache-2.0
tags:
- text-generation-inference
---

This is an upscaled fp16 variant of the original Mistral-7B base model by Mistral AI, created after loading it with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the Linear4bit layers back to fp16 so that the quantization/dequantization cost does not have to be paid on every forward pass at inference time.

_Note: Quantization to nf4 is not lossless, so the recovered fp16 weights for the linear layers are a lossy approximation of the originals, which means that this model will not perform as well as the official base model._

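For reference, here is a minimal sketch of how such an upscaled checkpoint could be produced. This is illustrative only and not the exact export script; it assumes `mistralai/Mistral-7B-v0.1` as the source checkpoint, standard `bitsandbytes`/`transformers` APIs, and an example output directory name:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb
from bitsandbytes.functional import dequantize_4bit
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the original base model with nf4 4-bit quantization via bitsandbytes.
# (Source checkpoint name is an assumption for illustration.)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    quantization_config=bnb_config,
    device_map="auto",
)

# Swap every Linear4bit module for a plain fp16 nn.Linear holding the
# dequantized weights, so inference no longer pays the dequantization cost.
for name, module in list(model.named_modules()):
    if isinstance(module, bnb.nn.Linear4bit):
        fp16_weight = dequantize_4bit(
            module.weight.data, module.weight.quant_state
        ).to(torch.float16)
        new_linear = nn.Linear(
            module.in_features, module.out_features, bias=module.bias is not None
        )
        new_linear.weight = nn.Parameter(fp16_weight, requires_grad=False)
        if module.bias is not None:
            new_linear.bias = nn.Parameter(
                module.bias.data.to(torch.float16), requires_grad=False
            )
        # Replace the quantized submodule on its parent module.
        parent_name, _, child_name = name.rpartition(".")
        parent = model.get_submodule(parent_name) if parent_name else model
        setattr(parent, child_name, new_linear)

# Save the fully fp16 model (output path is just an example).
model.save_pretrained("mistral-7b-nf4-fp16-upscaled", safe_serialization=True)
```
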
To use this model, you can just load it via `transformers` in fp16:

```python
import torch
from transformers import AutoModelForCausalLM

# The checkpoint already stores fp16 weights, so no quantization config is needed.
model = AutoModelForCausalLM.from_pretrained(
    "arnavgrg/mistral-7b-nf4-fp16-upscaled",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
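
Once loaded, generation works the same way as with any other fp16 `transformers` checkpoint. A short sketch, assuming the tokenizer files are available in this repository (otherwise the original Mistral tokenizer can be used) and using a placeholder prompt:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("arnavgrg/mistral-7b-nf4-fp16-upscaled")

# Tokenize an example prompt, run greedy generation, and decode the result.
inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```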