Update README.md
README.md

[EXL2](https://github.com/turboderp/exllamav2/tree/master#exllamav2) Quantization of [Putri's Megamix-A1](https://huggingface.co/gradientputri/Megamix-A1-13B).

GGUF quants from [Sao10K](https://huggingface.co/Sao10K) here: [MegaMix-L2-13B-GGUF](https://huggingface.co/Sao10K/MegaMix-L2-13B-GGUF)

## Model details

Quantized at 5.33bpw
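
For reference, here is a minimal sketch of loading these EXL2 weights with the [exllamav2](https://github.com/turboderp/exllamav2) Python library and running one Alpaca-style completion. The local path, sampling values, and the exact prompt template are illustrative assumptions (see the Prompt Format section below and the exllamav2 examples for the current API), not something this card prescribes:

```python
# Minimal sketch, not an official example: load the EXL2 quant and generate once.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Megamix-A1-13B-exl2"  # hypothetical local path to the downloaded weights
config.prepare()

model = ExLlamaV2(config)
model.load()  # loads onto the available GPU(s)

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache(model)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

# Illustrative sampling settings; tune to taste.
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Alpaca-style prompt; the template below is the common layout and is assumed here,
# the card's Prompt Format section is the authority.
prompt = (
    "### Instruction:\n"
    "Write a short greeting.\n\n"
    "### Response:\n"
)

print(generator.generate_simple(prompt, settings, 200))
```

The GGUF build linked above targets llama.cpp-based runtimes instead, so it would not be loaded through exllamav2.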

## Prompt Format

I'm using Alpaca format:

```
### Instruction: