Update README.md

README.md CHANGED

@@ -19,7 +19,7 @@ tags:
 
 Experimental RP-oriented MoE; the goal was a model equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.
 
-[GGUF](https://huggingface.co/xxx777xxxASD/
+[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)
 
 ### ChaoticSoliloquy-4x8B
 ```
@@ -41,4 +41,10 @@ experts:
 - [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
 - [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
 
+## Vision
+
+[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)
+
+
+
 ## Prompt format: Llama 3
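
For quick reference, the standard Llama 3 Instruct template that the heading refers to looks like this (the system and user strings are placeholders):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```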
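And a minimal sketch of running one of the linked GGUF quants together with the Vision mmproj via llama-cpp-python. The quant filename, mmproj filename, image URL, and context size are assumptions, not part of the model card; `Llava15ChatHandler` is llama-cpp-python's LLaVA-1.5-style handler, which matches the linked Llava_1.5_Llama3_mmproj projector but may still need its chat template adjusted for Llama 3:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# mmproj path is an assumption; download the projector from the linked
# ChaoticNeutrals/Llava_1.5_Llama3_mmproj repo first.
chat_handler = Llava15ChatHandler(clip_model_path="./llama3_mmproj.gguf")

llm = Llama(
    model_path="./ChaoticSoliloquy-4x8B.Q4_K_M.gguf",  # hypothetical quant filename
    chat_handler=chat_handler,
    n_ctx=8192,        # assumed context size
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {
            "role": "user",
            "content": [
                # placeholder image URL
                {"type": "image_url", "image_url": {"url": "https://example.com/scene.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        },
    ],
)
print(out["choices"][0]["message"]["content"])
```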