---
license: llama3
language:
- en
tags:
- moe
---
6bpw/h6 exl2 quantization of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B) using the default exllamav2 calibration dataset.
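The quant can be loaded with any exl2-capable frontend (e.g. text-generation-webui or TabbyAPI) or directly through the ExLlamaV2 Python API. A minimal loading sketch follows; the model path and sampling values are placeholders, not part of this card:

```
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./ChaoticSoliloquy-4x8B-exl2"  # placeholder: local download path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # cache is allocated as layers load
model.load_autosplit(cache)               # auto-split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # placeholder sampling values
settings.top_p = 0.9

# third positional argument is the number of new tokens to generate
print(generator.generate_simple("Hello there!", settings, 64))
```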
---

**ORIGINAL CARD:**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png)
(Maybe I'll change the waifu picture later)

An experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

[GGUF](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B-GGUF)
### ChaoticSoliloquy-4x8B
```
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
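For reference, in mergekit's MoE config format `gate_mode: random` initializes the router weights randomly rather than deriving them from prompt hidden states, and `experts_per_token: 2` means two of the four experts are active for each token. A config in this format is consumed by mergekit's `mergekit-moe` entry point, roughly `mergekit-moe config.yaml <output-dir>` (output path is a placeholder).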
## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)
## Prompt format: Llama 3
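The original card names the format but does not reproduce it; for reference, the stock Llama 3 Instruct template looks like this:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```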