---
license: llama3
language:
- en
tags:
- moe
---

<br/><br/>
6bpw/h6 exl2 quantization of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B), using the default exllamav2 calibration dataset.
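
For anyone reproducing a similar quant, a command along these lines should work with exllamav2's `convert.py` (paths below are placeholders; `-b 6.0 -hb 6` corresponds to 6bpw/h6, and leaving out `-c` falls back to the built-in calibration dataset):

```
# Convert the FP16 source model to a 6bpw / 6-bit-head exl2 quant.
# All paths are placeholders; adjust to your local directories.
python convert.py -i /path/to/ChaoticSoliloquy-4x8B \
                  -o /path/to/workdir \
                  -cf /path/to/ChaoticSoliloquy-4x8B-6bpw-exl2 \
                  -b 6.0 -hb 6
```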

---

**ORIGINAL CARD:**


![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png)
(Maybe I'll change the waifu picture later)

An experimental RP-oriented MoE; the idea was to get a model that is equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

[GGUF](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B-GGUF)

### ChaoticSoliloquy-4x8B
```
base_model: jeiku_Chaos_RP_l3_8B    # supplies the shared (non-expert) weights
gate_mode: random                   # router gates are initialized randomly, no calibration prompts
dtype: bfloat16
experts_per_token: 2                # top-2 routing: two experts active per token
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
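
This is a mergekit MoE recipe; saved as e.g. `config.yml`, it could be run with mergekit's MoE script roughly like this (the output directory name is a placeholder):

```
# Assemble the 4x8B MoE from the recipe above using mergekit-moe.
mergekit-moe config.yml ./ChaoticSoliloquy-4x8B
```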

## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)

## Prompt format: Llama 3
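
For reference, the standard Llama 3 Instruct template (system turn optional):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```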