---
base_model: Alsebay/NaruMOE-3x7B-v2
inference: false
library_name: transformers
license: cc-by-nc-4.0
merged_models:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- moe
- merge
- roleplay
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---
# Alsebay/NaruMOE-3x7B-v2 AWQ

- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NaruMOE-3x7B-v2](https://huggingface.co/Alsebay/NaruMOE-3x7B-v2)

## Model Summary

A MoE model for roleplay. Since 7B models are small, several can be combined into a larger model, which can be smarter.

It handles some limited TSF (transsexual fiction) content, because my pre-trained model is included in the merge.

It is worse than V1 at logic, but better at expression.
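
## How to use

Below is a minimal sketch of running this AWQ checkpoint with `transformers`, which can load AWQ weights when the `autoawq` package is installed. The repository id is an assumption for illustration; substitute the actual name of this quantized repo.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this AWQ quant; replace with the actual repository name.
model_id = "solidrust/NaruMOE-3x7B-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the quantized layers on the available GPU(s)
)

prompt = "You are a roleplay assistant.\nUser: Introduce yourself.\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same checkpoint should also work with other AWQ-aware runtimes such as vLLM; sampling settings above are illustrative defaults, not tuned recommendations.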