---
base_model: Alsebay/NaruMOE-3x7B-v2
inference: false
library_name: transformers
license: cc-by-nc-4.0
merged_models:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- moe
- merge
- roleplay
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
---
# Alsebay/NaruMOE-3x7B-v2 AWQ

- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NaruMOE-3x7B-v2](https://huggingface.co/Alsebay/NaruMOE-3x7B-v2)

## Model Summary

A MoE model for roleplaying. Since 7B models are small, several of them can be combined into a bigger model, which can be smarter than its parts.

It handles some (limited) TSF (transsexual fiction) content, because my pre-trained model is included in the merge.

It is worse than V1 in logic, but better in expression.
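
## Usage

A minimal sketch of loading this AWQ quant with `transformers`. The repo id below is an assumption (the actual AWQ repository id may differ), and it assumes the `autoawq` package is installed so `transformers` can dispatch the 4-bit weights.

```python
# Assumes: pip install transformers autoawq
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this AWQ quant; adjust to the actual repository.
model_id = "solidrust/NaruMOE-3x7B-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit weights on available GPU(s)
)

prompt = (
    "You are a creative roleplay assistant.\n"
    "User: Introduce yourself in character.\n"
    "Assistant:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```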