---
license: apache-2.0
tags:
- safetensors
- mixtral
- not-for-all-audiences
- nsfw
language:
- en
---
## InfinityKumon-2x7B
![InfinityKumon-2x7B](https://cdn.discordapp.com/attachments/843160171676565508/1222560876578476103/00000-3033963009.png?ex=6616a98b&is=6604348b&hm=6434f8a16f22a3515728ab38bf7230a01448b00e6136729d42d75ae0374e5802&)
Another MoE merge, this time from [Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B).
The reason? Because I like InfinityRP-v1-7B so much and wondered if I could improve it even more by merging two great models into a MoE.
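A 2x7B merge like this is typically built with mergekit's MoE mode. The exact config used here isn't published, so the following is only a hypothetical sketch of what such a merge could look like (the `positive_prompts` values are placeholders, not the real ones):

```yaml
# Hypothetical mergekit-moe config — illustrative only, not the actual recipe.
base_model: Endevor/InfinityRP-v1-7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: Endevor/InfinityRP-v1-7B
    positive_prompts:
      - "roleplay"
  - source_model: grimjim/kukulemon-7B
    positive_prompts:
      - "chat"
```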
### Prompt format:
Alpaca or ChatML
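For reference, the two templates can be built like this — a minimal sketch using the standard Alpaca and ChatML layouts (exact whitespace may differ between frontends):

```python
# Standard Alpaca and ChatML prompt templates; spacing follows the
# common conventions, so adjust if your frontend expects variations.

def alpaca_prompt(instruction: str) -> str:
    """Wrap an instruction in the classic Alpaca template."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    """Wrap a system/user pair in ChatML special tokens."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(alpaca_prompt("Say hello."))
print(chatml_prompt("You are a helpful assistant.", "Say hello."))
```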
Switch: [FP16](https://huggingface.co/R136a1/InfinityKumon-2x7B) - [GGUF](https://huggingface.co/R136a1/InfinityKumon-2x7B-GGUF)