---
license: llama3
library_name: transformers
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
---
# Llama-Salad-4x8B-V2
Changes in V2:
- Swapped Tess-2.0-Llama-3-8B for Llama-3-8B-Synthia-v3.5
- Swapped L3-8B-Stheno-v3.1 for Llama-3-Soliloquy-8B-v2
- Removed Llama3-OpenBioLLM-8B and added opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
V2 improves on V1 in every area; it's not a massive leap, but I can confidently say it's a direct upgrade. Llama-3-8B-Synthia-v3.5 is better than Tess-2.0-Llama-3-8B in every way, Llama-3-Soliloquy-8B-v2 is more intelligent than L3-8B-Stheno-v3.1 and has less bias towards NSFW content, and the inclusion of opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5 has greatly improved its storytelling and narration abilities.
I really like the model selection in this one, so I'm not sure how much more I can improve with another 4x8B merge; if I were to make a V3, swapping out Meta-Llama-3-8B-Instruct would likely be the only change. I will try my hand at an 8x8B merge in the future, but I still need to find some models to fill the gaps; making sure there are no routing conflicts between 8 different models at once will be the biggest challenge.
# Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)
- **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/)
- **Context Size**: 8K
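
The llama-3 instruct format wraps each turn in special header tokens, as documented in the Meta prompt-format guide linked above. A minimal sketch of building a single-turn prompt by hand (in practice, `tokenizer.apply_chat_template()` from `transformers` produces this for you):

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn llama-3 instruct prompt from raw strings.

    The special tokens follow Meta's documented template; the trailing
    assistant header leaves the model positioned to generate its reply.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful assistant.", "Summarize this text.")
print(prompt)
```

Generation should stop on `<|eot_id|>` (and `<|end_of_text|>`); frontends with a llama-3 preset handle this automatically.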
## Models Used
- [Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
- [Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
- [Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5](https://huggingface.co/dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5)
## Merge Config
```yaml
base_model: NousResearch/Meta-Llama-3-8B-Instruct
gate_mode: hidden
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: NousResearch/Meta-Llama-3-8B-Instruct
    positive_prompts:
      - "summarize"
      - "paraphrase"
      - "explain"
      - "define"
      - "translate"
      - "multilingual"
      - "chat"
      - "conversation"
  - source_model: migtissera/Llama-3-8B-Synthia-v3.5
    positive_prompts:
      - "programming language"
      - "JavaScript"
      - "Python programming language"
      - "Rust programming language"
      - "CSS markup styling language"
      - "math"
      - "code"
      - "step-by-step"
      - "logical reasoning"
  - source_model: openlynn/Llama-3-Soliloquy-8B-v2
    positive_prompts:
      - "roleplay"
      - "erotic roleplay"
      - "characters"
      - "scene"
      - "opinion"
  - source_model: dreamgen-preview/opus-v1.2-llama-3-8b-instruct-run3.5-epoch2.5
    positive_prompts:
      - "creative writing"
      - "storytelling"
      - "narration"
      - "narrative setting"
      - "narrative plot"
      - "narrative exposition"
      - "narrative theme"
      - "narrative climax"
```
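
In this config, `gate_mode: hidden` initializes each expert's router weights from hidden-state representations of its positive prompts, and `experts_per_token: 2` means two of the four experts are active for each token. As a rough illustration (not mergekit's actual implementation) of what top-2 routing does at inference time:

```python
import math

def top2_route(gate_logits):
    """Pick the two highest-scoring experts for a token and renormalize
    their softmax weights, mirroring experts_per_token: 2."""
    # Numerically stable softmax over all expert logits.
    m = max(gate_logits)
    exps = [math.exp(x - m) for x in gate_logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the top-2 experts and renormalize their weights to sum to 1;
    # the token's output is the weighted sum of just these two experts.
    top2 = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:2]
    denom = sum(probs[i] for i in top2)
    return [(i, probs[i] / denom) for i in top2]

# Four experts, as in this merge; a hypothetical token's gate logits.
print(top2_route([2.0, 0.5, 1.5, -1.0]))
```

Only the two selected experts' MLPs run for that token, which is why a 4x8B merge is cheaper to infer than its parameter count suggests.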