---
base_model:
- SicariusSicariiStuff/Dusk_Rainbow
- ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
- Sao10K/L3-8B-Stheno-v3.2
- Nitral-AI/Hathor_Sofit-L3-8B-v1
- TheDrummer/Llama-3SOME-8B-v2
- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
license: llama3
library_name: transformers
tags:
- nsfw
- not-for-all-audiences
- llama-3
- text-generation-inference
- mergekit
- merge
---
I made a really stupid mistake and uploaded two models instead of one. I uploaded the files for both and was going to decide which one to release today, but I got up at 4-5am, immediately got on my PC, and just set both to public after writing more of the model card. Hopefully no one downloaded the extra one, but if you did, I'm sorry for the inconvenience.
# Llama-3-8B-Stroganoff-4.0
Since V3, I've tested a lot of old models, looked at some new ones, and used every merge method available in mergekit. This one came from experiments I was doing on model order, which is why all the models use the same parameters, but it was good enough that I decided to upload it. If you've been doing merges yourself, most or all of the following information will be redundant, but some of it was not at all apparent to me, so I hope it helps others looking for more information.
Ties is not better than Task-Arithmetic, and Task-Arithmetic is not better than Ties; each has advantages that make it the better choice in different situations. Ties aims to reduce model interference by keeping weights that agree with each other and zeroing out the rest. If you use Ties with a bunch of models that do different things, some aspects of a model may get erased if they don't have a strong enough presence. The order of the models does not matter with a Ties merge because all of the merging happens in one step, and changing the model order will produce identical hashes, assuming you're not using Dare or Della, which add randomness to the merge.
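To make the Ties steps concrete, here's a toy, pure-Python sketch of the trim / elect-sign / merge procedure on flat lists of weights. The `ties_merge` helper and the exact cutoff logic are my own simplifications for illustration, not mergekit's actual implementation, which operates on tensors:

```python
def ties_merge(base, finetuned, density=0.5):
    """Toy sketch of a Ties merge on flat lists of floats.
    (Hypothetical helper for illustration; not mergekit's real API.)"""
    n = len(base)
    # 1. Task vectors: subtract the base model from each fine-tune.
    deltas = [[ft[i] - base[i] for i in range(n)] for ft in finetuned]
    # 2. Trim: keep only the top-`density` fraction of each delta by magnitude.
    keep = max(1, int(density * n))
    trimmed = []
    for d in deltas:
        cutoff = sorted((abs(x) for x in d), reverse=True)[keep - 1]
        trimmed.append([x if abs(x) >= cutoff else 0.0 for x in d])
    merged = []
    for i in range(n):
        # 3. Elect a sign per weight; deltas that disagree get zeroed out.
        sign = 1.0 if sum(t[i] for t in trimmed) >= 0 else -1.0
        agree = [t[i] for t in trimmed if t[i] * sign > 0]
        # 4. Average the agreeing deltas and add the result back onto the base.
        merged.append(base[i] + (sum(agree) / len(agree) if agree else 0.0))
    return merged
```

Because every step reduces symmetrically over all the models at once, shuffling `finetuned` gives the same output, which is why model order doesn't matter for a plain Ties merge.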
Task-Arithmetic is a linear merge that first subtracts the base model from the fine-tuned models and then merges them in pairs starting at the top of the list before finally merging the result back on top of the base model. The order of the models does matter with a Task-Arithmetic merge, and changing the model order will produce different hashes. A Task-Arithmetic merge keeps more of the individuality of the component models, with the last to be merged having the strongest effect on the resulting model. Task-Arithmetic can be unpredictable at times, as changing the order of the models can produce significantly different results, but it can be effective at combining the strengths of different models once you find the right order.
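The core idea can be sketched in a few lines. Note that this simplified version collapses the pairwise merging into a single weighted sum of task vectors, so it doesn't reproduce the order sensitivity described above (`task_arithmetic` is a hypothetical name, not mergekit's API):

```python
def task_arithmetic(base, finetuned, weights):
    """Minimal sketch of task arithmetic on flat lists of floats.
    (Illustrative only; the real implementation merges sequentially,
    which is where model order comes into play.)"""
    merged = list(base)
    for ft, w in zip(finetuned, weights):
        for i in range(len(base)):
            # Add each model's weighted task vector (its delta from base).
            merged[i] += w * (ft[i] - base[i])
    return merged
```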
Dare, Della, and Breadcrumbs are all enhancements to Ties and Task-Arithmetic that aim to improve the resulting merge by zeroing out certain weights. While they all remove weights before merging takes place, they each do it a bit differently. Dare assigns a flat dropout rate, meaning all weights have an equal chance of being dropped; Della scales the dropout rate based on the magnitude of change from the base model, with the largest changes having the smallest dropout rate; and Breadcrumbs first removes any outliers and then begins zeroing out weights until it reaches the target density, starting with the smallest changes. I've done direct comparisons between Dare and Della with all the same parameters, and Della has consistently outperformed Dare. I haven't tested Breadcrumbs much, but the idea behind it seems solid.
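As a rough illustration of how the three pruning strategies differ, here is a pure-Python sketch operating on a flat list of deltas. The function names and the linear Della rescaling are my own simplifications, not mergekit's exact formulas:

```python
import random

def dare_mask(delta, density):
    # Dare: a flat dropout rate; every delta has the same chance of surviving.
    return [x if random.random() < density else 0.0 for x in delta]

def della_keep_probs(delta, density, eps=0.1):
    # Della: keep-probability scales with magnitude, so the largest changes
    # get the smallest dropout rate. (Toy linear rescaling around `density`.)
    mags = [abs(x) for x in delta]
    lo, hi = min(mags), max(mags)
    span = (hi - lo) or 1.0
    return [density - eps + 2 * eps * (m - lo) / span for m in mags]

def breadcrumbs_mask(delta, density, outlier=0.01):
    # Breadcrumbs: drop the top `outlier` fraction first, then zero out the
    # smallest deltas until only `density` of the total remain.
    order = sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)
    n_out = int(outlier * len(delta))
    kept = set(order[n_out:n_out + int(density * len(delta))])
    return [x if i in kept else 0.0 for i, x in enumerate(delta)]
```

Dare's mask is random per weight, while Breadcrumbs is fully deterministic; Della sits in between, with randomness whose rate depends on each delta's magnitude.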
# Quantization Formats
**GGUF**
- Static:
- https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-4.0-GGUF
- Imatrix:
- https://huggingface.co/mradermacher/Llama-3-8B-Stroganoff-4.0-i1-GGUF
# Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)
- **Instruct Format**: [llama-3](https://llama.meta.com/docs/model-cards-and-prompt-formats/meta-llama-3/) or ChatML
- **Context Size**: 8K
## Models Used
- [Dusk_Rainbow](https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow)
- [ArliAI-Llama-3-8B-Formax-v1.0](https://huggingface.co/ArliAI/ArliAI-Llama-3-8B-Formax-v1.0)
- [L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [Hathor_Sofit-L3-8B-v1](https://huggingface.co/Nitral-AI/Hathor_Sofit-L3-8B-v1)
- [Llama-3SOME-8B-v2](https://huggingface.co/TheDrummer/Llama-3SOME-8B-v2)
- [Llama-3-Spellbound-Instruct-8B-0.3](https://huggingface.co/hf-100/Llama-3-Spellbound-Instruct-8B-0.3)
## Merge Config
```yaml
merge_method: della_linear
dtype: bfloat16
parameters:
  normalize: true
  int8_mask: true
tokenizer_source: union
base_model: SicariusSicariiStuff/Dusk_Rainbow
models:
  - model: ArliAI/ArliAI-Llama-3-8B-Formax-v1.0
    parameters:
      density: 0.55
      weight: 1
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      density: 0.55
      weight: 1
  - model: Nitral-AI/Hathor_Sofit-L3-8B-v1
    parameters:
      density: 0.55
      weight: 1
  - model: TheDrummer/Llama-3SOME-8B-v2
    parameters:
      density: 0.55
      weight: 1
  - model: hf-100/Llama-3-Spellbound-Instruct-8B-0.3
    parameters:
      density: 0.55
      weight: 1
```