---
license: other
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
---
# Obsolete, see: https://huggingface.co/brucethemoose/CaPlatTessDolXaBoros-Yi-34B-200K-DARE-Ties-HighDensity
***
**NousResearch/Nous-Capybara-34B**, **migtissera/Tess-M-v1.3** and **bhenrym14/airoboros-3_1-yi-34b-200k** merged with a new, experimental implementation of "dare ties" via mergekit. See:
> Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch
https://github.com/yule-BUAA/MergeLM
https://github.com/cg123/mergekit/tree/dare
Merged with the following config, and the tokenizer from chargoddard's Yi-Llama:
```
models:
- model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
# no parameters necessary for base model
- model: /home/alpha/Storage/Models/Raw/migtissera_Tess-M-v1.3
parameters:
weight: 0.41
density: 0.50
- model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
parameters:
weight: 0.18
density: 0.46
- model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
parameters:
weight: 0.41
density: 0.50
merge_method: dare_ties
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
int8_mask: true
dtype: bfloat16
```
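For reference, the merge can be reproduced through mergekit's Python API roughly as follows. This is a hedged sketch against a recent mergekit release; the `dare` branch linked above predates this interface, and the config filename and output path below are placeholders.
```
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the DARE-TIES config shown above, saved locally as dare_ties.yml (placeholder name).
with open("dare_ties.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./merged-yi-34b-200k",              # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copies the base model's tokenizer (Yi-34B-200K-Llama here)
        lazy_unpickle=True,              # lower peak RAM while reading shards
    ),
)
```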
In testing, dare_ties shows better perplexity than a regular ties merge with the same configuration. Merge weights that sum to one also seem optimal, and long-context results seem better than the previous DARE merge with Tess 1.2.
I chose not to include other finetunes, such as Dolphin, because they aren't trained on the 200K base. If any other 200K finetunes pop up, let me know.
***
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
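A minimal helper for assembling a single-turn prompt in this format (the exact newline handling is an assumption; match whatever your frontend produces):
```
def orca_vicuna_prompt(system_message: str, prompt: str) -> str:
    # Single-turn Orca-Vicuna prompt, following the template above.
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

text = orca_vicuna_prompt(
    "You are a helpful assistant.",
    "Summarize the DARE-TIES merge method in two sentences.",
)
```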
As this is a Yi model, try disabling the BOS token and/or running a lower temperature with MinP (and no other samplers) if the output doesn't seem right. Yi tends to run "hot" by default.
Sometimes the model "spells out" the stop token as `</s>` like Capybara, so you may need to add `</s>` as an additional stopping condition. It may also respond to the Llama-2 chat format.
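A hedged transformers sketch of those settings, assuming a recent transformers release that supports `min_p` and `stop_strings` in `generate` (older versions don't, and other backends name these samplers differently); the repo id is a placeholder:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "brucethemoose/CapyTessBorosYi-34B-200K-DARE-Ties"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"

# Skip the BOS token and sample with a lower temperature plus MinP only.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,        # lower than default; Yi runs "hot"
    min_p=0.1,              # MinP as the only active sampler
    top_p=1.0,
    top_k=0,
    max_new_tokens=512,
    tokenizer=tokenizer,
    stop_strings=["</s>"],  # catch the spelled-out stop token
)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```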
***
24GB GPUs can run Yi-34B-200K models at **45K-75K context** with exllamav2. I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/), and recommend exl2 quantizations calibrated on data similar to the desired task, such as these targeted at story writing: [4.0bpw](https://huggingface.co/brucethemoose/CapyTessBorosYi-34B-200K-DARE-Ties-exl2-4bpw-fiction) / [3.1bpw](https://huggingface.co/brucethemoose/CapyTessBorosYi-34B-200K-DARE-Ties-exl2-3.1bpw-fiction)
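For the long-context route, a rough exllamav2 loading sketch; class and attribute names follow recent exllamav2 releases and may shift between versions, and the model path and 65K context value are placeholders within the 45K-75K range mentioned above:
```
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/exl2-quant"  # placeholder path to a 3.1-4.0bpw exl2 quant
config.prepare()
config.max_seq_len = 65536                # pick a value in the 45K-75K range for 24GB

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_8bit(model, lazy=True)  # 8-bit KV cache roughly halves cache memory
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7
settings.min_p = 0.1
settings.top_k = 0     # assumption: 0 / 1.0 disable these samplers in this backend
settings.top_p = 1.0

prompt = "SYSTEM: You are a helpful assistant.\nUSER: Hello!\nASSISTANT:"
print(generator.generate_simple(prompt, settings, 200))
```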
***
Credits:
https://github.com/cg123/mergekit/tree/dare
https://huggingface.co/NousResearch/Nous-Capybara-34B/
https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
https://huggingface.co/migtissera/Tess-M-v1.3
https://huggingface.co/chargoddard/Yi-34B-200K-Llama
https://huggingface.co/01-ai/Yi-34B-200K