---
language:
- en
license: other
library_name: transformers
tags:
- mergekit
- merge
- Yi
license_name: yi-license
license_link: https://huggingface.co/01-ai/Yi-34B/blob/main/LICENSE
base_model: []
model-index:
- name: Yi-34B-200K-DARE-merge-v7
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 68.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.99
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 77.3
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 58.9
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.35
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=brucethemoose/Yi-34B-200K-DARE-merge-v7
      name: Open LLM Leaderboard
---
# Possibly made obsolete by: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-megamerge-v8
# Yi 34B 200K DARE Merge v7
A merge of several Yi 34B 200K models using the new DARE TIES method via mergekit. The goal is to create a merged model that excels at 32K+ context.
## Prompt template: Orca-Vicuna
```
SYSTEM: {system_message}
USER: {prompt}
ASSISTANT:
```
It may also recognize ChatML and possibly Alpaca-like formats. Raw prompting, as described here, is also effective: https://old.reddit.com/r/LocalLLaMA/comments/18zqy4s/the_secret_to_writing_quality_stories_with_llms/
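For reference, here is a minimal Python sketch of assembling a single-turn prompt in this template; the helper name and example strings are hypothetical, not part of the model's tooling:
```python
def format_orca_vicuna(system_message: str, prompt: str) -> str:
    """Assemble a single-turn Orca-Vicuna prompt (hypothetical helper)."""
    return f"SYSTEM: {system_message}\nUSER: {prompt}\nASSISTANT:"

# Illustrative usage:
text = format_orca_vicuna(
    "You are a helpful writing assistant.",
    "Continue the story from where it left off.",
)
print(text)
```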
## Running
As this is a Yi model, try a lower temperature with 0.02-0.06 MinP, a little repetition penalty, perhaps mirostat with a low tau, and no other samplers. Yi tends to run "hot" by default, and it really needs a low temperature + MinP to cull its huge vocabulary.
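As a rough starting point, the dictionary below expresses that advice as illustrative values; the key names and numbers are assumptions, so map them onto whatever sampler options your backend exposes:
```python
# Illustrative sampler values in the ranges suggested above; not an official preset.
sampler_settings = {
    "temperature": 0.7,          # run cooler than the usual 1.0 default
    "min_p": 0.05,               # within the suggested 0.02-0.06 MinP range
    "repetition_penalty": 1.05,  # "a little" repetition penalty
    "mirostat_mode": 2,          # optional: mirostat...
    "mirostat_tau": 3.0,         # ...with a low tau
    "top_p": 1.0,                # leave other samplers disabled
    "top_k": 0,
}
```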
24GB GPUs can efficiently run Yi-34B-200K models at **45K-90K context** with exllamav2 and performant UIs like [exui](https://github.com/turboderp/exui). I go into more detail in this [post](https://old.reddit.com/r/LocalLLaMA/comments/1896igc/how_i_run_34b_models_at_75k_context_on_24gb_fast/). 16GB GPUs can still run high context with aggressive quantization.
To load or train this model in full-context backends like transformers, you *must* change `max_position_embeddings` in config.json to a value lower than 200,000, otherwise you will OOM! I do not recommend running high context without context-efficient backends like exllamav2 or unsloth.
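For instance, here is a small sketch of capping `max_position_embeddings` in a local copy of the model before loading it with transformers; the directory name and the 32K cap are placeholders, not recommendations:
```python
import json
from pathlib import Path

# Placeholder path to a local download of this model.
config_path = Path("Yi-34B-200K-DARE-merge-v7") / "config.json"

config = json.loads(config_path.read_text())
config["max_position_embeddings"] = 32768  # example value well below 200,000
config_path.write_text(json.dumps(config, indent=2))
```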
## Testing Notes
See: https://huggingface.co/brucethemoose/Yi-34B-200K-DARE-merge-v5#testing-notes
A "4k" merge model was created to try to extend the context of SUS Chat and DPO-bagel before adding them to the merge: https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
In addition, the weight gradients are biased towards Vicuna-format models in the first few layers to try to "emphasize" the Orca-Vicuna prompt template. How successful this is remains to be seen.
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama as the base.
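For intuition, here is a toy sketch of the DARE step on a single tensor, under the assumption that it is faithful to the paper's description: drop a random fraction of the delta (fine-tuned minus base) weights and rescale the survivors so the expected magnitude is preserved. The `density` value mirrors the ones in the config below; mergekit performs the real merge.
```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Toy DARE step: keep roughly `density` of the delta entries at random
    and rescale survivors by 1/density to preserve the expected magnitude."""
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

# Illustrative tensors standing in for base and fine-tuned weights.
base = torch.randn(4, 4)
finetuned = base + 0.01 * torch.randn(4, 4)
merged = base + dare_sparsify(finetuned - base, density=0.59)
```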
### Models Merged
The following models were included in the merge:
* https://huggingface.co/kyujinpy/PlatYi-34B-200k-Q-FastChat
* https://huggingface.co/jondurbin/bagel-34b-v0.2
* https://huggingface.co/NousResearch/Nous-Capybara-34B
* https://huggingface.co/migtissera/Tess-M-Creative-v1.0
* https://huggingface.co/brucethemoose/SUS-Bagel-200K-DARE-Test
* https://huggingface.co/Mihaiii/Pallas-0.5
* https://huggingface.co/bhenrym14/airoboros-3_1-yi-34b-200k
* https://huggingface.co/adamo1139/Yi-34B-200K-AEZAKMI-v2
* https://huggingface.co/migtissera/Tess-34B-v1.4
* https://huggingface.co/SUSTech/SUS-Chat-34B
* https://huggingface.co/jondurbin/bagel-dpo-34b-v0.2
* https://huggingface.co/chargoddard/Yi-34B-200K-Llama
* https://huggingface.co/chargoddard/Yi-34B-Llama
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/migtissera_Tess-34B-v1.4
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha/Models/Raw/Mihaiii_Pallas-0.5
    parameters:
      weight: [0.23, 0.125, 0.125, 0.125, 0.125, 0.125]
      density: 0.59
  - model: /home/alpha//Storage/Models/Raw/bhenrym14_airoboros-3_1-yi-34b-200k
    parameters:
      weight: [0.02, 0.106, 0.106, 0.106, 0.106, 0.106]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/jondurbin_bagel-34b-v0.2
    # Only the SFT in the main merge, since the DPO version seems to have no long-context ability at all
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.4
  - model: /home/alpha/Storage/Models/Raw/kyujinpy_PlatYi-34B-200k-Q-FastChat
    parameters:
      weight: [0.02, 0.100, 0.100, 0.100, 0.100, 0.100]
      density: 0.59
  #- model: /home/alpha/Storage/Models/Raw/ehartford_dolphin-2.2-yi-34b-200k
  #  # Dolphin 200K seems to be funky according to multiple leaderboards and perplexity tests?
  #  parameters:
  #    weight: 0.15
  #    density: 0.6
  - model: /home/alpha/Models/Raw/adamo1139_Yi-34B-200K-AEZAKMI-v2
    parameters:
      weight: [0.02, 0.110, 0.110, 0.110, 0.110, 0.110]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/Nous-Capybara-34B
    parameters:
      weight: [0.22, 0.126, 0.126, 0.126, 0.126, 0.126]
      density: 0.59
  - model: /home/alpha/Storage/Models/Raw/4kmerge
    parameters:
      weight: [0.02, 0.108, 0.108, 0.108, 0.108, 0.108]
      density: 0.5
  - model: /home/alpha/Models/Raw/migtissera_Tess-M-Creative-v1.0
    parameters:
      weight: [0.22, 0.100, 0.100, 0.100, 0.100, 0.10]
      density: 0.59
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
The following config was used for the "4kmerge" model:
```yaml
models:
  - model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
    # No parameters necessary for base model
  - model: /home/alpha/Storage/Models/Raw/chargoddard_Yi-34B-200K-Llama
    parameters:
      weight: 0.5
      density: 1
  - model: /home/alpha/Models/Raw/SUSTech_SUS-Chat-34B
    parameters:
      weight: 0.2
      density: 0.12
  - model: /home/alpha/Models/Raw/jondurbin_bagel-dpo-34b-v0.2
    parameters:
      weight: 0.2
      density: 0.15
  - model: /home/alpha/Models/Raw/jondurbin_bagel-34b-v0.2
    parameters:
      weight: 0.1
      density: 0.12
merge_method: dare_ties
tokenizer_source: union
base_model: /home/alpha/Models/Raw/chargoddard_Yi-34B-Llama
parameters:
  int8_mask: true
dtype: bfloat16
```
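To reproduce a merge like this, save one of the configs above to a file and run it through mergekit. Below is a minimal sketch driving the `mergekit-yaml` CLI from Python; the file and output paths are placeholders, the local model paths inside the config must exist on your machine, and flag availability can vary by mergekit version:
```python
import subprocess

# Placeholder paths for the config file and output directory.
subprocess.run(
    ["mergekit-yaml", "merge-config.yaml", "./merged-model", "--cuda"],
    check=True,
)
```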
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_brucethemoose__Yi-34B-200K-DARE-merge-v7)
| Metric |Value|
|---------------------------------|----:|
|Avg. |73.12|
|AI2 Reasoning Challenge (25-Shot)|68.09|
|HellaSwag (10-Shot) |85.99|
|MMLU (5-Shot) |77.30|
|TruthfulQA (0-shot) |58.90|
|Winogrande (5-shot) |83.11|
|GSM8k (5-shot) |65.35|