---
license: cc-by-nc-2.0
tags:
- merge
- mergekit
- lazymergekit
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- eren23/ogno-monarch-jaskier-merge-7b
model-index:
- name: kuno-royale-7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.76
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.2
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 65.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 71.12
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 82.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 69.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=core-3/kuno-royale-7B
      name: Open LLM Leaderboard
---

# kuno-royale-7B

[v2 is probably better](https://huggingface.co/core-3/kuno-royale-v2-7b) 🤷

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
|-------|---------|-----|-----------|------|------------|------------|-------|
| eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO | 76.45 | 73.12 | 89.09 | 64.80 | 77.45 | 84.77 | 69.45 |
| [core-3/kuno-royale-v2-7b](https://huggingface.co/core-3/kuno-royale-v2-7b) | 74.80 | 72.01 | 88.15 | 65.07 | 71.10 | 82.24 | 70.20 |
| **core-3/kuno-royale-7B** | **74.74** | **71.76** | **88.20** | **65.13** | **71.12** | **82.32** | **69.90** |
| SanjiWatsuki/Kunoichi-DPO-v2-7B | 72.46 | 69.62 | 87.44 | 64.94 | 66.06 | 80.82 | 65.88 |
| SanjiWatsuki/Kunoichi-7B | 72.13 | 68.69 | 87.10 | 64.90 | 64.04 | 81.06 | 67.02 |
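
All scores above come from the Open LLM Leaderboard. As a rough local spot-check of a single benchmark, something like the sketch below should work, assuming EleutherAI's lm-evaluation-harness (`pip install lm-eval`, v0.4+) is installed; the leaderboard pins its own harness version and settings, so local numbers may differ slightly from the table.

```python
# Hypothetical spot-check of the ARC-Challenge score with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). The Open LLM Leaderboard
# pins its own harness version and settings, so results may not match
# the table exactly.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=core-3/kuno-royale-7B,dtype=bfloat16",
    tasks=["arc_challenge"],  # 25-shot, as on the leaderboard
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```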

## Original LazyMergekit Card:

kuno-royale-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: eren23/ogno-monarch-jaskier-merge-7b
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
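
In a slerp merge, `t` interpolates between the base model (`t = 0`) and the other model (`t = 1`); here it varies across layer blocks, with separate curves for the attention and MLP weights. To reproduce the merge, the config above can be fed to [mergekit](https://github.com/arcee-ai/mergekit). A minimal sketch, assuming mergekit is installed and the YAML is saved as `config.yaml` (available CLI flags vary between mergekit versions):

```python
# Minimal sketch: run the slerp merge above via mergekit-yaml, mergekit's
# CLI entry point. Assumes mergekit is installed and the config is saved
# as config.yaml; flags such as --cuda vary between mergekit versions.
import subprocess

subprocess.run(
    ["mergekit-yaml", "config.yaml", "./kuno-royale-7B"],
    check=True,
)
```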

## 💻 Usage

```python
# pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "core-3/kuno-royale-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the request with the model's chat template, then generate with a
# standard text-generation pipeline.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```