|
---
language:
- en
- ru
license: llama2
tags:
- merge
- mergekit
- nsfw
- not-for-all-audiences
model-index:
- name: Gembo-v1-70b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 71.25
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 86.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 70.85
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 63.25
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 80.51
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 50.19
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChuckMcSneed/Gembo-v1-70b
      name: Open LLM Leaderboard
---
|
![logo-gembo.png](logo-gembo.png) |
|
This is my first "serious" (with practical use cases) experimental merge. Judge harshly. Mainly made for RP, but it should be okay as an assistant. It turned out quite good, considering the number of LoRAs I merged into it.
|
|
|
# Observations |
|
- GPTisms and repetition: raise temperature and repetition penalty, and register common GPTisms as stop sequences (see the sketch after this list)
|
- A bit different from the usual stuff; I'd say it has so much slop in it that it unslops itself
|
- Lightly censored |
|
- Fairly neutral; it can get violent if you prompt for it hard enough, though Goliath is a bit better at it
|
- Has a bit of optimism baked in, but it's not very severe |
|
- Doesn't know when to stop: it can be quite verbose or stop almost immediately (maybe it wants LimaRP settings, idk)
|
- Sometimes can't handle the ' character
|
- Second model that's tried to be funny at me unprompted (the first one was Goliath)
|
- Moderately intelligent |
|
- Quite creative |
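
To make the stop-sequence tip above concrete, here is what registering GPTisms might look like. The strings are purely illustrative guesses at common slop phrases, not a list from the author; add whatever phrases the model actually repeats at you:

```python
# Illustrative GPTism stop sequences -- the exact strings are my guesses,
# not an official list. Feed them to whatever stop/ban list your frontend
# (SillyTavern, text-generation-webui, etc.) exposes.
GPTISM_STOP_SEQUENCES = [
    "As an AI language model",
    "I cannot continue with",
    "shivers down",
    "bonds of trust",
]
```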
|
|
|
# Naming |
|
The internal name of this model was euryale-guano-saiga-med-janboros-kim-lima-wiz-tony-d30-s40, but I decided to keep it short; since it was iteration G in my files, I called it "Gembo".
|
|
|
# Quants |
|
Thanks for the GGUF quants, [@Artefact2](https://huggingface.co/Artefact2)!
|
- [GGUF](https://huggingface.co/Artefact2/Gembo-v1-70b-GGUF) |
|
|
|
# Prompt format |
|
Alpaca. You can also try some other formats; I'm pretty sure it picked up a lot of them from all those merges.
|
```
### Instruction:
{instruction}

### Response:
```
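
For scripted use, a minimal helper that produces the template above might look like this (the function name is mine):

```python
def alpaca_prompt(instruction: str) -> str:
    """Wrap user text in the Alpaca template shown above."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"
```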
|
|
|
# Settings |
|
As I already mentioned, high temperature and repetition penalty work great.
|
For RP try something like this: |
|
- temperature=5 |
|
- MinP=0.10 |
|
- rep.pen.=1.15 |
|
|
|
Adjust to match your needs. |
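
Wired together with llama-cpp-python, the recommended settings might look like the sketch below. This is my illustration, not the author's setup: the model path and prompt are placeholders, and `min_p` needs a reasonably recent llama-cpp-python build.

```python
from llama_cpp import Llama

# Placeholder path: point this at one of Artefact2's GGUF files.
llm = Llama(model_path="Gembo-v1-70b.Q5_K_M.gguf", n_ctx=4096)

prompt = "### Instruction:\nWrite a short scene in a haunted library.\n\n### Response:\n"
out = llm.create_completion(
    prompt,
    max_tokens=512,
    temperature=5.0,      # high temperature, as recommended above
    min_p=0.10,           # MinP keeps sampling coherent at high temperature
    repeat_penalty=1.15,  # rep. pen. from the RP settings above
    stop=["### Instruction:"],  # plus any GPTism stop strings you collected
)
print(out["choices"][0]["text"])
```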
|
|
|
|
|
# How it was created |
|
I took Sao10K/Euryale-1.3-L2-70B (a good base model) and added the following LoRAs (a sketch of one way to apply such a stack follows the list):
|
- Mikael110/llama-2-70b-guanaco-qlora (Creativity+assistant) |
|
- IlyaGusev/saiga2_70b_lora (Creativity+assistant) |
|
- s1ghhh/medllama-2-70b-qlora-1.1 (More data) |
|
- v2ray/Airoboros-2.1-Jannie-70B-QLoRA (Creativity+assistant) |
|
- Chat-Error/fiction.live-Kimiko-V2-70B (Creativity) |
|
- Doctor-Shotgun/limarpv3-llama2-70b-qlora (Creativity) |
|
- v2ray/LLaMA-2-Wizard-70B-QLoRA (Creativity+assistant) |
|
- v2ray/TonyGPT-70B-QLoRA (Special spice) |
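
The card doesn't say exactly how the adapters were applied, so here is a minimal sketch with peft, assuming each adapter is merged into the weights sequentially; the output name is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model (it's 70B: expect multiple GPUs or offloading).
model = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Euryale-1.3-L2-70B", torch_dtype=torch.float16, device_map="auto"
)

adapters = [
    "Mikael110/llama-2-70b-guanaco-qlora",
    "IlyaGusev/saiga2_70b_lora",
    # ...and the rest of the adapters listed above.
]
for adapter in adapters:
    model = PeftModel.from_pretrained(model, adapter)
    model = model.merge_and_unload()  # fold the LoRA deltas into the base weights

model.save_pretrained("euryale-plus-loras")  # hypothetical output name
```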
|
|
|
Then I SLERP-merged it with cognitivecomputations/dolphin-2.2-70b (needed to bridge the gap between this wonderful mess and SMaxxxer, otherwise its quality is low) at t=0.3, and then SLERP-merged the result with ChuckMcSneed/SMaxxxer-v1-70b (creativity) at t=0.4. For the SLERP merges I used https://github.com/arcee-ai/mergekit.
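
The actual mergekit configs aren't published; a plausible reconstruction of the first SLERP step (the dolphin merge at t=0.3) is sketched below. The file name, layer range (80 layers for a Llama-2-70B), and dtype are my assumptions:

```yaml
# config-slerp-dolphin.yml (hypothetical name) -- run with:
#   mergekit-yaml config-slerp-dolphin.yml ./gembo-intermediate
merge_method: slerp
base_model: ./euryale-plus-loras  # the LoRA-stacked model from above
slices:
  - sources:
      - model: ./euryale-plus-loras
        layer_range: [0, 80]
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [0, 80]
parameters:
  t: 0.3  # the 0.3t described above
dtype: float16
```

The second step would repeat this with ChuckMcSneed/SMaxxxer-v1-70b at `t: 0.4`.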
|
|
|
# Benchmarks (Do they even mean anything anymore?) |
|
### NeoEvalPlusN_benchmark |
|
[My meme benchmark.](https://huggingface.co/datasets/ChuckMcSneed/NeoEvalPlusN_benchmark) |
|
| Test name | Gembo |
| --------- | ----- |
| B | 2.5 |
| C | 1.5 |
| D | 3 |
| S | 7.5 |
| P | 5.25 |
| Total | 19.75 |
|
|
|
Absurdly high. That's what happens when you optimize the merges for a benchmark. |
|
|
|
|
|
### WolframRavenwolf |
|
Benchmark by [@wolfram](https://huggingface.co/wolfram) |
|
|
|
Artefact2/Gembo-v1-70b-GGUF Q5_K_M, 4K context, Alpaca format:
|
- ✅ Gave correct answers to all 18/18 multiple-choice questions! With just the questions and no previous information: 16/18
|
- ✅ Consistently acknowledged all data input with "OK". |
|
- ➖ Did NOT follow instructions to answer with just a single letter or more than just a single letter. |
|
|
|
This shows that the model can handle real-world use cases as an assistant.
|
|
|
### [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
|
|Model |Average|ARC |HellaSwag|MMLU |TruthfulQA|Winogrande|GSM8K|
|--------------------------------|-------|-----|---------|-----|----------|----------|-----|
|ChuckMcSneed/Gembo-v1-70b |70.51 |71.25|86.98 |70.85|63.25 |80.51 |50.19|
|ChuckMcSneed/SMaxxxer-v1-70b |72.23 |70.65|88.02 |70.55|60.70 |82.87 |60.58|
|
|
|
Looks like adding a shitton of RP stuff decreased HellaSwag, Winogrande and GSM8K, but increased TruthfulQA, MMLU and ARC. Interesting. To be honest, I'm a bit surprised that it didn't do much worse.
|
|
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ChuckMcSneed__Gembo-v1-70b) |
|
|
|
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.51|
|AI2 Reasoning Challenge (25-Shot)|71.25|
|HellaSwag (10-Shot) |86.98|
|MMLU (5-Shot) |70.85|
|TruthfulQA (0-shot) |63.25|
|Winogrande (5-shot) |80.51|
|GSM8k (5-shot) |50.19|
|
|
|
|