Rombos-LLM-V2.6-Qwen-14b
Rombos-LLM-V2.6-Qwen-14b is the upgraded version of "rombodawg/Rombos-LLM-V2.5-Qwen-14b". The magic I performed to make this model better than it already was is known only to the deepest state, the dankest memers, and God himself, so don't ask 😉. From my hand testing, it does perform a decent bit better than version 2.5. Benchmarks will come later.
Check out the Continuous Finetuning method that I apply to all my models below:
Quants:
https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b-Q8_0-GGUF
https://huggingface.co/rombodawg/Rombos-LLM-V2.6-Qwen-14b-Q5_K_M-GGUF
https://huggingface.co/bartowski/Rombos-LLM-V2.6-Qwen-14b-GGUF
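The GGUF quants above can be run locally with llama-cpp-python. A minimal sketch, assuming the ChatML prompt format (standard for Qwen2.5-based models) and the Q8_0 repo from the links above; the exact `.gguf` filename inside the repo is matched with a glob and may differ:

```python
# Sketch: run one of the GGUF quants above with llama-cpp-python.
# Assumptions: ChatML prompt format (standard for Qwen2.5-based models)
# and the Q8_0 quant repo listed above.

def chatml_prompt(system: str, user: str) -> str:
    """Build a ChatML prompt, the format Qwen2.5 models are trained on."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

def generate(prompt: str) -> str:
    """Download the Q8_0 quant from the Hub and run one completion.
    Requires: pip install llama-cpp-python huggingface_hub"""
    from llama_cpp import Llama  # heavy dependency, imported lazily

    llm = Llama.from_pretrained(
        repo_id="rombodawg/Rombos-LLM-V2.6-Qwen-14b-Q8_0-GGUF",
        filename="*Q8_0*.gguf",  # glob; the exact filename may differ
        n_ctx=4096,
    )
    out = llm(prompt, max_tokens=128, stop=["<|im_end|>"])
    return out["choices"][0]["text"]
```

Calling `generate(chatml_prompt("You are a helpful assistant.", "Hello!"))` downloads the full quant on first use, so expect a large one-time download.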
Benchmarks:
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 35.89 |
| IFEval (0-Shot) | 52.14 |
| BBH (3-Shot) | 49.22 |
| MATH Lvl 5 (4-Shot) | 28.85 |
| GPQA (0-shot) | 17.00 |
| MuSR (0-shot) | 19.26 |
| MMLU-PRO (5-shot) | 48.85 |
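The Avg. row is simply the arithmetic mean of the six benchmark scores, which is easy to verify:

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "IFEval (0-Shot)": 52.14,
    "BBH (3-Shot)": 49.22,
    "MATH Lvl 5 (4-Shot)": 28.85,
    "GPQA (0-shot)": 17.00,
    "MuSR (0-shot)": 19.26,
    "MMLU-PRO (5-shot)": 48.85,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # → 35.89
```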
Model tree for Apel-sin/rombos-llm-v2.6-qwen-14b-exl2:
- Base model: Qwen/Qwen2.5-14B
- Finetuned: Qwen/Qwen2.5-14B-Instruct
- Finetuned: rombodawg/Rombos-LLM-V2.6-Qwen-14b