---
license: llama3.1
library_name: transformers
tags:
- mergekit
- merge
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Llama-8B
- nbeerbower/Llama3.1-Allades-8B
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
model-index:
- name: Distilled-DarkPlanet-Allades-8B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 34.6
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 23.04
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 33.84
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.38
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 3.92
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 21.13
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Distilled-DarkPlanet-Allades-8B
      name: Open LLM Leaderboard
---

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [nbeerbower/Llama3.1-Allades-8B](https://huggingface.co/nbeerbower/Llama3.1-Allades-8B) as the base model.
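In brief, TIES computes each model's "task vector" (its delta from the base), trims each delta to its highest-magnitude entries (controlled by the `density` parameter in the configuration below), elects a majority sign per parameter, and averages only the deltas that agree with that sign. For intuition, here is a minimal per-tensor sketch of that procedure in PyTorch; it illustrates the paper's algorithm, not mergekit's actual implementation, and `ties_merge` is a hypothetical helper:

```python
import torch

def ties_merge(base, finetuned, density=0.5):
    """Per-tensor sketch of TIES-merging (Yadav et al., 2023).
    `base` is the base model's weight tensor; `finetuned` is a list of
    fine-tuned weight tensors of the same shape."""
    # 1. Task vectors: each model's delta from the base.
    deltas = [ft - base for ft in finetuned]
    # 2. Trim: keep only the top-`density` fraction of each delta by
    #    magnitude, zeroing the rest.
    trimmed = []
    for d in deltas:
        k = max(int(density * d.numel()), 1)
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
    # 3. Elect signs: the sign of the summed trimmed deltas is the sign
    #    with the larger total magnitude per parameter.
    sign = torch.stack(trimmed).sum(dim=0).sign()
    # 4. Merge: average only the trimmed entries that agree with the
    #    elected sign.
    agree = [torch.where(d.sign() == sign, d, torch.zeros_like(d)) for d in trimmed]
    counts = torch.stack([(a != 0).float() for a in agree]).sum(dim=0).clamp(min=1)
    return base + torch.stack(agree).sum(dim=0) / counts

# Toy example: merge two fine-tunes of a 4x4 weight at density 0.5.
base = torch.zeros(4, 4)
merged = ties_merge(base, [torch.randn(4, 4), torch.randn(4, 4)], density=0.5)
```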
### Models Merged

The following models were included in the merge:

* [deepseek-ai/DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B)
* [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
    parameters:
      density: 0.5
      weight: 0.5
  - model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
    parameters:
      density: 0.5
      weight: 0.5
merge_method: ties
base_model: nbeerbower/Llama3.1-Allades-8B
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Distilled-DarkPlanet-Allades-8B-details).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.65 |
| IFEval (0-Shot)     | 34.60 |
| BBH (3-Shot)        | 23.04 |
| MATH Lvl 5 (4-Shot) | 33.84 |
| GPQA (0-shot)       |  7.38 |
| MuSR (0-shot)       |  3.92 |
| MMLU-PRO (5-shot)   | 21.13 |
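### Usage

The merged model loads like any other Llama-3.1-based `transformers` checkpoint. A minimal usage sketch, assuming the weights are published under the repository id used in the leaderboard links above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id assumed from the leaderboard query links above.
model_id = "Triangle104/Distilled-DarkPlanet-Allades-8B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Build a chat prompt with the tokenizer's chat template and generate.
messages = [{"role": "user", "content": "Explain TIES merging in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```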