---
license: apache-2.0
tags:
- uncensored
model-index:
- name: Mistral-CatMacaroni-slerp-uncensored
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 64.25
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.09
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 62.66
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 56.87
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.72
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=diffnamehard/Mistral-CatMacaroni-slerp-uncensored
      name: Open LLM Leaderboard
---

This is an experimental model.

Finetuned from [Mistral-CatMacaroni-slerp-7B](https://huggingface.co/diffnamehard/Mistral-CatMacaroni-slerp-7B) on the [toxic-dpo-v0.1-NoWarning-alpaca](https://huggingface.co/datasets/diffnamehard/toxic-dpo-v0.1-NoWarning-alpaca) dataset.
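
Here is a minimal loading and generation sketch with `transformers`. The Alpaca-style prompt template is an assumption based on the Alpaca-formatted fine-tuning dataset rather than a documented requirement, and the example instruction is purely illustrative.

```python
# Minimal sketch: load the model and run one Alpaca-style prompt.
# device_map="auto" requires the `accelerate` package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "diffnamehard/Mistral-CatMacaroni-slerp-uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model in fp16 needs roughly 15 GB of VRAM
    device_map="auto",
)

# Assumed Alpaca-style template, matching the fine-tuning dataset's format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what a SLERP merge of two models does.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```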

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_diffnamehard__Mistral-CatMacaroni-slerp-uncensored).
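
These scores come from EleutherAI's lm-evaluation-harness, run by the leaderboard with the per-benchmark settings recorded in the metadata above. As a sketch of how the ARC-Challenge number could be re-checked locally, assuming the harness's v0.4.x Python API and its `arc_challenge` task (the leaderboard pins its own harness revision, so small deviations from the reported values are possible):

```python
# Sketch: re-running the 25-shot ARC-Challenge eval with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Assumes the v0.4.x API.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=diffnamehard/Mistral-CatMacaroni-slerp-uncensored",
    tasks=["arc_challenge"],  # harness task backing "AI2 Reasoning Challenge"
    num_fewshot=25,           # matches num_few_shot: 25 in the metadata
)

# Print the per-task metrics; acc_norm is the value the leaderboard reports.
print(results["results"]["arc_challenge"])
```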

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.28 |
| AI2 Reasoning Challenge (25-Shot) | 64.25 |
| HellaSwag (10-Shot)               | 84.09 |
| MMLU (5-Shot)                     | 62.66 |
| TruthfulQA (0-shot)               | 56.87 |
| Winogrande (5-shot)               | 79.72 |
| GSM8k (5-shot)                    | 56.10 |
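
The `Avg.` row is the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Sanity check: the leaderboard average is the plain mean of the six scores.
scores = [64.25, 84.09, 62.66, 56.87, 79.72, 56.10]
print(f"{sum(scores) / len(scores):.2f}")  # -> 67.28
```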