# Daredevil-7B
Daredevil-7B is a merge of the following models using LazyMergekit:

- samir-fama/SamirGPT-v1
- abacusai/Slerp-CM-mist-dpo
- EmbeddedLLM/Mistral-7B-Merge-14-v0.2
## 🧩 Configuration
```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: samir-fama/SamirGPT-v1
    parameters:
      density: 0.53
      weight: 0.4
  - model: abacusai/Slerp-CM-mist-dpo
    parameters:
      density: 0.53
      weight: 0.3
  - model: EmbeddedLLM/Mistral-7B-Merge-14-v0.2
    parameters:
      density: 0.53
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
```
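The card does not include the exact merge command, but a DARE-TIES config like the one above is typically consumed by mergekit's `mergekit-yaml` CLI. The sketch below is illustrative only; the output directory name and flags are assumptions, not taken from this card.

```python
# Illustrative sketch, not the exact command used to build Daredevil-7B.
# Assumes the YAML above has been saved as config.yaml in the working directory.
!pip install -qU mergekit

# Run the merge and write the merged weights to ./Daredevil-7B (path is arbitrary).
!mergekit-yaml config.yaml ./Daredevil-7B --copy-tokenizer
```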
## 💻 Usage
```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "shadowml/Daredevil-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 73.36 |
| AI2 Reasoning Challenge (25-Shot) | 69.37 |
| HellaSwag (10-Shot)               | 87.17 |
| MMLU (5-Shot)                     | 65.30 |
| TruthfulQA (0-shot)               | 64.09 |
| Winogrande (5-shot)               | 81.29 |
| GSM8k (5-shot)                    | 72.93 |
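These are the standard Open LLM Leaderboard tasks, which are scored with EleutherAI's lm-evaluation-harness. A hedged sketch of reproducing a single task locally is shown below; the leaderboard's exact harness revision, generation settings, and batch size may differ, so local numbers can deviate slightly.

```python
# Illustrative sketch using the lm-evaluation-harness CLI (not the leaderboard's exact setup).
!pip install -qU lm-eval

# Evaluate ARC-Challenge with 25-shot prompting, matching the table above.
!lm_eval --model hf \
    --model_args pretrained=shadowml/Daredevil-7B,dtype=bfloat16 \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 8
```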