This model is a LoRA, trained on a thinking/reasoning and roleplaying dataset, merged into Qwen2.5-7B-Instruct-1M, which supports context lengths of up to 1 million tokens.

What this Model Can Do:

  • Roleplay: Engage in creative conversations and storytelling!
  • Reasoning: Tackle problems and answer questions logically (thanks to the LoRA layer).
  • Thinking: Include the <think> tag in your system prompt to activate the model's thinking mode (see the usage sketch below).
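
The card does not include usage code; the following is a minimal sketch assuming the standard transformers chat-template workflow and that a <think> cue in the system prompt is enough to trigger thinking mode (the exact system wording is an assumption, not from the card):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Qwen2.5-7B-RRP-1M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    # Hypothetical system prompt; the card only says to use the <think> tag.
    {"role": "system", "content": "You are a roleplay assistant. Reason inside <think>...</think> before each reply."},
    {"role": "user", "content": "You are a detective in 1920s London. Describe your first case."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))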

Merge Method

This model was merged with the passthrough merge method, using Qwen/Qwen2.5-7B-Instruct-1M + bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora as the base.

Models Merged

This merge applies a LoRA adapter to a single base model rather than combining multiple full models:

  • Qwen/Qwen2.5-7B-Instruct-1M (base model)
  • bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora (LoRA adapter)

Configuration

The following YAML configuration was used to produce this model:


base_model: Qwen/Qwen2.5-7B-Instruct-1M+bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora
dtype: bfloat16
merge_method: passthrough
models:
  - model: Qwen/Qwen2.5-7B-Instruct-1M+bunnycore/Qwen-2.5-7B-1M-RRP-v1-lora
tokenizer_source: Qwen/Qwen2.5-7B-Instruct-1M
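
The card does not say how this config was run; as a sketch, the YAML above can be saved as config.yaml and fed to mergekit's mergekit-yaml CLI (the output path and --cuda flag are illustrative):

pip install mergekit
mergekit-yaml config.yaml ./Qwen2.5-7B-RRP-1M --cuda

With the "+" syntax, passthrough does not blend multiple models: mergekit first applies the LoRA adapter to the base weights, then passes the resulting tensors through unchanged and copies the tokenizer from Qwen/Qwen2.5-7B-Instruct-1M.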

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric               Value
-------------------  -----
Avg.                 32.96
IFEval (0-Shot)      74.81
BBH (3-Shot)         35.65
MATH Lvl 5 (4-Shot)  28.17
GPQA (0-Shot)         7.05
MuSR (0-Shot)        15.80
MMLU-PRO (5-Shot)    36.29
Model size: 7.61B params (BF16, Safetensors)