Phi4 Turn R1Distill LoRA Adapters

Overview

These LoRA adapters were trained using diverse reasoning datasets that incorporate structured Thought and Solution responses to enhance logical inference. This project was designed to test the R1 dataset on Phi-4, aiming to create a lightweight, fast, and efficient reasoning model.

All adapters were fine-tuned on an NVIDIA A800 GPU and are suitable for continued training, merging into the base model, or direct deployment.
As part of an open-source initiative, all resources are publicly available for unrestricted research and development.


LoRA Adapters

Below are the currently available LoRA fine-tuned adapters (as of January 30, 2025):


GGUF Full & Quantized Models

To facilitate broader testing and real-world inference, GGUF Full and Quantized versions have been provided for evaluation on Open WebUI and other LLM interfaces.

Version 1

Version 1.1

Version 1.2

Version 1.3

Version 1.4

Version 1.5
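The GGUF files can be run with any llama.cpp-based tool. Below is a minimal sketch using the llama-cpp-python bindings; the file name is a hypothetical placeholder for whichever quantized file you download:

```python
# Hypothetical file name; replace with the path of the GGUF file you downloaded.
MODEL_PATH = "Phi4.Turn.R1Distill.Q4_K_M.gguf"

def run_gguf(prompt: str, max_tokens: int = 256) -> str:
    """Load the quantized model and complete a prompt."""
    # Imported here so the sketch can be read without llama-cpp-python installed.
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama(model_path=MODEL_PATH, n_ctx=4096)
    result = llm(prompt, max_tokens=max_tokens)
    return result["choices"][0]["text"]

# run_gguf("List the prime numbers between 1 and 20.")  # uncomment after downloading a file
```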


Usage

Loading LoRA Adapters with transformers and peft

To load and apply the LoRA adapters on Phi-4, use the following approach:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "microsoft/phi-4"
lora_adapter = "Quazim0t0/Phi4.Turn.R1Distill-Lora1"

# Load the base model and tokenizer, then apply the LoRA adapter on top.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")  # device_map="auto" requires accelerate
model = PeftModel.from_pretrained(model, lora_adapter)

model.eval()  # switch to inference mode
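Once the adapter is applied, inference goes through the standard `generate` API. A minimal sketch continuing from the snippet above (the generation settings are illustrative):

```python
import torch

def run_reasoning(model, tokenizer, prompt: str, max_new_tokens: int = 512) -> str:
    """Generate a response from the adapter-augmented model."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = output_ids[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For deployment without the peft dependency, `model.merge_and_unload()` folds the adapter weights into the base model so it can be saved as a plain transformers checkpoint.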

Model Details

Format: Safetensors
Model size: 14.7B params
Tensor type: BF16

Model Tree

Model: Quazim0t0/Phi4.Turn.R1Distill.16bit
Base model: microsoft/phi-4
Quantizations: 2 models
Dataset used to train Quazim0t0/Phi4.Turn.R1Distill.16bit