
QuantFactory/Rombos-LLM-V2.5-Qwen-7b-GGUF

This is a quantized version of rombodawg/Rombos-LLM-V2.5-Qwen-7b, created using llama.cpp.

Original Model Card

Rombos-LLM-V2.5-Qwen-7b


Rombos-LLM-V2.5-Qwen-7b is a continuously finetuned version of Qwen2.5-7B. I noticed recently that the Qwen team did not adopt my continuous-finetuning methods, despite their great benefits and lack of downsides. So I took it upon myself to merge the instruct model with the base model using the TIES merge method.
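A TIES merge of an instruct model back onto its base is commonly expressed as a mergekit configuration. The exact parameters used for this model were not published, so the `density` and `weight` values below are illustrative assumptions, not the author's settings:

```yaml
# Hypothetical mergekit TIES config; density/weight are illustrative only.
models:
  - model: Qwen/Qwen2.5-7B-Instruct
    parameters:
      density: 0.5   # fraction of delta parameters kept before sign election
      weight: 1.0
merge_method: ties
base_model: Qwen/Qwen2.5-7B
parameters:
  normalize: true
dtype: bfloat16
```

With mergekit installed, a config like this would be run with `mergekit-yaml config.yml ./merged-model`.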

This version of the model shows higher performance than the original instruct and base models.

Quants:

GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-7b-GGUF

Benchmarks: (Coming soon)

Format: GGUF
Model size: 7.62B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
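A rough way to anticipate the file size of each quantization level is to multiply the parameter count by the nominal bits per weight. This is a sketch, not an exact accounting: real GGUF K-quants mix precisions across tensors, so actual files deviate somewhat from these estimates.

```python
# Rough GGUF size estimate: params * bits-per-weight / 8 bytes.
# Nominal bits only; actual K-quant files keep some tensors at
# higher precision, so real sizes differ somewhat.
PARAMS = 7.62e9  # parameter count from the model card

def approx_size_gb(bits_per_weight: float, params: float = PARAMS) -> float:
    """Return the approximate file size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_weight / 8 / 1e9

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gb(bits):.1f} GB")
```

For example, the 4-bit estimate comes out near 3.8 GB and the 8-bit estimate near 7.6 GB, which matches the intuition that an 8-bit quant is roughly one byte per parameter.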


Model tree for QuantFactory/Rombos-LLM-V2.5-Qwen-7b-GGUF

Base model: Qwen/Qwen2.5-7B (this model is one of its quantizations)