---
library_name: transformers
base_model:
- Qwen/Qwen2.5-32B-Instruct
license: apache-2.0
---
# Rombos-LLM-V2.5-Qwen-32b

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642cc1c253e76b4c2286c58e/hXnQV6WtMKrmIQPdjECSX.jpeg)

Rombos-LLM-V2.5-Qwen-32b is a continuously finetuned version of Qwen2.5-32B. I noticed recently that the Qwen team did not apply my continuous finetuning method, which in my experience offers clear benefits with no downsides, so I took it upon myself to merge the instruct model with the base model using the *TIES* merge method.
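
The exact merge configuration is not published in this card; the sketch below shows roughly how such a TIES merge of the instruct model onto the base model can be run with mergekit. The density/weight values, dtype, and output path are assumptions for illustration, not the settings actually used for this model.

```python
# Sketch only: a TIES merge of Qwen2.5-32B-Instruct onto the Qwen2.5-32B base
# via mergekit. Config values (density, weight, dtype) are assumptions, not
# the exact settings used to produce Rombos-LLM-V2.5-Qwen-32b.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: ties
    base_model: Qwen/Qwen2.5-32B
    models:
      - model: Qwen/Qwen2.5-32B-Instruct
        parameters:
          density: 1.0   # keep all instruct deltas (assumed)
          weight: 1.0    # full contribution of the instruct deltas (assumed)
    parameters:
      normalize: true
    dtype: bfloat16
""")

with open("ties-merge.yml", "w") as f:
    f.write(config)

# mergekit-yaml is mergekit's CLI entry point; it writes the merged weights
# to the given output directory (the directory name here is arbitrary).
subprocess.run(
    ["mergekit-yaml", "ties-merge.yml", "./Rombos-LLM-V2.5-Qwen-32b"],
    check=True,
)
```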

This version of the model shows higher performance than the original instruct and base models. 
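
Since the card declares `library_name: transformers`, the model loads like any other Qwen2.5 checkpoint. Below is a minimal inference sketch; the repository id is assumed from the author's quant links and may differ from the actual upload path.

```python
# Minimal inference sketch with transformers; the repo id is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Rombos-LLM-V2.5-Qwen-32b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the checkpoint's native dtype
    device_map="auto",    # spread the 32B weights across available GPUs
)

messages = [{"role": "user", "content": "Explain the TIES merge method in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```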

Quants:

GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF

EXL2:

- 8-bit: https://huggingface.co/Apel-sin/rombos-llm-v2.5-qwen-32b-exl2
- 5-bit: https://huggingface.co/async0x42/Rombos-LLM-V2.5-Qwen-32b-exl2_5.0bpw
- 4.25-bit: https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b-Exl2-4.25-bit