The model was fine-tuned on a dataset by Ilya Gusev. This is the quantized version. Tuning was done with Unsloth:

==((====))==  Unsloth 2024.9.post4: Fast Qwen2 patching. Transformers = 4.44.2.
   \\   /|    GPU: Tesla P40. Max memory: 23.866 GB. Platform = Linux.
O^O/ \_/ \    Pytorch: 2.5.1+cu124. CUDA = 6.1. CUDA Toolkit = 12.4.
\        /    Bfloat16 = FALSE. FA [Xformers = 0.0.27.post2. FA2 = False]
 "-____-"     Free Apache license: http://github.com/unslothai/unsloth
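The card does not include the training script, so below is only a minimal sketch of the kind of Unsloth run the banner describes. The dataset file, LoRA settings, and training hyperparameters are illustrative placeholders, not the values actually used; only the base model, the Unsloth/Transformers versions, and the fp16 (no bfloat16) setup come from the banner above.

```python
# Sketch of an Unsloth LoRA fine-tune of Qwen2.5-3B followed by GGUF export.
# Dataset path, LoRA ranks and training arguments are placeholders, not the
# settings used for this model.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the base model; fp16 is used because the banner reports Bfloat16 = FALSE (Tesla P40).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-3B",
    max_seq_length=max_seq_length,
    dtype=None,          # Unsloth falls back to fp16 on pre-Ampere GPUs
    load_in_4bit=True,   # QLoRA-style 4-bit loading to fit a 24 GB card
)

# Attach LoRA adapters (generic Unsloth defaults, not the card's actual config).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset with a "text" column of already-formatted prompts.
dataset = load_dataset("json", data_files="ru_instruct.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        fp16=True,
        output_dir="outputs",
    ),
)
trainer.train()

# Export a quantized GGUF, assuming "q6_k" is among the supported
# quantization_method values (matches the published Q6_K file).
model.save_pretrained_gguf("qwen2.5_3b_ru", tokenizer, quantization_method="q6_k")
```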
Format: GGUF
Model size: 3.09B params
Architecture: qwen2
Available quantizations: 5-bit, 6-bit
The model is not currently available through any supported third-party Inference Provider, so the GGUF files are meant to be run locally (for example with llama.cpp); see the sketch below.
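A minimal local-inference sketch with llama-cpp-python, assuming the Q6_K file has already been downloaded from the repo; the local file name, the context size, and the prompt are placeholders, and the GPU-offload flag assumes a CUDA build of llama.cpp.

```python
# Minimal local inference with llama-cpp-python (pip install llama-cpp-python).
# Download the Q6_K GGUF first, e.g.:
#   huggingface-cli download andretisch/qwen2.5_3b_ru_Q6_K.gguf
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5_3b_ru_Q6_K.gguf",  # placeholder path to the downloaded file
    n_ctx=2048,
    n_gpu_layers=-1,   # offload all layers if llama.cpp was built with CUDA support
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Привет! Расскажи коротко о себе."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```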

Model tree for andretisch/qwen2.5_3b_ru_Q6_K.gguf
Base model: Qwen/Qwen2.5-3B (this model is a quantized derivative)
Dataset used to train andretisch/qwen2.5_3b_ru_Q6_K.gguf: a dataset by Ilya Gusev (see above).