<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Na0s/Llama-3.1-8B-Pruned-4-Layers_LoRA-PEFT-DPO