Nekochu committed
Commit 2813a1a
1 parent: f483d51

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
README.md CHANGED
@@ -20,7 +20,7 @@ language:
 
  - Similar to the old [Nekochu/Llama-2-13B-fp16-french](https://huggingface.co/Nekochu/Llama-2-13B-fp16-french) with additional datasets.
  - As always, I've kept the LoRA `QLoRA_french_dpo` so it can be applied to any *LLaMA-3.1-8B* fine-tuned model, though this may affect performance.
- - Quants: exllamav2 [2.4bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/8.0bpw-h8) | [GGUF](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/gguf) Q4_K_M, IQ4_XS...
+ - Quants: exl2 [2.4bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/2.4bpw-h6), [4.25bpw-h6](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/4.25bpw-h6), [8.0bpw-h8](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/8.0bpw-h8) | [GGUF](https://huggingface.co/Nekochu/Llama-3.1-8B-french-DPO/tree/gguf) Q4_K_M, IQ4_XS...
 
  <details>
  <summary>This training can be replicated using LLaMA-Factory.</summary>
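The README above says the `QLoRA_french_dpo` LoRA is kept separate so it can be applied to any LLaMA-3.1-8B fine-tune. Below is a minimal sketch of how that could look with `transformers` + `peft`; the base model ID and the adapter location (`Nekochu/Llama-3.1-8B-french-DPO`) are assumptions, not paths confirmed by this commit, so adjust them to wherever the adapter weights actually live.

```python
# Sketch only: apply the QLoRA_french_dpo LoRA on top of a LLaMA-3.1-8B model.
# BASE_ID and ADAPTER_ID are assumptions; point them at your own fine-tune and
# at the actual adapter repo/path before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Llama-3.1-8B-Instruct"     # any LLaMA-3.1-8B fine-tune, per the README
ADAPTER_ID = "Nekochu/Llama-3.1-8B-french-DPO"   # assumed location of the LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)  # attach the LoRA weights

prompt = "Explique la photosynthèse en une phrase."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note that, as the README warns, stacking this adapter on a model other than the one it was trained against may degrade quality; the exl2 and GGUF branches linked above are pre-merged quantizations and do not need this step.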