chienweichang committed
Commit ae255f0
1 parent: 1ca763a

Update README.md

Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -17,9 +17,12 @@ This repo contains GGUF format model files for [yentinglin/Llama-3-Taiwan-8B-Ins
 ## Provided files
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ---- |
-| [llama-3-taiwan-8b-instruct-128k-q5_k_m.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128K-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q5_k_m.gguf) | Q5_K_M | 5 | 5.73 GB | large, very low quality loss |
-| [llama-3-taiwan-8b-instruct-128k-q6_k.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128K-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q6_k.gguf) | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
-| [llama-3-taiwan-8b-instruct-128k-q8_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128K-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q8_0.gguf) | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss |
+| [llama-3-taiwan-8b-instruct-128k-q5_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q5_0.gguf) | Q5_0 | 5 | 5.6 GB | legacy; medium, balanced quality |
+| [llama-3-taiwan-8b-instruct-128k-q5_1.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q5_1.gguf) | Q5_1 | 5 | 6.07 GB | large, low quality loss |
+| [llama-3-taiwan-8b-instruct-128k-q5_k_s.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q5_k_s.gguf) | Q5_K_S | 5 | 5.6 GB | large, very low quality loss |
+| [llama-3-taiwan-8b-instruct-128k-q5_k_m.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q5_k_m.gguf) | Q5_K_M | 5 | 5.73 GB | large, very low quality loss |
+| [llama-3-taiwan-8b-instruct-128k-q6_k.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q6_k.gguf) | Q6_K | 6 | 6.6 GB | very large, extremely low quality loss |
+| [llama-3-taiwan-8b-instruct-128k-q8_0.gguf](https://huggingface.co/chienweichang/Llama-3-Taiwan-8B-Instruct-128k-GGUF/blob/main/llama-3-taiwan-8b-instruct-128k-q8_0.gguf) | Q8_0 | 8 | 8.54 GB | very large, extremely low quality loss |
 
 ## Original model card
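As a sanity check on the sizes in the table above, a GGUF file's footprint can be roughly estimated as parameter count times the quantization's effective bits per weight. The sketch below assumes ~8.03B parameters for a Llama-3 8B model and effective bits-per-weight figures derived from the llama.cpp block layouts; both are assumptions for illustration, not figures stated in this diff.

```python
# Rough GGUF size estimate: params * effective bits-per-weight / 8.
# Assumed parameter count for Llama-3 8B (not stated in this diff):
N_PARAMS = 8.03e9

def est_size_gb(bits_per_weight: float, n_params: float = N_PARAMS) -> float:
    """Approximate file size in decimal GB for a given effective bpw."""
    return n_params * bits_per_weight / 8 / 1e9

# Assumed effective bpw from llama.cpp block layouts, e.g. Q8_0 stores
# 32 int8 weights plus one fp16 scale per block: 34 bytes / 32 weights = 8.5 bpw.
print(round(est_size_gb(8.5), 2))  # Q8_0: close to the listed 8.54 GB
print(round(est_size_gb(5.5), 2))  # Q5_0: close to the listed 5.6 GB
```

The small remaining gap versus the listed sizes comes from unquantized tensors (e.g. embeddings at higher precision) and GGUF metadata, which this back-of-envelope formula ignores.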