---
## About this model

This is a gguf version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) quantized using an [importance matrix (iMatrix) that contains a large amount of Japanese](https://huggingface.co/dahara1/imatrix-jpn-test), enabling summarization of very long texts (over 32K tokens). We hope that it retains most of its Japanese-language capability.

It has been confirmed that at least Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K/Qwen2.5-3B-Instruct-Q8_0-f16.gguf can correctly summarize extremely long texts exceeding 32K tokens.
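
As a rough usage sketch (not part of this repo's own instructions), a long-context run might look like the following. The flags are assumptions based on common llama.cpp `llama-cli` usage: Qwen2.5's native context is 32K, and contexts beyond that are typically reached with YaRN rope scaling, so the rope-scaling flags and file paths below may need adjusting to your local build and setup.

```sh
# Sketch, assuming a local llama.cpp build with llama-cli on PATH and the
# gguf file downloaded from this repo. -c sets the context window (128K here);
# the YaRN flags extend Qwen2.5's native 32K context and follow llama.cpp's
# command-line conventions.
llama-cli \
  -m Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K/Qwen2.5-3B-Instruct-Q8_0-f16.gguf \
  -c 131072 \
  --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768 \
  -f prompt_with_long_text.txt
```

Here `prompt_with_long_text.txt` is a hypothetical prompt file containing the long document to summarize together with a summarization instruction.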