---
language:
- ja
---

## About this model
This is a GGUF version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct), quantized with an [importance matrix (iMatrix) containing a large amount of Japanese](https://huggingface.co/dahara1/imatrix-jpn-test) and extended to allow summarization of very long texts (over 32K tokens). The goal is to retain as much of the model's Japanese ability as possible.

At a minimum, Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K/Qwen2.5-3B-Instruct-Q8_0-f16.gguf has been confirmed to correctly summarize very long texts exceeding 32K tokens.

The 128K context extension follows the approach described by [unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF](https://huggingface.co/unsloth/Qwen2.5-Coder-32B-Instruct-128K-GGUF). Thank you.


## For ollama users
If you use ollama, consult the [FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md) and set the context window size parameter, for example:

```
/set parameter num_ctx 40960
```
or via the API:
```
curl http://..../api/generate -d '{
  "model": ".....",
  "prompt": "......",
  "options": {
    "num_ctx": 40960
  }
}'
```
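
If you always run this model with the same context size, ollama also lets you bake the parameter into a Modelfile (see its Modelfile documentation). A minimal sketch; the gguf path and model name below are placeholders, adjust them to your setup:
```
FROM ./Qwen2.5-3B-Instruct-Q8_0-f16.gguf
PARAMETER num_ctx 40960
```
Then create the model with `ollama create my-qwen2.5-3b-128k -f Modelfile` and run it as usual.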

If you are using another tool, likewise check its manual and remember to extend the context window size there as well.  
Note, however, that setting the context size larger than necessary slows down inference.  
In theory this model can be configured up to the maximum of 128K (131072) tokens, but that may affect both execution speed and output quality.
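
One way to pick a sensible value is to count the tokens of the text you actually want to summarize before choosing `num_ctx` / `-c`. A minimal sketch using the same Qwen tokenizer as the sample script below (the file name is just a placeholder):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

# any long text you plan to summarize; "article.txt" is a placeholder
with open("article.txt", encoding="utf-8") as f:
    text = f.read()

n_tokens = len(tokenizer.encode(text))
print(f"input tokens: {n_tokens}")
# leave some headroom for the chat template, the instruction and the generated summary
print(f"suggested context size: at least {n_tokens + 1024}")
```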


## Sample llama.cpp script

Below is a sample that downloads a Wikipedia article of about 50,000 Japanese characters (34.8K tokens) and summarizes it.


llama.cpp server command sample.
```
./llama.cpp/build/bin/Release/llama-server.exe -m ./Qwen2.5-3B-Instruct-Q8_0-f16.gguf -c 40960
```


llama.cpp client script sample.
```
import json

import requests
from bs4 import BeautifulSoup
from transformers import AutoTokenizer

# The tokenizer is only used to build the chat-template prompt;
# generation itself is done by the llama.cpp server.
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B-Instruct")

url = "https://ja.wikipedia.org/wiki/%E7%94%B7%E3%81%AE%E5%A8%98"

def get_wikipedia_text(url):
    # Fetch the page and keep only the <p> paragraphs as plain text.
    response = requests.get(url)
    if response.status_code != 200:
        raise Exception(f"Failed to fetch the article. Status code: {response.status_code}")
    soup = BeautifulSoup(response.text, 'html.parser')
    paragraphs = soup.find_all('p')
    return "\n".join(p.get_text() for p in paragraphs)
        
if __name__ == "__main__":

    html_text = get_wikipedia_text(url)
    # html_text = html_text[:40000]  # optionally truncate the article for quicker tests

    instruct = "### 指示\n\n日本語で3行で要約してください"  # "### Instruction: summarize in 3 lines in Japanese"

    # Two ways to place the instruction. Note that the second assignment below
    # overrides the first, so only the "instruct last" version is actually sent;
    # comment out whichever version you do not want to test.

    # "instruct first" version: instruction before the article
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": instruct + "\n\n" + html_text},
    ]

    # "instruct last" version: instruction after the article
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": html_text + "\n\n" + instruct},
    ]
    
    prompt = tokenizer.apply_chat_template(
            messages,
            add_generation_prompt=True,
            tokenize=False
    )
    print(prompt)

    payload = {
            "prompt": prompt,
            "n_predict": 512
    }

    url = "http://localhost:8080/completion"
    headers = {
        "Content-Type": "application/json"
    }

    response = requests.post(url, headers=headers, data=json.dumps(payload))
    if response.status_code != 200:
        raise Exception(f"Error: {response.text}")

    response_data = response.json()

    response_content = response_data.get('content', '').strip()
    print(response_content)
```
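
Processing a prompt of 30K+ tokens can take a while before the first output token appears. If you would rather see the summary as it is being generated, the llama.cpp server's `/completion` endpoint also supports streaming. A minimal sketch, offered as an assumption rather than a reference implementation; the exact event format can vary between llama.cpp builds, so adjust the parsing if needed:
```
import json
import requests

prompt = "..."  # the chat-template prompt built exactly as in the script above

payload = {
    "prompt": prompt,
    "n_predict": 512,
    "stream": True,   # ask the server to send tokens as they are generated
}

with requests.post("http://localhost:8080/completion", json=payload, stream=True) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        # each event line looks like: data: {"content": "...", "stop": false, ...}
        chunk = json.loads(line[len(b"data: "):])
        print(chunk.get("content", ""), end="", flush=True)
        if chunk.get("stop"):
            break
    print()
```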

### Output samples

#### This 128K model
128K instruct first version  
![128K instruct first version](128k_full_instruct_first.png)

128K instruct last version  
![128K instruct last version](128k_full_instruct_last.png)

#### Standard 32K model
32K instruct first version  
![32K instruct first version](32k_full_instruct_first.png)

32K instruct last version  
![32K instruct last version](32k_full_instruct_last.png)


Notice that in the 32K instruct-first version the summary instruction falls outside the context window, so the instruction is ignored.

In the 32K instruct-last version the beginning of the article falls outside the context window, so the summary loses the terminology-explanation angle of the opening section.