---
license: apache-2.0
datasets:
- amphora/QwQ-LongCoT-130K
language:
- en
base_model:
- prithivMLmods/QwQ-LCoT-7B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- QwQ
- 4B
- Adapter
- safetensors
- 4bit
- Qwen2.5
- text-generation-inference
---
### **QwQ-4B-Instruct-Model-Files**

The **QwQ-4B-Instruct** is a lightweight, efficient language model fine-tuned for instruction-following and reasoning tasks. It is derived from **prithivMLmods/QwQ-LCoT-7B-Instruct** (a Qwen2.5-7B fine-tune) and quantized to reduce memory consumption and speed up inference, while retaining strong capabilities on complex tasks.
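
A minimal usage sketch with the Hugging Face `transformers` pipeline is shown below. The repository id is an assumption for illustration; replace it with the actual Hub id of this model if it differs.

```python
from transformers import pipeline

# NOTE: the repository id below is assumed for illustration; replace it with
# the actual Hub id of this model if it differs.
generator = pipeline(
    "text-generation",
    model="prithivMLmods/QwQ-4B-Instruct",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "List three practical uses of binary search."},
]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])  # assistant reply
```

Recent `transformers` releases apply the model's chat template automatically when the pipeline is given a list of chat messages; with a plain string prompt, the text is generated without chat formatting.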

| **File Name**                   | **Size**        | **Description**                                    | **Upload Status** |
|----------------------------------|-----------------|---------------------------------------------------|-------------------|
| `.gitattributes`                 | 1.57 kB         | Tracks files stored with Git LFS.                 | Uploaded          |
| `README.md`                      | 271 Bytes       | Basic project documentation.                      | Updated           |
| `added_tokens.json`              | 657 Bytes       | Specifies additional tokens for the tokenizer.    | Uploaded          |
| `config.json`                    | 1.26 kB         | Detailed model configuration file.                | Uploaded          |
| `generation_config.json`         | 281 Bytes       | Configuration for text generation settings.        | Uploaded          |
| `merges.txt`                     | 1.82 MB         | Byte pair encoding (BPE) merge rules for tokenizer.| Uploaded          |
| `model-00001-of-00002.safetensors`| 4.46 GB         | Part 1 of the model weights in safetensors format.| Uploaded (LFS)    |
| `model-00002-of-00002.safetensors`| 1.09 GB         | Part 2 of the model weights in safetensors format.| Uploaded (LFS)    |
| `model.safetensors.index.json`   | 124 kB          | Index file for safetensors model sharding.         | Uploaded          |
| `special_tokens_map.json`        | 644 Bytes       | Mapping of special tokens (e.g., `<PAD>`, `<EOS>`). | Uploaded        |
| `tokenizer.json`                 | 11.4 MB         | Complete tokenizer configuration.                 | Uploaded (LFS)    |
| `tokenizer_config.json`          | 7.73 kB         | Settings for the tokenizer integration.           | Uploaded          |
| `vocab.json`                     | 2.78 MB         | Vocabulary file containing token-to-id mappings.  | Uploaded          |

### **Key Features:**

1. **Model Size:**
   - **4.46B parameters.**

2. **Precision Support:**
   - Available in multiple tensor types (a loading sketch follows this list):
     - **FP16**
     - **F32**
     - **U8 (Quantized)**

3. **Model Sharding:**
   - The model weights are stored in two parts for efficient download:
     - `model-00001-of-00002.safetensors` (4.46 GB)
     - `model-00002-of-00002.safetensors` (1.09 GB)
   - Indexed with `model.safetensors.index.json`.

4. **Tokenizer:**
   - Uses Byte-Pair Encoding (BPE).
   - Includes:
     - `vocab.json` (2.78 MB)
     - `merges.txt` (1.82 MB)
     - `tokenizer.json` (11.4 MB, pre-trained configuration).
   - Special tokens mapped in `special_tokens_map.json` (e.g., `<pad>`, `<eos>`).

5. **Configuration Files:**
   - `config.json`: Defines the architecture, hyperparameters, and settings.
   - `generation_config.json`: Specifies text generation behavior (e.g., max length, temperature).
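
The precision options and generation settings above can be combined at load time. The sketch below is illustrative only and assumes the same hypothetical repository id as earlier; 4-bit loading additionally requires the `bitsandbytes` package and a CUDA GPU.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "prithivMLmods/QwQ-4B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Option A: half precision (FP16)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Option B: 4-bit quantization via bitsandbytes (uncomment to use instead)
# model = AutoModelForCausalLM.from_pretrained(
#     model_id,
#     quantization_config=BitsAndBytesConfig(load_in_4bit=True),
#     device_map="auto",
# )

# generation_config.json is picked up automatically; individual settings
# (e.g., temperature, max length) can still be overridden per call.
inputs = tokenizer("Summarize the idea of gradient descent.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, temperature=0.7, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```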

---

### **Training Dataset:**
- **Dataset Name:** [amphora/QwQ-LongCoT-130K](https://huggingface.co/datasets/amphora/QwQ-LongCoT-130K)
- **Size:** 133k examples.
- **Focus:** Chain-of-Thought reasoning for detailed and logical outputs.
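
A quick way to inspect the dataset is the `datasets` library. The split name below is an assumption; check the dataset card for the actual splits and column layout.

```python
from datasets import load_dataset

# "train" is assumed to be the available split; see the dataset card for details.
dataset = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
print(dataset)     # row count and column names
print(dataset[0])  # one long chain-of-thought example
```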

---

### **Use Cases:**
1. **Instruction-Following:**
   - Excels in handling concise and multi-step instructions.
   
2. **Reasoning:**
   - Well-suited for tasks requiring logical deductions and detailed explanations (see the reasoning prompt sketch after this list).

3. **Text Generation:**
   - Generates coherent and contextually aware responses across various domains.

4. **Resource-Constrained Applications:**
   - Suited to deployments with limited compute and memory, thanks to its smaller size and quantization.
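
As referenced in the reasoning use case above, a multi-step prompt can be built with the tokenizer's chat template, the standard approach for Qwen2.5-family instruct models. The repository id is again assumed for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "prithivMLmods/QwQ-4B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a careful assistant that reasons step by step."},
    {"role": "user", "content": (
        "A train travels 120 km in 1.5 hours and then 80 km in 1 hour. "
        "What is its average speed over the whole trip? Show your reasoning."
    )},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```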

---