---
license: creativeml-openrail-m
datasets:
- amphora/QwQ-LongCoT-130K
language:
- en
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- text-generation-inference
- long-CoT
- safetensors
- 3B
- Instruct
- QwQ
- Qwen2.5
---
### **QwQ-LCoT-3B-Instruct Model Card**

The **QwQ-LCoT-3B-Instruct** model is a lightweight, instruction-tuned language model designed for complex reasoning and explanation tasks. It is fine-tuned from the **Qwen2.5-3B-Instruct** base model on the **QwQ-LongCoT-130K** dataset, with a focus on **long chain-of-thought (LCoT)** reasoning for enhanced logical comprehension and detailed output generation.

| **File Name**                         | **Size**       | **Description**                                  | **Upload Status**  |
|----------------------------------------|----------------|-------------------------------------------------|--------------------|
| `.gitattributes`                       | 1.57 kB        | Specifies LFS tracking for large files.         | Uploaded           |
| `README.md`                            | 267 Bytes      | Basic project information file.                 | Updated            |
| `added_tokens.json`                    | 657 Bytes      | Custom tokens added to the tokenizer.           | Uploaded           |
| `config.json`                          | 859 Bytes      | Configuration file for the model.               | Uploaded           |
| `generation_config.json`               | 281 Bytes      | Configuration file for text generation settings.| Uploaded           |
| `merges.txt`                           | 1.82 MB        | Contains the byte-pair encoding (BPE) merges.   | Uploaded           |
| `pytorch_model-00001-of-00002.bin`     | 4.96 GB        | First shard of the model weights in PyTorch format. | Uploaded (LFS)     |
| `pytorch_model-00002-of-00002.bin`     | 1.21 GB        | Second shard of the model weights in PyTorch format. | Uploaded (LFS)     |
| `pytorch_model.bin.index.json`         | 36 kB          | Index mapping for sharded model weights.        | Uploaded           |
| `special_tokens_map.json`              | 644 Bytes      | Maps special tokens to their roles.             | Uploaded           |
| `tokenizer.json`                       | 11.4 MB        | Serialized tokenizer data.                      | Uploaded (LFS)     |
| `tokenizer_config.json`                | 7.73 kB        | Tokenizer configuration settings.               | Uploaded           |
| `vocab.json`                           | 2.78 MB        | Vocabulary file for the tokenizer.              | Uploaded           |

### **Key Features:**

1. **Long Chain-of-Thought Reasoning:**  
   - Specifically designed to generate comprehensive, step-by-step explanations for complex queries.

2. **Lightweight and Efficient:**  
   - With only 3 billion parameters, it runs on systems with limited computational resources while aiming to preserve strong step-by-step reasoning quality.

3. **Instruction Optimization:**  
   - Fine-tuned to follow prompts and provide concise, actionable, and structured responses.

---

### **Training Details:**

- **Base Model:** [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct)  
- **Dataset:** [amphora/QwQ-LongCoT-130K](https://huggingface.co/datasets/amphora/QwQ-LongCoT-130K)  
  - Approximately 133,000 annotated samples focused on logical tasks and structured thinking; see the loading sketch below.
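
The fine-tuning data can be inspected directly with the Hugging Face `datasets` library (a minimal sketch, assuming the dataset's default `train` split):

```python
from datasets import load_dataset

# Load the long chain-of-thought fine-tuning dataset from the Hugging Face Hub.
dataset = load_dataset("amphora/QwQ-LongCoT-130K", split="train")
print(dataset)      # row count and column names
print(dataset[0])   # a single annotated sample
```
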
---

### **Capabilities:**

1. **Text Generation:**  
   - Provides detailed, structured, and logical text outputs tailored to user prompts.

2. **Reasoning Tasks:**  
   - Solves step-by-step problems in math, logic, and science.

3. **Educational Assistance:**  
   - Generates coherent explanations for academic and research purposes.

4. **Dialogue and Summarization:**  
   - Handles conversational queries and summarizes long documents effectively.

---

### **Usage Instructions:**

1. **Setup:**
   Install a recent version of the Hugging Face Transformers library (and PyTorch); the model files listed above are downloaded automatically on first load.

2. **Loading the Model:**
   ```python
   from transformers import AutoModelForCausalLM, AutoTokenizer

   model_name = "prithivMLmods/QwQ-LCoT-3B-Instruct"

   # Download the tokenizer and model weights from the Hugging Face Hub.
   tokenizer = AutoTokenizer.from_pretrained(model_name)
   model = AutoModelForCausalLM.from_pretrained(model_name)
   ```
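
   If a GPU is available, the same weights can be loaded in half precision and moved to the device (a minimal sketch, assuming `torch` with CUDA support and sufficient GPU memory):

   ```python
   import torch

   # Optional: load in float16 on a CUDA device to reduce memory use.
   model = AutoModelForCausalLM.from_pretrained(
       model_name,
       torch_dtype=torch.float16,
   ).to("cuda")
   ```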

3. **Generate Long-Chain Reasoning Outputs:**
   ```python
   input_text = "Explain the process of photosynthesis step-by-step."
   inputs = tokenizer(input_text, return_tensors="pt")
   # do_sample=True is required for temperature to take effect;
   # max_new_tokens bounds the generated continuation rather than the total length.
   outputs = model.generate(**inputs, max_new_tokens=300, do_sample=True, temperature=0.5)
   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
   ```
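
   Since this is an instruct model, conversational prompts can also be wrapped in the tokenizer's chat template before generation (a minimal sketch using the standard Transformers `apply_chat_template` API; the message content is illustrative):

   ```python
   # Format a single user turn with the model's chat template, then generate.
   messages = [
       {"role": "user", "content": "Explain the process of photosynthesis step-by-step."}
   ]
   chat_inputs = tokenizer.apply_chat_template(
       messages,
       add_generation_prompt=True,
       return_tensors="pt",
   )
   outputs = model.generate(chat_inputs, max_new_tokens=300)
   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
   ```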

4. **Customize Output Generation:**  
   Modify the `generation_config.json` file for different scenarios (the same parameters can also be passed at call time, as sketched below):
   - **`temperature`**: Controls randomness (lower = more deterministic, higher = more creative).  
   - **`max_length`**: Caps the total sequence length (prompt plus generated tokens).  
   - **`top_p`**: Nucleus-sampling threshold that controls output diversity.
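
   These settings can also be overridden for a single call without editing the file (a minimal sketch using the Transformers `GenerationConfig` API; the parameter values are illustrative):

   ```python
   from transformers import GenerationConfig

   # Override generation settings at call time instead of editing generation_config.json.
   gen_config = GenerationConfig(
       max_new_tokens=512,   # bound on newly generated tokens
       do_sample=True,       # sampling must be enabled for temperature/top_p to take effect
       temperature=0.7,      # lower = more deterministic, higher = more creative
       top_p=0.9,            # nucleus-sampling threshold
   )
   outputs = model.generate(**inputs, generation_config=gen_config)
   print(tokenizer.decode(outputs[0], skip_special_tokens=True))
   ```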

---