---
language:
- en
- zh
tags:
- medical
---
<div align="center">
<h1>
  Baichuan-M1-14B-Base
</h1>
</div>

<p align="center">
🤗 <a href="https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Base" target="_blank">Baichuan-M1-14B-Base</a> • 🤗 <a href="https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Instruct" target="_blank">Baichuan-M1-14B-Instruct</a> • 💬 <a href="https://y41.8if.cn/JQCj6n" target="_blank">WeChat</a>
</p>


---

# 📖 Table of Contents

- [๐Ÿ Model Introduction](#intro)
- [๐Ÿ”ฌ Data Collection and Processing](#data)
- [๐Ÿง  New Model Architecture](#structure)
- [โš™๏ธ Training Methodology](#training)
- [๐Ÿ“Š Benchmark Results](#benchmark)
- [๐Ÿš€ Quick Start](#quick)
- [๐Ÿ“œ License and Statement](#declare)

---
<a name="intro"></a>
# ๐Ÿ Model Introduction

**Baichuan-M1-14B** is the industry's first open-source large language model developed from scratch by Baichuan Intelligence specifically for medical scenarios. While retaining strong general capabilities, it delivers powerful performance in the medical field: it matches similarly sized models on most general benchmarks while outperforming models five times larger on medical tasks. Below are the core features of the model:

- Trained from scratch on **20 trillion tokens** of high-quality medical and general data.
- Specialized modeling for **20+ medical departments** with fine-grained medical expertise.
- Introduces **innovative model architecture**, significantly improving context understanding and long-sequence task performance.
- Provides **[🤗 Base Model](https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Base)** and **[🤗 Instruct Model](https://huggingface.co/baichuan-inc/Baichuan-M1-14B-Instruct)**.


---
<a name="data"></a>
# 🔬 Data Collection and Processing

## Medical Data Collection

We conducted meticulous data collection and synthesis for the medical field, including:

- **Tens of millions of entries of professional medical data**: Chinese and English research papers, medical cases, medical textbooks, knowledge bases, and more.
- **Hundreds of millions of medical Q&A and clinical records**: Covering complex medical reasoning and real-world clinical cases.
- **Comprehensive data classification and evaluation**: Categorized by medical department, content, and value to ensure a balanced data distribution and to retain only genuinely valuable medical data.

## Data Synthesis and Optimization

- **Synthetic data design**: Combining knowledge graphs, cases, and textbooks to generate diverse, high-quality medical reasoning data.
- **Self-reflection mechanism and reward model**: Continuously improving the quality of synthetic data, ultimately generating **nearly a trillion tokens** of reasoning data, covering long-tail knowledge and complex scenarios.


## General Data Collection

- **20T-token multilingual general corpus**: Comprising 14T tokens of English data, 4T of Chinese data, and 2T covering 30 mainstream languages.
- **Deduplication and upsampling strategy**: Upsampling high-quality data to significantly enhance model performance.
- **27 global knowledge categories**: Optimizing data ratios based on small-model experiments to balance general and domain-specific capabilities.

---
<a name="structure"></a>
# 🧠 New Model Architecture

## Short Convolution Attention Mechanism

- Introducing lightweight short convolutions into the Key and Value computation significantly reduces the standard Transformer's reliance on induction heads for in-context learning. Traditional Transformers depend on induction heads to capture repetitive patterns and contextual dependencies in sequences, which requires a certain model width and depth. The short convolution decouples the Key and Value sequences along the time dimension, enhancing in-context learning. Extensive experiments, from toy models to models with over ten billion parameters, show that the short convolution attention mechanism excels at language modeling tasks, especially those that depend heavily on contextual information. A minimal sketch follows below.
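
To make the idea concrete, here is a minimal PyTorch sketch of a depthwise causal short convolution applied to the Key/Value streams. The module name, kernel size, and placement are illustrative assumptions, not Baichuan-M1's actual configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShortConvKV(nn.Module):
    """Hypothetical sketch: depthwise causal short convolution for Key/Value.

    Kernel size and placement are assumptions for illustration only.
    """

    def __init__(self, hidden_size: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        # Depthwise (groups == channels) keeps the convolution lightweight.
        self.conv = nn.Conv1d(
            hidden_size, hidden_size, kernel_size,
            groups=hidden_size, bias=False,
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden). Left-pad so each position mixes only
        # itself and a few preceding tokens (causal).
        x = x.transpose(1, 2)                    # (batch, hidden, seq_len)
        x = F.pad(x, (self.kernel_size - 1, 0))  # causal left padding
        return self.conv(x).transpose(1, 2)      # (batch, seq_len, hidden)

# Inside an attention block, one might apply it as:
#   k = short_conv_k(k_proj(hidden_states))
#   v = short_conv_v(v_proj(hidden_states))
```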


## Sliding Window Attention Mechanism

- Adopting a sliding-window attention mechanism in some layers to reduce KV cache memory usage.
- Balancing computational efficiency and performance, which makes it especially suitable for long-sequence tasks; a mask sketch follows below.
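
A sliding-window mask can be expressed in a few lines. This sketch builds a boolean mask of allowed (query, key) pairs; the window size and which layers use it are illustrative assumptions:

```python
import torch

def sliding_window_mask(seq_len: int, window: int) -> torch.Tensor:
    """Boolean mask: query i may attend to key j iff i - window < j <= i."""
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, column vector
    j = torch.arange(seq_len).unsqueeze(0)  # key positions, row vector
    return (j <= i) & (j > i - window)      # causal and within the window

# Each query attends to at most `window` keys, so a sliding-window layer
# only needs to cache the most recent `window` key/value entries.
mask = sliding_window_mask(seq_len=8, window=4)
```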

## Optimizing Position Encoding Oscillation

- By increasing the dimensions of some attention heads, RoPE curve oscillation is reduced (see the frequency sketch below).
- This yields more stable performance on long-sequence tasks while preserving the model's ability to capture diverse features.
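
For reference, the standard RoPE frequency computation below shows how the head dimension controls how densely the rotation frequencies are sampled; the link to reduced oscillation is the intuition described above, and the dimensions shown are purely illustrative:

```python
import torch

def rope_inv_freq(head_dim: int, base: float = 10000.0) -> torch.Tensor:
    # Standard RoPE: one rotation frequency per pair of head dimensions.
    return 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))

# A larger head dimension samples the frequency spectrum more densely,
# which smooths the aggregate positional-similarity curve over distance.
print(rope_inv_freq(64).shape)   # torch.Size([32])
print(rope_inv_freq(128).shape)  # torch.Size([64])
```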

## High Peak Learning Rate Strategy

- Using a **WSD (Warmup-Stable-Decay) learning rate schedule** with a high peak learning rate to promote model generalization (a sketch follows below).
- This yields a significant improvement in benchmark task performance.
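
A minimal sketch of a WSD-style schedule, with warmup and decay fractions chosen purely for illustration:

```python
def wsd_lr(step: int, total_steps: int, peak_lr: float,
           warmup_frac: float = 0.01, decay_frac: float = 0.1) -> float:
    """Warmup-Stable-Decay schedule; the phase fractions are assumptions."""
    warmup_steps = max(int(total_steps * warmup_frac), 1)
    decay_start = int(total_steps * (1.0 - decay_frac))
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warmup
    if step < decay_start:
        return peak_lr                        # long stable phase at the peak
    # Linear decay to zero over the final phase.
    return peak_lr * (total_steps - step) / max(total_steps - decay_start, 1)
```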

## Adaptive Gradient Update

- **Dynamic gradient clipping**: Skipping the update when gradients are abnormally large, reducing the instability caused by special samples or steep regions of the loss landscape; a sketch follows below.
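
A sketch of how such a rule might look in PyTorch; `max_norm` and `skip_threshold` are hypothetical knobs, not Baichuan-M1's published values:

```python
import torch

def clip_or_skip(model, optimizer, max_norm: float = 1.0,
                 skip_threshold: float = 10.0) -> bool:
    """Clip moderately large gradients; skip the step entirely for outliers."""
    # clip_grad_norm_ returns the total norm measured before clipping.
    total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    if total_norm > skip_threshold:
        optimizer.zero_grad(set_to_none=True)  # drop this pathological update
        return False                           # caller can log the skipped step
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```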

---
<a name="training"></a>
# โš™๏ธ Training Methodology

We adopted an innovative **multi-stage curriculum learning and alignment optimization** approach, systematically enhancing model capabilities through the following two components:

## 1. Multi-Stage Curriculum Learning

Training is divided into three stages, progressively optimizing the model's general and medical domain capabilities:

1. **General Knowledge Enhancement Stage**: Focused on general language modeling to build basic language ability and common-sense knowledge.
2. **Medical Basic Knowledge Enhancement Stage**: Introducing high-quality medical data to strengthen reasoning, mathematics, and medical knowledge.
3. **Medical Advanced Knowledge Enhancement Stage**: Further improving data quality, focusing on complex medical reasoning, disease diagnosis, and long-tail knowledge.

## 2. Alignment Optimization

Enhancing model generation quality, logical reasoning, and user preference alignment through reinforcement learning and pairwise data optimization:

1. **Pairwise Data**: Covering multi-turn dialogues, instruction following, math and code, and reasoning tasks, sourced from human annotations and multi-model generation.
2. **Optimization Process**:
   - **ELO**: Optimizing diverse, high-quality chain-of-thought generation based on maximum likelihood.
   - **TDPO**: Using pairwise data to optimize the generation model for better alignment with user preferences (a generic pairwise-loss sketch follows below).
   - **PPO**: Further enhancing generation logic and task performance through policy optimization.
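
For intuition, a generic DPO-style pairwise preference loss is sketched below; this is a common formulation, not Baichuan-M1's exact TDPO (token-level) objective:

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(logp_chosen: torch.Tensor,
                             logp_rejected: torch.Tensor,
                             ref_logp_chosen: torch.Tensor,
                             ref_logp_rejected: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """Generic DPO-style loss over (chosen, rejected) response pairs.

    Inputs are sequence log-probabilities under the policy and under a
    frozen reference model; `beta` scales the implicit reward margin.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # Maximize the probability that the chosen response is preferred.
    return -F.logsigmoid(margin).mean()
```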
     

This combination of multi-stage training and alignment optimization enables the model to achieve exceptional performance in both general and medical capabilities.

---
<a name="benchmark"></a>
# 📊 Benchmark Results

Our evaluation covers mainstream benchmarks and shows strong results against both open-source and closed-source models, demonstrating outstanding medical-scenario capabilities while maintaining strong general performance.

<table style="border: 1px solid #000; border-collapse: collapse; width: 100%; text-align: center;">
    <thead>
        <tr>
            <th>Category</th>
            <th>Benchmark</th>
            <th style="font-size:15px;">Baichuan-M1-14B-Instruct</th>
            <th style="font-size:15px;">Qwen2.5-14B-Instruct</th>
            <th style="font-size:15px;">Qwen2.5-72B-Instruct</th>
            <th style="font-size:15px;">claude-3.5-sonnet-20241022</th>
            <th style="font-size:15px;">gpt-4o</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td colspan="2" style="text-align: center;">Average Score</td>
            <td>72.23</td>
            <td>65.39</td>
            <td>70.51</td>
            <td>74.85</td>
            <td>75.00</td>
        </tr>
        <tr>
            <td rowspan="7" style="vertical-align: middle;">Clinical Practice</td>
            <td style="text-align: left;">cmbclin</td>
            <td>77.40</td>
            <td>71.51</td>
            <td>75.36</td>
            <td>78.37</td>
            <td>75.36</td>
        </tr>
        <tr>
            <td style="text-align: left;">clinicalbench_diag</td>
            <td>70.90</td>
            <td>68.85</td>
            <td>72.23</td>
            <td>75.00</td>
            <td>73.05</td>
        </tr>
        <tr>
            <td style="text-align: left;">clinicalbench_hos</td>
            <td>70.05</td>
            <td>68.83</td>
            <td>70.53</td>
            <td>65.58</td>
            <td>69.38</td>
        </tr>
        <tr>
            <td style="text-align: left;">clinicalbench_treat</td>
            <td>56.38</td>
            <td>55.03</td>
            <td>57.30</td>
            <td>64.03</td>
            <td>59.35</td>
        </tr>
        <tr>
            <td style="text-align: left;">rarearena_rdc</td>
            <td>81.80</td>
            <td>66.40</td>
            <td>76.20</td>
            <td>89.60</td>
            <td>88.40</td>
        </tr>
        <tr>
            <td style="text-align: left;">rarearena_rds</td>
            <td>54.00</td>
            <td>42.60</td>
            <td>49.80</td>
            <td>59.80</td>
            <td>57.20</td>
        </tr>
        <tr>
            <td style="text-align: left;">rarebench</td>
            <td>59.60</td>
            <td>52.80</td>
            <td>60.60</td>
            <td>65.30</td>
            <td>62.80</td>
        </tr>
        <tr>
            <td rowspan="10" style="vertical-align: middle;">Exams</td>
            <td style="text-align: left;">cmexam</td>
            <td>80.10</td>
            <td>77.70</td>
            <td>82.70</td>
            <td>77.50</td>
            <td>78.00</td>
        </tr>
        <tr>
            <td style="text-align: left;">Pediatric Qualification Exam</td>
            <td>78.48</td>
            <td>74.68</td>
            <td>84.81</td>
            <td>76.58</td>
            <td>78.48</td>
        </tr>
        <tr>
            <td style="text-align: left;">Internal Medicine Qualification Exam</td>
            <td>83.42</td>
            <td>86.10</td>
            <td>87.17</td>
            <td>87.70</td>
            <td>83.42</td>
        </tr>
        <tr>
            <td style="text-align: left;">General Practice Qualification Exam</td>
            <td>87.07</td>
            <td>88.44</td>
            <td>88.44</td>
            <td>81.63</td>
            <td>84.35</td>
        </tr>
        <tr>
            <td style="text-align: left;">USMLE</td>
            <td>78.00</td>
            <td>67.20</td>
            <td>76.70</td>
            <td>85.90</td>
            <td>87.10</td>
        </tr>
        <tr>
            <td style="text-align: left;">medbullets</td>
            <td>66.88</td>
            <td>54.22</td>
            <td>64.29</td>
            <td>72.40</td>
            <td>75.97</td>
        </tr>
        <tr>
            <td style="text-align: left;">mediq</td>
            <td>83.40</td>
            <td>66.80</td>
            <td>79.90</td>
            <td>88.80</td>
            <td>90.20</td>
        </tr>
        <tr>
            <td style="text-align: left;">nejmqa</td>
            <td>49.75</td>
            <td>45.69</td>
            <td>50.76</td>
            <td>69.54</td>
            <td>54.31</td>
        </tr>
        <tr>
            <td style="text-align: left;">pubmedqa</td>
            <td>75.20</td>
            <td>76.40</td>
            <td>75.60</td>
            <td>77.00</td>
            <td>77.60</td>
        </tr>
        <tr>
            <td style="text-align: left;">redisqa</td>
            <td>74.50</td>
            <td>69.70</td>
            <td>75.00</td>
            <td>83.20</td>
            <td>82.80</td>
        </tr>
        <tr>
            <td rowspan="5" style="vertical-align: middle;">Basic Capabilities</td>
            <td style="text-align: left;">mednli_dis</td>
            <td>80.40</td>
            <td>68.90</td>
            <td>74.90</td>
            <td>58.30</td>
            <td>79.80</td>
        </tr>
        <tr>
            <td style="text-align: left;">medcalc</td>
            <td>56.00</td>
            <td>31.40</td>
            <td>37.90</td>
            <td>52.60</td>
            <td>49.00</td>
        </tr>
        <tr>
            <td style="text-align: left;">MMLU-anatomy</td>
            <td>80.00</td>
            <td>67.41</td>
            <td>71.11</td>
            <td>86.67</td>
            <td>91.11</td>
        </tr>
        <tr>
            <td style="text-align: left;">MMLU-virology</td>
            <td>54.82</td>
            <td>56.02</td>
            <td>53.01</td>
            <td>54.22</td>
            <td>57.23</td>
        </tr>
        <tr>
            <td style="text-align: left;">MMLU-genetics</td>
            <td>91.00</td>
            <td>82.00</td>
            <td>87.00</td>
            <td>97.00</td>
            <td>95.00</td>
        </tr>
    </tbody>
</table>


---
<a name="quick"></a>
# 🚀 Quick Start

### 🤗 Hugging Face Transformers

We recommend using the latest version of the Transformers library (at least version 4.47.0). The following code snippet demonstrates how to load and run the **Baichuan-M1-14B-Base** model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# 1. Load the pre-trained model and tokenizer
model_name = "baichuan-inc/Baichuan-M1-14B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).cuda()

# 2. Tokenize the prompt and move it to the model's device
input_text = "I have recently recovered from my cold."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

# 3. Generate a continuation of the prompt
outputs = model.generate(
    inputs["input_ids"],
    max_length=100,
)

# 4. Decode and print the result
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Text:")
print(generated_text)
```

---
<a name="declare"></a>
# 📜 License and Statement
The use of the model must comply with the [Baichuan-M1-14B Model Community License Agreement](https://github.com/baichuan-inc/Baichuan-M1-14B/blob/main/Baichuan-M1-14B模型社区许可协议.pdf).

The Baichuan development team has not developed any commercial applications based on this model. All users must comply with applicable laws and regulations and must not use the model for purposes that endanger national security or for any other illegal activity.