---
language:
- zh
pipeline_tag: text-generation
license: apache-2.0
task_categories:
- text-generation
size_categories:
- 10B
---

OpenCSG

[OpenCSG Community] [👾github] [wechat] [Twitter]

# Chinese SmolTalk (smoltalk-chinese)

[📖 Technical Report](https://arxiv.org/abs/2501.08197)

**smoltalk-chinese** is a Chinese fine-tuning dataset constructed with reference to the SmolTalk dataset. It aims to provide high-quality synthetic data support for training large language models (LLMs). The dataset consists entirely of synthetic data, comprising over 700,000 entries, and is specifically designed to enhance the performance of Chinese LLMs across various tasks, improving their versatility and adaptability.

## Dataset Composition

The **smoltalk-chinese** dataset is composed of multiple sections, covering a wide range of task types to ensure strong model performance across different application scenarios.

#### **1. Magpie-Ultra Reference Tasks**

Following the task taxonomy of magpie-ultra, three-round dialogue data was synthesized with Magpie for tasks including:

- **Information-seeking**: Provides accurate and concise information on a wide range of topics, assisting users in finding specific facts, concept explanations, or detailed information.
- **Reasoning**: Focuses on logical thinking and solving complex problems, helping users organize complex thoughts, analyze situations, and draw conclusions.
- **Planning**: Assists users in formulating effective plans and strategies, organizing thoughts, setting goals, and creating feasible solutions for tasks or activities.
- **Editing**: Improves written content by offering suggestions for grammar, style, clarity, and overall structure, aiding users in refining their writing.
- **Coding**: Assists users in writing, reviewing, and debugging code in various programming languages, offering clear explanations and best practices.
- **Math**: Addresses questions across a broad range of mathematical disciplines, from foundational concepts to advanced topics, providing clear and concise explanations and solutions.
- **Role-playing**: Engages in various role-playing scenarios, adopting different roles based on user requests to create immersive and interactive user experiences.
- **Data-analysis**: Helps users understand and extract useful information from datasets, providing insights into data trends and performing analytical tasks.
- **Creative-writing**: Supports creative writing tasks, assisting users in crafting compelling stories, poetry, articles, and other creative texts.
- **Advice-seeking**: Offers thoughtful advice and guidance, helping users address personal, professional, or life challenges.
- **Brainstorming**: Generates ideas and fosters creative thinking, assisting users in exploring possibilities and proposing innovative concepts.

#### **2. Additional Tasks Referenced from SmolTalk**

Using Magpie, one-round dialogue tasks were synthesized for:

- **Format-constrain**: Responds strictly according to the format specified by the user, adhering to all formatting requirements.
- **Rewrite**: Rewrites text as per user requirements, making it more concise, more focused, or changing the tone; similar to editing.
- **Summary**: Summarizes text based on user instructions, meeting specific summarization requirements.
- **Safe**: Identifies illegal content in user instructions and reasonably refuses to respond or offers appropriate advice.
- **Translate**: Translates between English and Chinese as per user requests, fulfilling specific translation requirements.
- **Doc**: Answers user questions based on reference text, striving to use information from the reference material without introducing external knowledge.
#### **3. Simulated Daily Conversations**

Five-round dialogue data was generated to simulate conversational styles typical of everyday interactions, enhancing the model's performance in realistic communication scenarios.

#### **4. Math Problems from the Math23K Chinese Version**

Math problem data, with detailed reasoning steps in the answers, was generated using **deepseek-v2.5**.

## Dataset Generation Methodology

The construction of the **smoltalk-chinese** dataset adheres to strict standards, ensuring data quality and diversity. Hedged sketches of each stage are given at the end of this section.

#### **Data Generation**

- Magpie was used to synthesize the raw data.
- Generation models included **deepseek-v2.5** and **qwen2.5-72b-instruct**, driven through the **Distilabel** library to ensure diversity and richness in the generated content.

#### **Data Filtering**

- The **qwen2-7b-instruct** model scored the clarity and fluency of the first instruction in each dialogue on a scale of 0–5. Only data with a score of 2 or above was retained to ensure quality.

#### **Deduplication**

- The **gte-large-zh** model encoded the first instruction of each conversation. Deduplication was performed based on embedding similarity (threshold set at 0.8), ensuring the diversity of the data.
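As a concrete illustration of the generation stage, below is a minimal, hedged sketch of Magpie-style synthesis with Distilabel. The exact models, prompts, and sampling parameters used by the authors are not published; the model id, system prompt, and output repo below are illustrative assumptions.

```python
# Hedged sketch: Magpie-style multi-turn synthesis via distilabel's MagpieGenerator.
# Everything below (model id, sampling parameters, system prompt) is illustrative.
from distilabel.llms import vLLM
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import MagpieGenerator

with Pipeline(name="smoltalk-chinese-magpie") as pipeline:
    # Magpie feeds the model only its chat-template prefix, so the model first
    # "writes" a user instruction and then answers it; n_turns=3 matches the
    # three-round dialogues described above.
    MagpieGenerator(
        llm=vLLM(
            model="Qwen/Qwen2.5-72B-Instruct",
            magpie_pre_query_template="qwen2",  # chat-template prefix preset for Qwen2 models
            generation_kwargs={"temperature": 0.8, "max_new_tokens": 1024},
        ),
        n_turns=3,
        num_rows=10_000,
        # A task-specific system prompt (e.g. for information-seeking) would go here.
        system_prompt="你是一个乐于助人的中文助手，请用中文提问并回答。",
    )

if __name__ == "__main__":
    distiset = pipeline.run()
    distiset.push_to_hub("my-org/smoltalk-chinese-raw")  # hypothetical repo id
```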
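The filtering stage can be sketched as follows, assuming each entry stores its dialogue as a `messages` list of role/content dicts; the scoring prompt itself is an assumption, since the original prompt is not published.

```python
# Hedged sketch: rate the first instruction of each dialogue 0-5 with
# qwen2-7b-instruct and keep only entries scoring 2 or above.
import re
from transformers import pipeline

scorer = pipeline("text-generation", model="Qwen/Qwen2-7B-Instruct", device_map="auto")

# Assumed scoring prompt: "rate the clarity and fluency of this instruction
# on a 0-5 scale and output a single integer".
PROMPT = "请对下面这条指令的清晰度和流畅度打分，范围0-5，只输出一个整数。\n指令：{instruction}"

def score_instruction(instruction: str) -> int:
    messages = [{"role": "user", "content": PROMPT.format(instruction=instruction)}]
    out = scorer(messages, max_new_tokens=8, do_sample=False)
    reply = out[0]["generated_text"][-1]["content"]  # the assistant's reply
    match = re.search(r"[0-5]", reply)
    return int(match.group()) if match else 0

def keep(example: dict) -> bool:
    # Assumes the dialogue is stored under a "messages" list.
    return score_instruction(example["messages"][0]["content"]) >= 2
```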
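Deduplication can be sketched with sentence-transformers; the brute-force scan below is for clarity only, and the authors' actual implementation (e.g. an approximate-nearest-neighbor index) is not published.

```python
# Hedged sketch: embed the first instruction of every conversation with
# gte-large-zh and drop any entry whose cosine similarity to an already-kept
# entry exceeds the 0.8 threshold.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("thenlper/gte-large-zh")  # assumed HF repo id

def deduplicate(instructions: list[str], threshold: float = 0.8) -> list[int]:
    """Return the indices of the instructions to keep."""
    # Normalized embeddings make the dot product equal to cosine similarity.
    emb = model.encode(instructions, normalize_embeddings=True)
    kept: list[int] = []
    for i in range(len(instructions)):
        sims = emb[kept] @ emb[i] if kept else np.empty(0)
        if sims.size == 0 or sims.max() < threshold:
            kept.append(i)
    return kept
```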

#### **Task Type and Text Length Statistics**

*(Figure: counts per task type and text-length distributions in smoltalk-chinese.)*

## Experiments

#### **Experimental Validation**

To verify the fine-tuning effectiveness of the **smoltalk-chinese** dataset, the following experimental design was implemented:

1. **Base Model**
   The base model was **opencsg/csg-wukong-ablation-chinese-fineweb-edu**, a 2B model pretrained on the **chinese-fineweb-edu** dataset.

2. **Fine-tuning Process**
   The base model was fine-tuned separately on **smoltalk-chinese**, **Magpie-Qwen2-Pro-200K-Chinese**, and **infinity-instruct** (for the latter, the Chinese portions of its 7M and Gen subsets, approximately 1M entries), with the following settings (see the configuration sketch after the results table):
   - **Epochs**: 2
   - **Learning Rate**: 3e-4
   - **Scheduler**: Cosine decay
   - **Global Batch Size**: 32

3. **Evaluation Results**
   The models' Chinese conversational capabilities were evaluated on [**AlignBench**](https://github.com/THUDM/AlignBench). The model fine-tuned on **smoltalk-chinese** showed significant advantages across multiple metrics, confirming the dataset's effectiveness in improving Chinese language model performance.

| Dataset | Professional Skills | Chinese Comprehension | Basic Tasks | Math Calculation | Text Writing | General Q&A | Role Playing | Logical Reasoning | Chinese Reasoning | Chinese Language | Total Score |
| ----------------------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| smoltalk-chinese              | 3.77 | 3.43 | 3.24 | 1.94 | 3.47 | 5.08 | 3.59 | 2.55 | 2.25 | 3.76 | 3.00 |
| infinity-instruct             | 2.63 | 2.12 | 1.84 | 1.29 | 3.48 | 4.32 | 3.46 | 1.58 | 1.44 | 2.97 | 2.20 |
| Magpie-Qwen2-Pro-200K-Chinese | 2.68 | 2.72 | 2.53 | 1.44 | 3.73 | 4.03 | 3.50 | 2.13 | 1.78 | 3.20 | 2.49 |
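The report does not state which training framework was used; below is a minimal sketch of the listed hyperparameters using TRL's `SFTTrainer`, where the dataset repo id and per-device batch layout are assumptions.

```python
# Hedged sketch of the fine-tuning recipe: 2 epochs, lr 3e-4, cosine decay,
# global batch size 32. The actual training stack is not specified; the
# dataset repo id and GPU layout below are illustrative.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("opencsg/smoltalk-chinese", split="train")  # assumed repo id

config = SFTConfig(
    output_dir="csg-wukong-2b-sft",
    num_train_epochs=2,
    learning_rate=3e-4,
    lr_scheduler_type="cosine",
    per_device_train_batch_size=4,   # 4 per device x 8 GPUs = global batch size 32
    gradient_accumulation_steps=1,
    bf16=True,
)

trainer = SFTTrainer(
    model="opencsg/csg-wukong-ablation-chinese-fineweb-edu",
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```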


The fine-tuned model is released as **opencsg/csg-wukong-2b-smoltalk-chinese**.

**We warmly invite developers and researchers interested in this field to follow and engage with the community, working together to advance the technology. Stay tuned for the open-source release of the dataset!**

## License Agreement

Usage of the Chinese SmolTalk dataset requires adherence to the OpenCSG Community License. The Chinese SmolTalk dataset supports commercial use. If you plan to use the OpenCSG model or its derivatives for commercial purposes, you must comply with the terms and conditions outlined in the OpenCSG Community License as well as the Apache 2.0 License. For commercial use, please send an email to lorraineg@opencsg.com to obtain permission.

## Citation

```
@misc{yu2025opencsgchinesecorpusseries,
      title={OpenCSG Chinese Corpus: A Series of High-quality Chinese Datasets for LLM Training},
      author={Yijiong Yu and Ziyun Dai and Zekun Wang and Wei Wang and Ran Chen and Ji Pei},
      year={2025},
      eprint={2501.08197},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.08197},
}
```