Update README.md
## Key Features

- **Korean-Centric Reasoning with English Support**: Optimized primarily for Korean reasoning tasks while providing full support for English, enabling robust bilingual usage.
- **Language-Mixed Chain-of-Thought**: Employs a Language-Mixed Chain-of-Thought strategy that interleaves Korean and English during the thought process, improving both consistency and accuracy in complex reasoning.
- **Specialized in Science & Technology**: Trained with a strong emphasis on scientific and technological domains, making it well suited for expert-level queries in these areas.
- **Base Model**: Built upon [KONI-Llama3.1-8B-Instruct-20241024](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024), derived from the Llama-3.1-8B-Instruct lineage.
- **Alignment**: Enhanced through Supervised Fine-Tuning (SFT) on 260k Language-Mixed Chain-of-Thought (CoT) examples, tailored for bilingual (Korean/English) reasoning and terminology preservation.
- **Strengths**: Demonstrates substantial performance gains across diverse reasoning benchmarks, delivering coherent and complex reasoning in both Korean and English across a broad spectrum of tasks such as science and technology queries, general knowledge, mathematics, and logical problem-solving.
- **Intended Use**: Designed for science and technology Q&A, mathematical and logical problem-solving, Korean document understanding, and as a reasoning backbone for agent systems.
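For illustration, a minimal inference sketch with the Hugging Face Transformers `pipeline` API. This is an assumption-laden sketch, not this card's official quick-start: the repo id below is the *base* model named above (substitute the final model's id once published), and the `build_messages` helper is our own convenience wrapper, not part of any library.

```python
def build_messages(question: str) -> list:
    """Wrap a user question in the chat-format message list expected by
    Transformers chat pipelines. Helper name is ours, for illustration only."""
    return [
        {"role": "system", "content": "You are a helpful bilingual (Korean/English) assistant."},
        {"role": "user", "content": question},
    ]


if __name__ == "__main__":
    # Heavy import kept out of module scope; requires `transformers` + a GPU
    # for reasonable speed with an 8B model.
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024",  # assumed id: the base model from this card
        torch_dtype="auto",
        device_map="auto",
    )
    # Korean science question; the model may interleave Korean and English
    # in its reasoning ("리튬 이온 배터리의 작동 원리를 설명해 주세요." =
    # "Please explain how a lithium-ion battery works.")
    out = pipe(build_messages("리튬 이온 배터리의 작동 원리를 설명해 주세요."), max_new_tokens=512)
    print(out[0]["generated_text"][-1]["content"])
```

Passing a message list (rather than a raw string) lets the pipeline apply the model's chat template automatically, which matters for instruction-tuned Llama-3.1 derivatives.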