yangdonghun3 committed
Commit 0e27fa4 · verified · 1 Parent(s): f99b40e

Update README.md

Files changed (1): README.md (+3 / -3)
README.md CHANGED
```diff
@@ -58,10 +58,10 @@ base_model:
 ## Key Features
 
 - **Korean-Centric Reasoning with English Support**: Optimized primarily for Korean reasoning tasks while providing full support for English, enabling robust bilingual usage.
-- **Language-Mixed Chain-of-Thought**: Employs a reasoning strategy that interleaves Korean and English during the thought process, improving both consistency and accuracy in complex reasoning.
+- **Language-Mixed Chain-of-Thought**: Employs a Language-Mixed Chain-of-Thought strategy that interleaves Korean and English during the thought process, improving both consistency and accuracy in complex reasoning.
 - **Specialized in Science & Technology**: Trained with strong emphasis on scientific and technological domains, making it well-suited for expert-level queries in these areas.
-- **Base Model**: Built upon [KONI-Llama3.1-8B-Instruct-20241024](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024), derived from the Llama-3.1-8B-Instruct lineage, ensuring compatibility with existing Llama-3.1 infrastructure.
-- **Alignment**: Enhanced through Supervised Fine-Tuning (SFT) tailored for Korean and English reasoning and terminology preservation.
+- **Base Model**: Built upon [KONI-Llama3.1-8B-Instruct-20241024](https://huggingface.co/KISTI-KONI/KONI-Llama3.1-8B-Instruct-20241024), derived from the Llama-3.1-8B-Instruct lineage.
+- **Alignment**: Enhanced through Supervised Fine-Tuning (SFT) on 260k Language-Mixed Chain-of-Thought (CoT) examples, tailored for bilingual (Korean/English) reasoning and terminology preservation.
 - **Strengths**: Demonstrating substantial performance gains across diverse reasoning benchmarks, this model provides coherent and complex reasoning in both Korean and English, capable of addressing a broad spectrum of tasks such as science and technology queries, general knowledge, mathematics, and logical problem-solving.
 - **Intended Use**: Designed for science and technology Q&A, mathematical and logical problem-solving, Korean document understanding, and as a reasoning backbone for agent systems.
 
```