killawhale2 committed on
Commit • f74b172
1 Parent(s): 9442df0
Update README.md
README.md CHANGED

@@ -11,7 +11,7 @@ language:
 
 # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
 
-**(This model is [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned version for single-turn conversation.
+**(This model is [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned version for single-turn conversation.)**
 
 
 # **Introduction**
@@ -20,8 +20,8 @@ We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It's compa
 
 We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
 
-Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table
-Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
+Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table.
+Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
 
 # **Instruction Fine-Tuning Strategy**
 
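As a companion to the "single-turn conversation" note in the updated card text above, here is a minimal usage sketch with Hugging Face `transformers`. The repo id (`upstage/SOLAR-10.7B-Instruct-v1.0`), the presence of a chat template, and the generation settings are assumptions not stated in this diff.

```python
# Minimal single-turn generation sketch; repo id and chat template are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/SOLAR-10.7B-Instruct-v1.0"  # assumed repo id for this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision keeps the 10.7B weights manageable
    device_map="auto",
)

# A single user turn, matching the single-turn fine-tuning note in the card.
conversation = [{"role": "user", "content": "Explain depth up-scaling in one paragraph."}]
input_ids = tokenizer.apply_chat_template(
    conversation, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, use_cache=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```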
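The Depth Up-Scaling paragraph in the diff describes the recipe only at a high level (Llama2 architecture, Mistral 7B weights integrated into the upscaled layers, then continued pre-training). The sketch below illustrates the layer-splicing step under assumptions taken from the SOLAR paper rather than from this card: a 32-layer base is duplicated and the two copies are joined with 8 layers trimmed from each, giving 48 layers.

```python
# Layer-splicing sketch of depth up-scaling (not the authors' training code).
# Assumed from the SOLAR paper: 32-layer base, trim 8 layers per copy -> 48 layers.
import copy
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # assumed base checkpoint supplying the weights
    torch_dtype=torch.bfloat16,
)

layers = base.model.layers                                # nn.ModuleList of 32 decoder blocks
bottom = [layers[i] for i in range(24)]                   # copy A: layers 0..23
top = [copy.deepcopy(layers[i]) for i in range(8, 32)]    # copy B: layers 8..31

base.model.layers = torch.nn.ModuleList(bottom + top)     # 48 layers total
base.config.num_hidden_layers = len(base.model.layers)
# Cache bookkeeping (each block's layer_idx) is omitted from this sketch; the
# up-scaled model would then undergo continued pre-training as the card describes.
```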