---
license: apache-2.0
---

# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**

**(This model is a fine-tuned version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) for single-turn conversation. A detailed description will be added.)**


# **Introduction**

We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It is compact yet remarkably powerful, and demonstrates state-of-the-art performance among models with fewer than 30B parameters.

SOLAR-10.7B was built with our Depth Up-Scaling technique: starting from the Llama 2 architecture, we deepened the network, initialized the upscaled layers with Mistral 7B weights, and then continued pre-training the entire model.
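
As a rough illustration, one way depth up-scaling can be realized is to duplicate a base model's decoder layers, trim the overlapping ends of the two copies, and concatenate them before continued pre-training. The sketch below is a hypothetical reconstruction, not Upstage's implementation; the base checkpoint, the trim size `m`, and the layer counts are our assumptions.

```python
import copy
import torch.nn as nn
from transformers import AutoModelForCausalLM

# Hypothetical depth up-scaling sketch: duplicate, trim, and concatenate layers.
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
layers = base.model.layers          # e.g. 32 decoder layers
n, m = len(layers), 8               # m = layers trimmed from each copy (assumed)

# First copy keeps layers [0, n-m); second copy keeps layers [m, n).
# Result: 2 * (n - m) = 48 layers when n = 32 and m = 8.
upscaled = nn.ModuleList(
    [copy.deepcopy(layer) for layer in layers[: n - m]]
    + [copy.deepcopy(layer) for layer in layers[m:]]
)
base.model.layers = upscaled
base.config.num_hidden_layers = len(upscaled)
# The deepened model is then continually pre-trained to recover performance.
```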

Depth-upscaled SOLAR-10.7B performs remarkably well: it outperforms models with up to 30B parameters and even surpasses the recent Mixtral 8x7B model. For detailed information, please refer to the experimental table ([link to be updated soon]).
SOLAR-10.7B is also an ideal base for fine-tuning, offering robustness and adaptability for your fine-tuning needs. Simple instruction fine-tuning of the SOLAR-10.7B pre-trained model yields significant performance improvements. [[link to be updated soon]]

# **Instruction Fine-Tuning Strategy**

We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
Using Alpaca- and OpenOrca-style open-source datasets together with generated synthetic datasets, we apply iterative DPO training, a proprietary alignment strategy, to maximize the performance of the resulting model.

[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.
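
For reference, a minimal sketch of the DPO objective from [1]: given summed log-probabilities of the chosen and rejected responses under the policy and a frozen reference model, the loss pushes the policy's preference margin above the reference's. The function and variable names here are ours for illustration, not from Upstage's training code.

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """DPO loss over a batch of preference pairs (Rafailov et al., 2023)."""
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # -log(sigmoid(beta * margin)) rewritten as softplus(-beta * margin)
    return F.softplus(-beta * (policy_logratio - ref_logratio)).mean()
```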

# **Evaluation Results**

H6 denotes the average score over the six evaluation benchmarks of the Hugging Face Open LLM Leaderboard.

| Model                                  | H6    | Model Size |
|----------------------------------------|-------|------------|
| **SOLAR-10.7B-Instruct-v1.0**          | **74.20** | **~ 11B** |
| mistralai/Mixtral-8x7B-Instruct-v0.1   | 72.62 | ~ 46.7B    |
| 01-ai/Yi-34B-200K                      | 70.81 | ~ 34B      |
| 01-ai/Yi-34B                           | 69.42 | ~ 34B      |
| mistralai/Mixtral-8x7B-v0.1            | 68.42 | ~ 46.7B    |
| meta-llama/Llama-2-70b-hf              | 67.87 | ~ 70B      |
| tiiuae/falcon-180B                     | 67.85 | ~ 180B     |
| **SOLAR-10.7B-v1.0**                   | **66.04** | **~ 11B** |
| Qwen/Qwen-14B                          | 65.86 | ~ 14B      |
| mistralai/Mistral-7B-Instruct-v0.2     | 65.71 | ~ 7B       |
| 01-ai/Yi-34B-Chat                      | 65.32 | ~ 34B      |
| meta-llama/Llama-2-70b-chat-hf         | 62.40 | ~ 70B      |
| mistralai/Mistral-7B-v0.1              | 60.97 | ~ 7B       |
| mistralai/Mistral-7B-Instruct-v0.1     | 54.96 | ~ 7B       |
# **Usage Instructions**

This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn chat.

### **Version**

Make sure you have the correct version of the transformers library installed:

```sh
pip install transformers==4.35.2
```

### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "Upstage/SOLAR-10.7B-Instruct-v1.0",
    device_map="auto",          # place weights across available devices automatically
    torch_dtype=torch.float16,  # load in half precision to reduce memory use
)
```

### **Conducting Single-Turn Conversation**

```python
# Build the prompt for a single-turn conversation using the chat template.
conversation = [{'role': 'user', 'content': 'Hello?'}]

prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```

Below is an example of the output.
```
<s> ### User:
Hello?

### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
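
If more varied replies are desired, the standard sampling arguments of `generate` can be used instead of the defaults above. The parameter values below are illustrative choices of ours, not an official recommendation.

```python
outputs = model.generate(
    **inputs,
    use_cache=True,
    max_new_tokens=512,  # cap newly generated tokens rather than total length
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.7,     # illustrative values, tune for your use case
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```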

### **The Upstage AI Team** ###
Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai 

### **Contact Us** ###
For any questions or suggestions, please use the discussion tab. If you would like to contact us directly, send an email to [contact@upstage.ai](mailto:contact@upstage.ai).