---
license: apache-2.0
language:
- en
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/janhq/jan/assets/89722390/35daac7d-b895-487c-a6ac-6663daaad78e" alt="Jan banner" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>

<p align="center">
<a href="https://jan.ai/">Jan</a>
- <a href="https://discord.gg/AsJ8krTT3N">Discord</a>
</p>
<!-- header end -->

# Model Description

This model uses the `DARE` method to merge [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) with the top three models on the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) as of 12 Dec:

1. [OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
2. [MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling)
3. [v1olet_marcoroni-go-bruins-merge-7B](https://huggingface.co/v1olet/v1olet_marcoroni-go-bruins-merge-7B)

- base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)

The YAML config file for this model is:

```yaml
base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: bfloat16
merge_method: dare_ties
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
  - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
    parameters:
      density: 0.8
      weight: 0.4
  - model: Q-bert/MetaMath-Cybertron-Starling
    parameters:
      density: 0.8
      weight: 0.3
  - model: v1olet/v1olet_marcoroni-go-bruins-merge-7B
    parameters:
      density: 0.8
      weight: 0.3
parameters:
  int8_mask: true
```
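
For intuition, DARE sparsifies each model's parameter delta (fine-tuned weights minus base weights) by randomly dropping entries and rescaling the survivors so the expected delta is preserved; `density` is the fraction kept. Below is a minimal toy sketch of that idea (illustrative only, not mergekit's actual implementation):

```python
import random

def dare_drop_and_rescale(delta, density, seed=0):
    """Toy DARE step: keep each delta entry with probability `density`,
    rescale survivors by 1/density so the expected delta is unchanged;
    dropped entries become 0.0."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

deltas = [0.2, -0.5, 0.1, 0.4, -0.3]
sparse = dare_drop_and_rescale(deltas, density=0.8)
```

To produce the actual merge, mergekit's `mergekit-yaml` entry point consumes a config like the one above.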

# Prompt template

- **ChatML**

```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

- **Alpaca**

```
{system_message}

### Instruction:
{prompt}

### Response:
```
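
The ChatML template above can be assembled with a small helper like the following (a minimal sketch; in practice the tokenizer's chat template can do this for you):

```python
def format_chatml(system_message: str, prompt: str) -> str:
    """Build a ChatML prompt string matching the template above,
    leaving the assistant turn open for generation."""
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

text = format_chatml("You are a helpful assistant.", "What is DARE merging?")
```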

# About Jan

Jan believes in the need for an open-source AI ecosystem and is building the infrastructure and tooling to allow open-source AIs to compete on a level playing field with proprietary ones.

Jan's long-term vision is to build a cognitive framework for future robots that serve as practical, useful assistants for humans and businesses in everyday life.

# Jan Model Merger

This is a test project for merging models.

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|-------|
| Avg.                | ?     |
| ARC (25-shot)       | ?     |
| HellaSwag (10-shot) | ?     |
| MMLU (5-shot)       | ?     |
| TruthfulQA (0-shot) | ?     |
| Winogrande (5-shot) | ?     |
| GSM8K (5-shot)      | ?     |

# Acknowledgement

- [mergekit](https://github.com/cg123/mergekit)
- [DARE](https://github.com/yule-BUAA/MergeLM/blob/main/README.md)
- [SLERP](https://github.com/Digitous/LLM-SLERP-Merge)
- [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness)