---
license: apache-2.0
tags:
- moe
- merge
- mergekit
- lazymergekit
- DopeorNope/SOLARC-M-10.7B
- maywell/PiVoT-10.7B-Mistral-v0.2-RP
- kyujinpy/Sakura-SOLAR-Instruct
- jeonsworld/CarbonVillain-en-10.7B-v1
---

# Lumosia-MoE-4x10.7

The name Lumosia was chosen because it's a MoE of multiple SOLAR merges, so it really "Lights the way"... it's 3am.

This is a very experimental model: an MoE of SOLAR models that perform well (based on personal experience rather than the open leaderboard).

Why? Honestly, just wanted to see what would happen.

Context length is maybe 32k (unverified). GGUF quants are still uploading.


Template:
```
### System:

### USER:{prompt}

### Assistant:
```
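
For reference, here's a minimal sketch of filling this template in Python (the `build_prompt` helper and its default empty system text are illustrative, not something shipped with the model):

```python
def build_prompt(user_message: str, system: str = "") -> str:
    # Illustrative helper: formats a single-turn prompt using the
    # ### System / ### USER / ### Assistant template shown above.
    return f"### System:\n{system}\n\n### USER:{user_message}\n\n### Assistant:"

print(build_prompt("Explain what a Mixture of Experts is."))
```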

Lumosia-MoE-4x10.7 is a Mixture of Experts (MoE) made with the following models:
* [DopeorNope/SOLARC-M-10.7B](https://huggingface.co/DopeorNope/SOLARC-M-10.7B)
* [maywell/PiVoT-10.7B-Mistral-v0.2-RP](https://huggingface.co/maywell/PiVoT-10.7B-Mistral-v0.2-RP)
* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)

## Evals

* Pending


## 🧩 Configuration

```yaml
base_model: DopeorNope/SOLARC-M-10.7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: DopeorNope/SOLARC-M-10.7B
    positive_prompts: [""]
  - source_model: maywell/PiVoT-10.7B-Mistral-v0.2-RP
    positive_prompts: [""]
  - source_model: kyujinpy/Sakura-SOLAR-Instruct
    positive_prompts: [""]
  - source_model: jeonsworld/CarbonVillain-en-10.7B-v1
    positive_prompts: [""]
```
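
A merge like this can be reproduced with mergekit's MoE script. A minimal sketch, assuming mergekit is installed and the config above is saved as `config.yaml` (the exact command and flags may vary between mergekit versions):

```python
import subprocess

# Invoke mergekit-moe on the config; writes the merged MoE to ./merge.
# Verify the command and flags against your installed mergekit version.
subprocess.run(["mergekit-moe", "config.yaml", "merge"], check=True)
```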

## 💻 Usage

```python
# Install dependencies first (e.g. in a notebook):
# !pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "Steelskull/Lumosia-MoE-4x10.7"

tokenizer = AutoTokenizer.from_pretrained(model)
# Build a text-generation pipeline, loading the model in 4-bit
# (via bitsandbytes) to reduce VRAM usage.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a reply.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```