Update README.md
README.md CHANGED
@@ -11,9 +11,20 @@ tags:
 # Lumina-RP
 
 Lumina-RP is a Mixture of Experts (MoE) made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing).
-This model has improved roleplaying and storytelling from [Lumina-3.5](https://huggingface.co/Ppoyaa/Lumina-3.5) while still retaining its strength.
+This model has improved roleplaying and storytelling from [Lumina-3.5](https://huggingface.co/Ppoyaa/Lumina-3.5) while still retaining its strength, with a slight improvement on the Open LLM Leaderboard.
 It uses a context window of up to 32k.
 
+# 🏆 Open LLM Leaderboard Evaluation Results
+| Metric                           |Value|
+|----------------------------------|----:|
+|Avg.                              |75.59|
+|AI2 Reasoning Challenge (25-Shot) |72.61|
+|HellaSwag (10-Shot)               |88.45|
+|MMLU (5-Shot)                     |64.73|
+|TruthfulQA (0-shot)               |74.24|
+|Winogrande (5-shot)               |83.90|
+|GSM8k (5-shot)                    |69.60|
+
 ## 💻 Usage
 
 ```python
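The hunk ends at the opening of the card's Python usage block, whose body is not part of this diff. As a rough sketch only, not the card's own snippet, loading a LazyMergekit-merged MoE like this with 🤗 Transformers typically looks like the following; the repo id `Ppoyaa/Lumina-RP` is an assumption inferred from the card, and the prompt and generation settings are purely illustrative.

```python
# Hypothetical usage sketch; the repo id and generation settings are assumptions,
# not taken from this commit.
import torch
import transformers
from transformers import AutoTokenizer

model_id = "Ppoyaa/Lumina-RP"  # assumption: actual Hub repo id may differ

# Load the tokenizer and build a text-generation pipeline in half precision.
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Format a chat-style prompt with the model's chat template, then generate.
messages = [{"role": "user", "content": "Write a short scene between two rival adventurers."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(out[0]["generated_text"])
```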