---
license: bigscience-openrail-m
---

This model is a merge of 80% starchatplus_beta and 20% wizardcoder.

It is intended as research into the merging and routing of experts.
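The exact merge procedure for this model is not published here, but an 80/20 merge of this kind is commonly a parameter-wise weighted average of the two checkpoints. The sketch below illustrates that idea on toy dictionaries standing in for the two models' state dicts; the function name and the use of plain Python lists (rather than real tensors) are illustrative assumptions, not this model's actual recipe.

```python
# Illustrative sketch of a weighted linear merge of two checkpoints.
# The 0.8 / 0.2 split mirrors the card; the real merge method for this
# model may differ (e.g. per-layer weighting or expert routing).

def merge_state_dicts(sd_a, sd_b, alpha=0.8):
    """Parameter-wise weighted average: alpha * a + (1 - alpha) * b."""
    assert sd_a.keys() == sd_b.keys(), "checkpoints must share an architecture"
    return {
        name: [alpha * wa + (1 - alpha) * wb
               for wa, wb in zip(sd_a[name], sd_b[name])]
        for name in sd_a
    }

# Toy stand-ins for starchatplus_beta (a) and wizardcoder (b):
a = {"layer.weight": [1.0, 0.0]}
b = {"layer.weight": [0.0, 1.0]}
merged = merge_state_dicts(a, b, alpha=0.8)
print(merged["layer.weight"])  # roughly [0.8, 0.2]
```

Both checkpoints must share the same architecture and parameter names for this to be well defined; merging models with different shapes requires a more involved mapping.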

"multiple-py": {
    "pass@1": 0.36645962732919257
  }

*Note: this score uses only a 0.1 sample of the eval, for testing purposes.*
hf-causal (pretrained=Multi-Domain-Expert-Layers/scorpius_16b,dtype=bfloat16), limit: 0.1, provide_description: False, num_fewshot: 0, batch_size: None
|                      Task                       |Version|  Metric   | Value |   |Stderr|
|-------------------------------------------------|------:|-----------|------:|---|-----:|
|arc_challenge                                    |      0|acc        | 0.4103|±  |0.0457|
|                                                 |       |acc_norm   | 0.4103|±  |0.0457|
|arc_easy                                         |      0|acc        | 0.7350|±  |0.0410|
|                                                 |       |acc_norm   | 0.6923|±  |0.0429|
|hellaswag                                        |      0|acc        | 0.5812|±  |0.0458|
|                                                 |       |acc_norm   | 0.7778|±  |0.0386|