
# arco+

This is an untrained passthrough merge based on arco and danube, a first step toward a reasoning language model that is small yet generalizes across all kinds of reasoning tasks.

## Benchmarks

| Parameters | Model | MMLU | ARC | HellaSwag | PIQA | Winogrande | Average |
|------------|-----------|-------|-------|-----------|-------|------------|---------|
| 488M | arco-lite | 23.22 | 33.45 | 56.55 | 69.70 | 59.19 | 48.46 |
| 773M | arco-plus | 23.06 | 36.43 | 60.09 | 72.36 | 60.46 | 50.48 |
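The card does not state how these scores were produced; one common way to obtain numbers for these five tasks is EleutherAI's lm-evaluation-harness. A sketch (the task names, batch size, and use of the harness itself are assumptions, not confirmed by the card):

```shell
pip install lm-eval

# Evaluate the merged model on the five benchmarks from the table above.
lm_eval --model hf \
  --model_args pretrained=appvoid/arco-plus \
  --tasks mmlu,arc_challenge,hellaswag,piqa,winogrande \
  --batch_size 8
```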

## Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: appvoid/arco
      layer_range: [0, 14]
  - sources:
    - model: h2oai/h2o-danube3-500m-base
      layer_range: [4, 16]

merge_method: passthrough
dtype: float16
```
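As a sanity check, the depth implied by the slice definition can be computed directly: a passthrough merge stacks the listed layer ranges in order. A minimal sketch (assuming mergekit's half-open `[start, end)` convention for `layer_range`):

```python
# The slice definition from the YAML above, transcribed as Python data.
slices = [
    {"model": "appvoid/arco", "layer_range": [0, 14]},
    {"model": "h2oai/h2o-danube3-500m-base", "layer_range": [4, 16]},
]

# Passthrough merging concatenates the slices, so the merged depth is
# the sum of the half-open [start, end) ranges.
total_layers = sum(end - start
                   for s in slices
                   for start, end in [s["layer_range"]])
print(total_layers)  # 14 layers from arco + 12 from danube
```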
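Since this repository ships GGUF quantizations, one way to run the model locally is through llama-cpp-python. A minimal sketch (the quantization filename is hypothetical; substitute whichever bit width you downloaded):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Hypothetical filename; any of the available 2- to 8-bit quants works.
llm = Llama(model_path="arco-plus.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Question: If a train travels 60 km in 40 minutes, what is its speed in km/h?\nAnswer:",
    max_tokens=64,
    stop=["\n"],
)
print(out["choices"][0]["text"])
```

Lower-bit quants trade answer quality for memory; for a 773M-parameter model, even the 8-bit file stays under 1 GB.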