---
license: apache-2.0
---


arco consistently outperforms every sota model below 600m parameters, as well as some 1b base models, on average across 5 core benchmarks, and is competitive with the best 0.7b-1b llms. arco is a merge of multiple internal models fine-tuned on a diverse set of styles, merged again with several external models (including palmer-004-turbo and danube3-chat), and finally merged with the base model to preserve its original knowledge.
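
to try it locally, a minimal sketch with 🤗 transformers should work; the repo id `appvoid/arco` is assumed from this page:

```python
# minimal sketch: load arco with transformers (repo id appvoid/arco assumed from this page)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("appvoid/arco")
model = AutoModelForCausalLM.from_pretrained("appvoid/arco")

# quick sanity check: parameter count should land around 0.5b
print(f"{sum(p.numel() for p in model.parameters()) / 1e9:.2f}b parameters")
```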

### prompt

there is no prompt format intentionally set, but this one worked really well for me:

```
The following is a conversation between a super smart AI assistant and a user.

user: <your question>

assistant:
```
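
as a sketch, here is how that template could be plugged into a quick generation call; the repo id, example question, and generation settings are assumptions, not the settings used for the benchmarks below:

```python
# sketch: greedy generation with the prompt template above (repo id appvoid/arco assumed)
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("appvoid/arco")
model = AutoModelForCausalLM.from_pretrained("appvoid/arco")

prompt = (
    "The following is a conversation between a super smart AI assistant and a user.\n\n"
    "user: what is the capital of france?\n\n"
    "assistant:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)

# print only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```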

### benchmarks

zero-shot evaluations performed on current sota ~0.5b models (a couple of 1.1b models are included for reference).

| parameters | model | MMLU | ARC-C | HellaSwag | PIQA | Winogrande | average |
|------------|-------|------|-------|-----------|------|------------|---------|
| 0.5b | qwen2 | 44.13 | 28.92 | 49.05 | 69.31 | 56.99 | 49.68 |
| 1.1b | tinyllama | 25.77 | 30.29 | 59.35 | 73.29 | 59.59 | 49.66 |
| 0.5b | danube3-base | 24.81 | 36.18 | 60.46 | 73.78 | 61.01 | 51.25 |
| 0.5b | danube3-chat | 25.54 | 36.26 | 60.72 | 74.32 | 61.40 | 51.64 |
| 0.5b | palmer-004-turbo | 27.36 | 35.58 | 61.79 | 73.67 | 61.17 | 51.91 |
| 1.1b | palmer-004 | 26.61 | 34.90 | 61.73 | 74.81 | 64.17 | 52.44 |
| 0.5b | arco | 26.17 | 37.29 | 62.88 | 74.37 | 62.27 | 52.60 |
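
the card doesn't state which harness produced these numbers; if you want to reproduce them, a rough sketch using EleutherAI's lm-evaluation-harness (v0.4+) in zero-shot mode could look like this, where the task names and repo id are assumptions and scores may not match the table exactly:

```python
# rough sketch: zero-shot eval with lm-evaluation-harness (pip install lm-eval)
# task names and repo id are assumptions; this may not be the setup used for the table above
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=appvoid/arco",
    tasks=["mmlu", "arc_challenge", "hellaswag", "piqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```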

### supporters

Buy Me A Coffee

### trivia

arco comes from the spanish word for "bow", which is always associated with arrows and hence, speed.