How to compute the aggregate score?

#35
by mornmirror - opened

For each row in eval_results.csv, I tried to calculate the average of the metrics across all tasks, but the result does not equal agg_score. How is this score calculated?
Additionally, I ran the evaluation with lighteval, and the results include a metric called "all (acc / acc_norm)". Is it equivalent to agg_score?

HuggingFaceFW org

It is the average of these columns: ['commonsense_qa/acc_norm', 'hellaswag/acc_norm', 'openbookqa/acc_norm', 'piqa/acc_norm', 'siqa/acc_norm', 'winogrande/acc_norm', 'arc/acc_norm', 'mmlu/acc_norm']
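A minimal sketch of recomputing it with pandas, assuming your eval_results.csv contains columns with exactly those names (column names taken from the reply above; the file layout is an assumption based on the question):

```python
import pandas as pd

# Columns averaged to produce agg_score, per the reply above
AGG_COLUMNS = [
    "commonsense_qa/acc_norm",
    "hellaswag/acc_norm",
    "openbookqa/acc_norm",
    "piqa/acc_norm",
    "siqa/acc_norm",
    "winogrande/acc_norm",
    "arc/acc_norm",
    "mmlu/acc_norm",
]

# Assumes eval_results.csv has one row per checkpoint/run and the columns above
df = pd.read_csv("eval_results.csv")

# Unweighted mean over the eight benchmark columns for each row
df["agg_score_recomputed"] = df[AGG_COLUMNS].mean(axis=1)

print(df["agg_score_recomputed"].head())
```

Note this averages only the eight acc_norm columns listed above, so taking the mean of every metric column in the file will give a different number.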

guipenedo changed discussion status to closed
