metrics
- how do you calculate the metrics for the public scores track1 to track4? Are lower values better or worse?
- for public_score, is it showing the accuracy or error rate?
Hi @picekl
I'm sorry for not providing more complete information. I've read the Dataset page, but I'm still unsure why the metrics are shown the way they are. For example, the page describes the loss as the sum of all the costs, but the leaderboard shows it as a fraction, so should I divide the sum by the total number of samples or by the total number of costs?
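To make the ambiguity concrete, here is a minimal sketch of the two normalizations I have in mind (the numbers are made up, and `costs` just stands in for the per-mistake costs the Dataset page mentions; this is not the actual evaluation code):

```python
costs = [2.0, 0.5, 1.0]  # hypothetical costs incurred on the test set
num_samples = 10         # hypothetical total number of test samples

summed = sum(costs)                 # the sum the Dataset page describes -> 3.5
per_sample = summed / num_samples   # divided by number of samples -> 0.35
per_cost = summed / len(costs)      # divided by number of costs -> ~1.167

print(summed, per_sample, per_cost)
```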
In addition, it seems like a higher public_score places you at a higher rank, so I would assume that public_score is the accuracy, but the Dataset page states that it is the error rate, so I'm confused about which one it is.
If public_score represents accuracy (please correct me if I'm wrong) and the rest of the tracks represent losses, am I right to say that a higher public_score and lower public_score_track values would net you a better result?
Hi @mrzave,
You are right. The first metric is accuracy-based; thus, higher == better. The rest are the opposite.
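A tiny sketch of the two directions, with hypothetical labels and a flat cost of 1.0 per mistake (illustrative only, not the actual evaluation code):

```python
predictions = ["cat", "dog", "dog", "bird"]
labels      = ["cat", "dog", "cat", "bird"]

# public_score: accuracy-based, so higher == better
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)

# public_score_track*: loss/cost-based, so lower == better
loss = sum(1.0 for p, y in zip(predictions, labels) if p != y) / len(labels)

print(accuracy, loss)  # 0.75 0.25
```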
However, leaderboard ranking will not matter as much as you might think.
We will provide 3 * 500E to the best papers, i.e., the best approaches described in a technical report and backed by reproducible results.
Best,
Lukas