---
title: Solbench Leaderboard
emoji: π
colorFrom: pink
colorTo: purple
sdk: gradio
app_file: app.py
pinned: true
datasets:
  - braindao/solbench-naive-judge-random-v1
  - braindao/solbench-naive-judge-openzeppelin-v1
  - braindao/solbench-humaneval-for-solidity-v1
  - braindao/solbench-humaneval-for-solidity-v2
license: apache-2.0
sdk_version: 4.40.0
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/5f19edf678d261307936f4c8/4v6TPbN8qa6JptyCFUy-J.png
---

# Start the configuration

Most of the variables to change for a default leaderboard are in `src/env.py` (replace the repository paths with your own leaderboard's) and `src/about.py` (for the task definitions).
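For orientation, here is a minimal sketch of the kind of constants `src/env.py` usually exposes in this leaderboard template. The exact names (`OWNER`, `QUEUE_REPO`, `RESULTS_REPO`, the cache paths) are illustrative assumptions; check your copy of the file for what it actually defines.

```python
# Sketch of src/env.py, assuming the standard demo-leaderboard layout.
# Constant names below are illustrative and may differ in your Space.
import os

from huggingface_hub import HfApi

TOKEN = os.environ.get("HF_TOKEN")  # write token for the queue/results repos

OWNER = "braindao"                 # replace with your org or username
QUEUE_REPO = f"{OWNER}/requests"   # dataset repo holding submission requests
RESULTS_REPO = f"{OWNER}/results"  # dataset repo holding evaluation results

# Local working copies of the two repos.
CACHE_PATH = os.getenv("HF_HOME", ".")
EVAL_REQUESTS_PATH = os.path.join(CACHE_PATH, "eval-queue")
EVAL_RESULTS_PATH = os.path.join(CACHE_PATH, "eval-results")

API = HfApi(token=TOKEN)
```
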
Results files should have the following format and be stored as JSON files:
```json
{
  "config": {
    "model_dtype": "torch.float16", # or torch.bfloat16 or 8bit or 4bit
    "model_name": "path of the model on the hub: org/model",
    "model_sha": "revision on the hub"
  },
  "results": {
    "task_name": {
      "metric_name": score
    },
    "task_name2": {
      "metric_name": score
    }
  }
}
```
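
For example, a results file could be produced like this. The output file name, model id, and task/metric names here are hypothetical; `src/leaderboard/read_evals.py` defines the naming your Space actually expects.

```python
# Illustrative only: writes one results file in the format above.
import json

entry = {
    "config": {
        "model_dtype": "torch.float16",
        "model_name": "my-org/my-model",  # hypothetical hub id
        "model_sha": "main",
    },
    "results": {
        "humaneval-for-solidity": {"pass@1": 0.42},  # hypothetical task/metric
    },
}

with open("results_my-org_my-model.json", "w") as f:
    json.dump(entry, f, indent=2)
```
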
# Code logic for more complex edits

You'll find:
- the main table's column names and properties in `src/display/utils.py`
- the logic to read all results and request files, then convert them into dataframe lines, in `src/leaderboard/read_evals.py` and `src/populate.py` (see the sketch after this list)
- the logic to allow or filter submissions in `src/submission/submit.py` and `src/submission/check_validity.py`
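
As a rough sketch of what the results-reading step does, the snippet below flattens results files into one dataframe row per model. The function and column names are illustrative assumptions, not the template's actual API; the real logic in `src/leaderboard/read_evals.py` and `src/populate.py` carries extra bookkeeping.

```python
# Illustrative sketch: turn results JSON files into dataframe rows.
import glob
import json
import os

import pandas as pd


def results_to_dataframe(results_dir: str) -> pd.DataFrame:
    rows = []
    for path in glob.glob(os.path.join(results_dir, "**", "*.json"), recursive=True):
        with open(path) as f:
            data = json.load(f)
        row = {"model": data["config"]["model_name"]}
        # Flatten {"task": {"metric": score}} into one column per task/metric.
        for task, metrics in data["results"].items():
            for metric, score in metrics.items():
                row[f"{task}/{metric}"] = score
        rows.append(row)
    return pd.DataFrame(rows)
```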