datasetId (string, 5-121 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-2.54M) | likes (int64, 0-6.35k) | tags (sequence, 1-7.92k items) | task_categories (sequence, 0-40 items) | createdAt (unknown) | card (string, 19-1M chars) |
---|---|---|---|---|---|---|---|---|
open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details | open-llm-leaderboard | "2024-11-21T13:49:09Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:45:45Z" | ---
pretty_name: Evaluation run of gmonsoon/StockSeaLLMs-7B-v1
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [gmonsoon/StockSeaLLMs-7B-v1](https://huggingface.co/gmonsoon/StockSeaLLMs-7B-v1)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details\"\
,\n\tname=\"gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-21T13-45-45.216237](https://huggingface.co/datasets/open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details/blob/main/gmonsoon__StockSeaLLMs-7B-v1/results_2024-11-21T13-45-45.216237.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.43068391866913125,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021308808857898823,\n \"\
acc_norm,none\": 0.47956933454403944,\n \"acc_norm_stderr,none\": 0.005332877202997923,\n\
\ \"acc,none\": 0.39519614361702127,\n \"acc_stderr,none\"\
: 0.00445720656433847,\n \"exact_match,none\": 0.17598187311178248,\n\
\ \"exact_match_stderr,none\": 0.009729917778735123,\n \"\
inst_level_loose_acc,none\": 0.5407673860911271,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_strict_acc,none\": 0.513189448441247,\n \
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.4066543438077634,\n \"prompt_level_strict_acc_stderr,none\": 0.021138283177336344,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5238673841346988,\n \"acc_norm_stderr,none\"\
: 0.006158688482621799,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.852,\n\
\ \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6256684491978609,\n \"acc_norm_stderr,none\"\
: 0.0354849234134303\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.58,\n\
\ \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.748,\n \
\ \"acc_norm_stderr,none\": 0.027513851933031318\n },\n \"\
leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\": \" \
\ - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.652,\n \"acc_norm_stderr,none\":\
\ 0.030186568464511673\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5684931506849316,\n \"acc_norm_stderr,none\": 0.041131302645371945\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.576,\n \
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\":\
\ 0.03153986449255664\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6404494382022472,\n \"acc_norm_stderr,none\": 0.03606913914074032\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.812,\n \"acc_norm_stderr,none\": 0.02476037772775051\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.456,\n\
\ \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.124,\n \"acc_norm_stderr,none\":\
\ 0.020886382258673272\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3028523489932886,\n\
\ \"acc_norm_stderr,none\": 0.013316733936515984,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2676767676767677,\n \"acc_norm_stderr,none\": 0.031544498882702825\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.304029304029304,\n\
\ \"acc_norm_stderr,none\": 0.019704024937907735\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.4066543438077634,\n \"prompt_level_strict_acc_stderr,none\": 0.021138283177336344,\n\
\ \"inst_level_strict_acc,none\": 0.513189448441247,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.43068391866913125,\n \"prompt_level_loose_acc_stderr,none\": 0.021308808857898823,\n\
\ \"inst_level_loose_acc,none\": 0.5407673860911271,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.17598187311178248,\n \"exact_match_stderr,none\"\
: 0.009729917778735123,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.3811074918566775,\n\
\ \"exact_match_stderr,none\": 0.02776327166045321\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \" \
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.10569105691056911,\n \"exact_match_stderr,none\": 0.0278344722877674\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.09090909090909091,\n\
\ \"exact_match_stderr,none\": 0.0251172256361608\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.025,\n \"exact_match_stderr,none\": 0.009346956263824575\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \"\
\ - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.14285714285714285,\n\
\ \"exact_match_stderr,none\": 0.028289929799333556\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.3005181347150259,\n \"exact_match_stderr,none\"\
: 0.033088185944157515\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.02962962962962963,\n \"exact_match_stderr,none\"\
: 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.39519614361702127,\n\
\ \"acc_stderr,none\": 0.00445720656433847\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42063492063492064,\n \"acc_norm_stderr,none\"\
: 0.017713270487861726,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\"\
: 0.029403146715355242\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.43068391866913125,\n \"prompt_level_loose_acc_stderr,none\": 0.021308808857898823,\n\
\ \"acc_norm,none\": 0.47956933454403944,\n \"acc_norm_stderr,none\"\
: 0.005332877202997923,\n \"acc,none\": 0.39519614361702127,\n \"\
acc_stderr,none\": 0.00445720656433847,\n \"exact_match,none\": 0.17598187311178248,\n\
\ \"exact_match_stderr,none\": 0.009729917778735123,\n \"inst_level_loose_acc,none\"\
: 0.5407673860911271,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.513189448441247,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.4066543438077634,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021138283177336344,\n \"\
alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.5238673841346988,\n \"acc_norm_stderr,none\": 0.006158688482621799,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6256684491978609,\n \"acc_norm_stderr,none\"\
: 0.0354849234134303\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.748,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.652,\n \"acc_norm_stderr,none\": 0.030186568464511673\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5684931506849316,\n\
\ \"acc_norm_stderr,none\": 0.041131302645371945\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6404494382022472,\n \"acc_norm_stderr,none\"\
: 0.03606913914074032\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.812,\n \"acc_norm_stderr,none\": 0.02476037772775051\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.124,\n \"acc_norm_stderr,none\": 0.020886382258673272\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3028523489932886,\n\
\ \"acc_norm_stderr,none\": 0.013316733936515984,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2676767676767677,\n\
\ \"acc_norm_stderr,none\": 0.031544498882702825\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.304029304029304,\n \"acc_norm_stderr,none\": 0.019704024937907735\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \"alias\":\
\ \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.4066543438077634,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021138283177336344,\n \
\ \"inst_level_strict_acc,none\": 0.513189448441247,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.43068391866913125,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.021308808857898823,\n \"inst_level_loose_acc,none\"\
: 0.5407673860911271,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.17598187311178248,\n\
\ \"exact_match_stderr,none\": 0.009729917778735123,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.3811074918566775,\n \"exact_match_stderr,none\": 0.02776327166045321\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.10569105691056911,\n \"exact_match_stderr,none\": 0.0278344722877674\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.09090909090909091,\n \"exact_match_stderr,none\"\
: 0.0251172256361608\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.025,\n \"exact_match_stderr,none\": 0.009346956263824575\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.14285714285714285,\n \"exact_match_stderr,none\"\
: 0.028289929799333556\n },\n \"leaderboard_math_prealgebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.3005181347150259,\n \"exact_match_stderr,none\": 0.033088185944157515\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.02962962962962963,\n\
\ \"exact_match_stderr,none\": 0.014648038602753809\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.39519614361702127,\n\
\ \"acc_stderr,none\": 0.00445720656433847\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.42063492063492064,\n \"acc_norm_stderr,none\"\
: 0.017713270487861726,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.328125,\n\
\ \"acc_norm_stderr,none\": 0.029403146715355242\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n }\n}\n```"
repo_url: https://huggingface.co/gmonsoon/StockSeaLLMs-7B-v1
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_ifeval
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-45-45.216237.jsonl'
- config_name: gmonsoon__StockSeaLLMs-7B-v1__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T13_45_45.216237
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-45-45.216237.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-45-45.216237.jsonl'
---
# Dataset Card for Evaluation run of gmonsoon/StockSeaLLMs-7B-v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [gmonsoon/StockSeaLLMs-7B-v1](https://huggingface.co/gmonsoon/StockSeaLLMs-7B-v1)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details",
name="gmonsoon__StockSeaLLMs-7B-v1__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
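
You can also enumerate the available configurations programmatically before picking one. The following is a minimal sketch assuming the standard `datasets` API; the `leaderboard_ifeval` configuration used here is one of the configurations listed in this repository.

```python
from datasets import get_dataset_config_names, load_dataset

# List every configuration (one per evaluated task) exposed by this details repo.
configs = get_dataset_config_names(
    "open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details"
)
print(len(configs), "configurations, e.g.", configs[:3])

# Load the per-sample details of a single task; the "latest" split always points
# to the most recent evaluation run.
ifeval_details = load_dataset(
    "open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details",
    name="gmonsoon__StockSeaLLMs-7B-v1__leaderboard_ifeval",
    split="latest",
)
print(ifeval_details[0].keys())
```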
## Latest results
These are the [latest results from run 2024-11-21T13-45-45.216237](https://huggingface.co/datasets/open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details/blob/main/gmonsoon__StockSeaLLMs-7B-v1/results_2024-11-21T13-45-45.216237.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.43068391866913125,
"prompt_level_loose_acc_stderr,none": 0.021308808857898823,
"acc_norm,none": 0.47956933454403944,
"acc_norm_stderr,none": 0.005332877202997923,
"acc,none": 0.39519614361702127,
"acc_stderr,none": 0.00445720656433847,
"exact_match,none": 0.17598187311178248,
"exact_match_stderr,none": 0.009729917778735123,
"inst_level_loose_acc,none": 0.5407673860911271,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.513189448441247,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.4066543438077634,
"prompt_level_strict_acc_stderr,none": 0.021138283177336344,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5238673841346988,
"acc_norm_stderr,none": 0.006158688482621799,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6256684491978609,
"acc_norm_stderr,none": 0.0354849234134303
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.652,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5684931506849316,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6404494382022472,
"acc_norm_stderr,none": 0.03606913914074032
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.124,
"acc_norm_stderr,none": 0.020886382258673272
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3028523489932886,
"acc_norm_stderr,none": 0.013316733936515984,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2676767676767677,
"acc_norm_stderr,none": 0.031544498882702825
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.304029304029304,
"acc_norm_stderr,none": 0.019704024937907735
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4066543438077634,
"prompt_level_strict_acc_stderr,none": 0.021138283177336344,
"inst_level_strict_acc,none": 0.513189448441247,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.43068391866913125,
"prompt_level_loose_acc_stderr,none": 0.021308808857898823,
"inst_level_loose_acc,none": 0.5407673860911271,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.17598187311178248,
"exact_match_stderr,none": 0.009729917778735123,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.10569105691056911,
"exact_match_stderr,none": 0.0278344722877674
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.025,
"exact_match_stderr,none": 0.009346956263824575
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.14285714285714285,
"exact_match_stderr,none": 0.028289929799333556
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.3005181347150259,
"exact_match_stderr,none": 0.033088185944157515
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.39519614361702127,
"acc_stderr,none": 0.00445720656433847
},
"leaderboard_musr": {
"acc_norm,none": 0.42063492063492064,
"acc_norm_stderr,none": 0.017713270487861726,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.029403146715355242
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.43068391866913125,
"prompt_level_loose_acc_stderr,none": 0.021308808857898823,
"acc_norm,none": 0.47956933454403944,
"acc_norm_stderr,none": 0.005332877202997923,
"acc,none": 0.39519614361702127,
"acc_stderr,none": 0.00445720656433847,
"exact_match,none": 0.17598187311178248,
"exact_match_stderr,none": 0.009729917778735123,
"inst_level_loose_acc,none": 0.5407673860911271,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.513189448441247,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.4066543438077634,
"prompt_level_strict_acc_stderr,none": 0.021138283177336344,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5238673841346988,
"acc_norm_stderr,none": 0.006158688482621799,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6256684491978609,
"acc_norm_stderr,none": 0.0354849234134303
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.748,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.652,
"acc_norm_stderr,none": 0.030186568464511673
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5684931506849316,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6404494382022472,
"acc_norm_stderr,none": 0.03606913914074032
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.812,
"acc_norm_stderr,none": 0.02476037772775051
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.124,
"acc_norm_stderr,none": 0.020886382258673272
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3028523489932886,
"acc_norm_stderr,none": 0.013316733936515984,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2676767676767677,
"acc_norm_stderr,none": 0.031544498882702825
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.304029304029304,
"acc_norm_stderr,none": 0.019704024937907735
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4066543438077634,
"prompt_level_strict_acc_stderr,none": 0.021138283177336344,
"inst_level_strict_acc,none": 0.513189448441247,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.43068391866913125,
"prompt_level_loose_acc_stderr,none": 0.021308808857898823,
"inst_level_loose_acc,none": 0.5407673860911271,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.17598187311178248,
"exact_match_stderr,none": 0.009729917778735123,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.10569105691056911,
"exact_match_stderr,none": 0.0278344722877674
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.025,
"exact_match_stderr,none": 0.009346956263824575
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.14285714285714285,
"exact_match_stderr,none": 0.028289929799333556
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.3005181347150259,
"exact_match_stderr,none": 0.033088185944157515
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.02962962962962963,
"exact_match_stderr,none": 0.014648038602753809
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.39519614361702127,
"acc_stderr,none": 0.00445720656433847
},
"leaderboard_musr": {
"acc_norm,none": 0.42063492063492064,
"acc_norm_stderr,none": 0.017713270487861726,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.029403146715355242
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
}
}
```
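
The aggregated numbers above are also stored as a JSON file in this repository (linked at the top of this section). Below is a minimal sketch for fetching it directly, assuming the standard `huggingface_hub` API; the exact top-level layout of the JSON is an assumption, so the snippet falls back to the whole file if there is no `results` key.

```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file referenced above from the dataset repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/gmonsoon__StockSeaLLMs-7B-v1-details",
    filename="gmonsoon__StockSeaLLMs-7B-v1/results_2024-11-21T13-45-45.216237.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# Top-level layout is an assumption: use the "results" key if present,
# otherwise treat the whole file as the per-task score dictionary.
scores = data.get("results", data)
print("BBH acc_norm:", scores.get("leaderboard_bbh", {}).get("acc_norm,none"))
print("MMLU-Pro acc:", scores.get("leaderboard_mmlu_pro", {}).get("acc,none"))
```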
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
davidberenstein1957/qwen2.5-coder-0.5b-openai_humaneval | davidberenstein1957 | "2024-11-21T14:22:39Z" | 6 | 0 | [
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"observers"
] | null | "2024-11-21T14:14:26Z" | ---
tags:
- observers
---
# Dataset Card for qwen2.5-coder-0.5b-openai_humaneval
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
SeppeV/joke_gen_of_mistral_ft_mean_score_dpo_w_ex_reasoning_prompt_wo_ex_jo | SeppeV | "2024-11-21T14:20:08Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:20:07Z" | ---
dataset_info:
features:
- name: jokeText
dtype: string
- name: userId
dtype: int64
splits:
- name: train
num_bytes: 95640
num_examples: 125
download_size: 53031
dataset_size: 95640
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SeppeV/results_joke_gen_mistral_ft_mean_score_dpo_w_ex_reason_prmpt_wo_ex_jo_ens_test | SeppeV | "2024-11-21T14:29:03Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:29:00Z" | ---
dataset_info:
features:
- name: jokeText
dtype: string
- name: userId
dtype: int64
- name: score
dtype: float32
splits:
- name: train
num_bytes: 96140
num_examples: 125
download_size: 53482
dataset_size: 96140
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_toxigen-data-train_zeroshot_limiar3 | juliadollis | "2024-11-21T14:32:41Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:32:39Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: y_true
dtype: int64
splits:
- name: train
num_bytes: 3508181
num_examples: 8960
download_size: 731936
dataset_size: 3508181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ismailaib/FleetVision | ismailaib | "2024-11-21T14:37:49Z" | 6 | 1 | [
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:36:33Z" | ---
license: mit
---
|
katyazevskaya/python-course | katyazevskaya | "2024-11-21T14:40:46Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:40:43Z" | ---
dataset_info:
features:
- name: Original Text
dtype: string
- name: Lemmatized Text
sequence: string
- name: POS Annotation
sequence:
sequence: string
- name: NER Annotation
sequence:
sequence: string
splits:
- name: train
num_bytes: 190320
num_examples: 1
download_size: 104732
dataset_size: 190320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
A-l-e-x/gravitation | A-l-e-x | "2024-11-21T15:20:56Z" | 6 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-21T15:19:53Z" | ---
license: mit
---
|
jfcalvo/test-export-with-changes-3 | jfcalvo | "2024-11-21T15:33:45Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:33:43Z" | ---
dataset_info:
features:
- name: pokemon
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 180000
num_examples: 10000
download_size: 3818
dataset_size: 180000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/P_wiki_doc10000_real32 | dgambettaphd | "2024-11-21T15:33:56Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:33:52Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 1818793
num_examples: 10000
download_size: 1211248
dataset_size: 1818793
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jfcalvo/test-export-with-changes-different-split | jfcalvo | "2024-11-21T15:35:17Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:35:14Z" | ---
dataset_info:
features:
- name: pokemon
dtype: string
- name: type
dtype: string
splits:
- name: testing
num_bytes: 180000
num_examples: 10000
download_size: 3818
dataset_size: 180000
configs:
- config_name: default
data_files:
- split: testing
path: data/testing-*
---
|
jfcalvo/test-export-with-records | jfcalvo | "2024-11-21T15:58:07Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:58:05Z" | ---
dataset_info:
config_name: anotherone2
features:
- name: pokemon
dtype: string
- name: type
dtype: string
splits:
- name: testing
num_bytes: 180000
num_examples: 10000
download_size: 3818
dataset_size: 180000
configs:
- config_name: anotherone2
data_files:
- split: testing
path: anotherone2/testing-*
---
|
juliadollis/mistral_toxigen-data-train_zeroshot_curto_limiar3 | juliadollis | "2024-11-21T15:58:13Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:58:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: y_true
dtype: int64
splits:
- name: train
num_bytes: 3508181
num_examples: 8960
download_size: 731958
dataset_size: 3508181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details | open-llm-leaderboard | "2024-11-21T16:09:20Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:06:21Z" | ---
pretty_name: Evaluation run of GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details\"\
,\n\tname=\"GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-21T16-06-20.952173](https://huggingface.co/datasets/open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details/blob/main/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct/results_2024-11-21T16-06-20.952173.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.6062846580406654,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021024834145872404,\n \"\
acc_norm,none\": 0.5431314048514723,\n \"acc_norm_stderr,none\": 0.005317050852347761,\n\
\ \"inst_level_loose_acc,none\": 0.7302158273381295,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.4263630319148936,\n\
\ \"acc_stderr,none\": 0.004508763683858449,\n \"inst_level_strict_acc,none\"\
: 0.7038369304556354,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.19788519637462235,\n \"exact_match_stderr,none\"\
: 0.009998835994126825,\n \"prompt_level_loose_acc,none\": 0.6395563770794824,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.0206614696698795,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n\
\ \"acc_norm,none\": 0.5948620031244576,\n \"acc_norm_stderr,none\"\
: 0.006083807836624403,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.856,\n\
\ \"acc_norm_stderr,none\": 0.022249407735450245\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6363636363636364,\n \"acc_norm_stderr,none\"\
: 0.03527198153014412\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.636,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\":\
\ 0.031235856237014505\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.724,\n \
\ \"acc_norm_stderr,none\": 0.02832853727421142\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\":\
\ 0.031603975145223735\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.796,\n \"acc_norm_stderr,none\":\
\ 0.025537121574548162\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.644,\n \
\ \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5684931506849316,\n \"acc_norm_stderr,none\": 0.041131302645371945\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.864,\n \
\ \"acc_norm_stderr,none\": 0.021723342617052086\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6460674157303371,\n \"acc_norm_stderr,none\"\
: 0.03594285405211505\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.592,\n\
\ \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\":\
\ 0.03010450339231644\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3347315436241611,\n\
\ \"acc_norm_stderr,none\": 0.013681339748209233,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3434343434343434,\n \"acc_norm_stderr,none\": 0.03383201223244441\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.32234432234432236,\n\
\ \"acc_norm_stderr,none\": 0.020020102750045735\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34598214285714285,\n \"acc_norm_stderr,none\"\
: 0.022499241830682457\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.6062846580406654,\n \"prompt_level_strict_acc_stderr,none\": 0.021024834145872404,\n\
\ \"inst_level_strict_acc,none\": 0.7038369304556354,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.6395563770794824,\n \"prompt_level_loose_acc_stderr,none\": 0.0206614696698795,\n\
\ \"inst_level_loose_acc,none\": 0.7302158273381295,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.19788519637462235,\n \"exact_match_stderr,none\"\
: 0.009998835994126825,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.43322475570032576,\n\
\ \"exact_match_stderr,none\": 0.028327050442298423\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.06818181818181818,\n\
\ \"exact_match_stderr,none\": 0.022022378945902827\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.03571428571428571,\n \"exact_match_stderr,none\": 0.011110196729254557\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.12337662337662338,\n\
\ \"exact_match_stderr,none\": 0.026587484423674337\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.35751295336787564,\n \"exact_match_stderr,none\"\
: 0.03458816042181008\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05925925925925926,\n \"exact_match_stderr,none\"\
: 0.02039673654232189\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.4263630319148936,\n\
\ \"acc_stderr,none\": 0.004508763683858449\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4775132275132275,\n \"acc_norm_stderr,none\"\
: 0.01802634312352244,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.576,\n\
\ \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.43359375,\n \"acc_norm_stderr,none\"\
: 0.031033834158735715\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.6062846580406654,\n \"prompt_level_strict_acc_stderr,none\": 0.021024834145872404,\n\
\ \"acc_norm,none\": 0.5431314048514723,\n \"acc_norm_stderr,none\"\
: 0.005317050852347761,\n \"inst_level_loose_acc,none\": 0.7302158273381295,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.4263630319148936,\n\
\ \"acc_stderr,none\": 0.004508763683858449,\n \"inst_level_strict_acc,none\"\
: 0.7038369304556354,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"exact_match,none\": 0.19788519637462235,\n \"exact_match_stderr,none\"\
: 0.009998835994126825,\n \"prompt_level_loose_acc,none\": 0.6395563770794824,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.0206614696698795,\n \"\
alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.5948620031244576,\n \"acc_norm_stderr,none\": 0.006083807836624403,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.856,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6363636363636364,\n \"acc_norm_stderr,none\"\
: 0.03527198153014412\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.436,\n \"acc_norm_stderr,none\": 0.031425567060281365\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.724,\n \"acc_norm_stderr,none\": 0.02832853727421142\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.796,\n \"acc_norm_stderr,none\": 0.025537121574548162\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.744,\n \"acc_norm_stderr,none\": 0.027657108718204846\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5684931506849316,\n\
\ \"acc_norm_stderr,none\": 0.041131302645371945\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.864,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6460674157303371,\n \"acc_norm_stderr,none\"\
: 0.03594285405211505\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.252,\n \"acc_norm_stderr,none\": 0.027513851933031318\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.268,\n \"acc_norm_stderr,none\": 0.02806876238252672\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3347315436241611,\n\
\ \"acc_norm_stderr,none\": 0.013681339748209233,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3434343434343434,\n\
\ \"acc_norm_stderr,none\": 0.03383201223244441\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.32234432234432236,\n \"acc_norm_stderr,none\": 0.020020102750045735\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.34598214285714285,\n \"acc_norm_stderr,none\"\
: 0.022499241830682457\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.6062846580406654,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021024834145872404,\n \
\ \"inst_level_strict_acc,none\": 0.7038369304556354,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.6395563770794824,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.0206614696698795,\n \"inst_level_loose_acc,none\"\
: 0.7302158273381295,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.19788519637462235,\n\
\ \"exact_match_stderr,none\": 0.009998835994126825,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.43322475570032576,\n \"exact_match_stderr,none\": 0.028327050442298423\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.06818181818181818,\n \"exact_match_stderr,none\"\
: 0.022022378945902827\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.03571428571428571,\n \"exact_match_stderr,none\"\
: 0.011110196729254557\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.12337662337662338,\n \"exact_match_stderr,none\": 0.026587484423674337\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.35751295336787564,\n \"exact_match_stderr,none\"\
: 0.03458816042181008\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05925925925925926,\n \"exact_match_stderr,none\": 0.02039673654232189\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.4263630319148936,\n \"acc_stderr,none\": 0.004508763683858449\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4775132275132275,\n\
\ \"acc_norm_stderr,none\": 0.01802634312352244,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.43359375,\n \"acc_norm_stderr,none\": 0.031033834158735715\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.424,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ }\n}\n```"
repo_url: https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_ifeval
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_ifeval_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T16-06-20.952173.jsonl'
- config_name: GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T16_06_20.952173
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T16-06-20.952173.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T16-06-20.952173.jsonl'
---
# Dataset Card for Evaluation run of GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct](https://huggingface.co/GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details",
name="GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
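Once loaded, the split behaves like any other `datasets` split, so you can inspect the per-sample records directly. The snippet below is a minimal sketch that only reuses the call above; the exact column names vary by task, so it prints whatever columns are present instead of hard-coding any.
```python
from datasets import load_dataset

# Load the per-sample details for one task (same call as above).
data = load_dataset(
    "open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details",
    name="GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct__leaderboard_bbh_boolean_expressions",
    split="latest",
)

# Basic inspection: number of evaluated samples and the columns recorded for each one.
print(len(data))
print(data.column_names)

# Convert to a pandas DataFrame for easier filtering and aggregation.
df = data.to_pandas()
print(df.head())
```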
## Latest results
These are the [latest results from run 2024-11-21T16-06-20.952173](https://huggingface.co/datasets/open-llm-leaderboard/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct-details/blob/main/GoToCompany__gemma2-9b-cpt-sahabatai-v1-instruct/results_2024-11-21T16-06-20.952173.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.6062846580406654,
"prompt_level_strict_acc_stderr,none": 0.021024834145872404,
"acc_norm,none": 0.5431314048514723,
"acc_norm_stderr,none": 0.005317050852347761,
"inst_level_loose_acc,none": 0.7302158273381295,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.4263630319148936,
"acc_stderr,none": 0.004508763683858449,
"inst_level_strict_acc,none": 0.7038369304556354,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19788519637462235,
"exact_match_stderr,none": 0.009998835994126825,
"prompt_level_loose_acc,none": 0.6395563770794824,
"prompt_level_loose_acc_stderr,none": 0.0206614696698795,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5948620031244576,
"acc_norm_stderr,none": 0.006083807836624403,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6363636363636364,
"acc_norm_stderr,none": 0.03527198153014412
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.724,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5684931506849316,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6460674157303371,
"acc_norm_stderr,none": 0.03594285405211505
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3347315436241611,
"acc_norm_stderr,none": 0.013681339748209233,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3434343434343434,
"acc_norm_stderr,none": 0.03383201223244441
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32234432234432236,
"acc_norm_stderr,none": 0.020020102750045735
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34598214285714285,
"acc_norm_stderr,none": 0.022499241830682457
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6062846580406654,
"prompt_level_strict_acc_stderr,none": 0.021024834145872404,
"inst_level_strict_acc,none": 0.7038369304556354,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6395563770794824,
"prompt_level_loose_acc_stderr,none": 0.0206614696698795,
"inst_level_loose_acc,none": 0.7302158273381295,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19788519637462235,
"exact_match_stderr,none": 0.009998835994126825,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.43322475570032576,
"exact_match_stderr,none": 0.028327050442298423
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06818181818181818,
"exact_match_stderr,none": 0.022022378945902827
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03571428571428571,
"exact_match_stderr,none": 0.011110196729254557
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12337662337662338,
"exact_match_stderr,none": 0.026587484423674337
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.35751295336787564,
"exact_match_stderr,none": 0.03458816042181008
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4263630319148936,
"acc_stderr,none": 0.004508763683858449
},
"leaderboard_musr": {
"acc_norm,none": 0.4775132275132275,
"acc_norm_stderr,none": 0.01802634312352244,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.43359375,
"acc_norm_stderr,none": 0.031033834158735715
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.6062846580406654,
"prompt_level_strict_acc_stderr,none": 0.021024834145872404,
"acc_norm,none": 0.5431314048514723,
"acc_norm_stderr,none": 0.005317050852347761,
"inst_level_loose_acc,none": 0.7302158273381295,
"inst_level_loose_acc_stderr,none": "N/A",
"acc,none": 0.4263630319148936,
"acc_stderr,none": 0.004508763683858449,
"inst_level_strict_acc,none": 0.7038369304556354,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19788519637462235,
"exact_match_stderr,none": 0.009998835994126825,
"prompt_level_loose_acc,none": 0.6395563770794824,
"prompt_level_loose_acc_stderr,none": 0.0206614696698795,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5948620031244576,
"acc_norm_stderr,none": 0.006083807836624403,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.856,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6363636363636364,
"acc_norm_stderr,none": 0.03527198153014412
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.436,
"acc_norm_stderr,none": 0.031425567060281365
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.724,
"acc_norm_stderr,none": 0.02832853727421142
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.796,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.744,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5684931506849316,
"acc_norm_stderr,none": 0.041131302645371945
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6460674157303371,
"acc_norm_stderr,none": 0.03594285405211505
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.252,
"acc_norm_stderr,none": 0.027513851933031318
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.268,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3347315436241611,
"acc_norm_stderr,none": 0.013681339748209233,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3434343434343434,
"acc_norm_stderr,none": 0.03383201223244441
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.32234432234432236,
"acc_norm_stderr,none": 0.020020102750045735
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.34598214285714285,
"acc_norm_stderr,none": 0.022499241830682457
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6062846580406654,
"prompt_level_strict_acc_stderr,none": 0.021024834145872404,
"inst_level_strict_acc,none": 0.7038369304556354,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.6395563770794824,
"prompt_level_loose_acc_stderr,none": 0.0206614696698795,
"inst_level_loose_acc,none": 0.7302158273381295,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19788519637462235,
"exact_match_stderr,none": 0.009998835994126825,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.43322475570032576,
"exact_match_stderr,none": 0.028327050442298423
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06818181818181818,
"exact_match_stderr,none": 0.022022378945902827
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03571428571428571,
"exact_match_stderr,none": 0.011110196729254557
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12337662337662338,
"exact_match_stderr,none": 0.026587484423674337
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.35751295336787564,
"exact_match_stderr,none": 0.03458816042181008
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4263630319148936,
"acc_stderr,none": 0.004508763683858449
},
"leaderboard_musr": {
"acc_norm,none": 0.4775132275132275,
"acc_norm_stderr,none": 0.01802634312352244,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.43359375,
"acc_norm_stderr,none": 0.031033834158735715
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.424,
"acc_norm_stderr,none": 0.03131803437491622
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
PaDaS-Lab/webfaq-en-test | PaDaS-Lab | "2024-11-21T16:49:29Z" | 6 | 0 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"multilinguality:monolingual",
"source_datasets:msmarco",
"language:en",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"text-retrieval"
] | [
"text-retrieval"
] | "2024-11-21T16:49:23Z" | ---
language:
- en
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- msmarco
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: dev
num_bytes: 2451014
num_examples: 52160
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 15332374
num_examples: 52160
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 3989191
num_examples: 52160
configs:
- config_name: default
data_files:
- split: dev
path: qrels/dev.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
--- |
procit007/treated_0.3 | procit007 | "2024-11-21T16:54:13Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:52:37Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
- name: treated
dtype: bool
- name: metrics
struct:
- name: clipping_ratio
dtype: float64
- name: duration
dtype: float64
- name: is_valid
dtype: bool
- name: rms_energy
dtype: float64
- name: sample_rate
dtype: int64
- name: silence_ratio
dtype: float64
- name: snr
dtype: float64
splits:
- name: train
num_bytes: 3176831243.0
num_examples: 10000
download_size: 2978489519
dataset_size: 3176831243.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_toxigen-data-train_2fewshot_limiar3 | juliadollis | "2024-11-21T17:20:37Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T17:20:34Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: y_true
dtype: int64
splits:
- name: train
num_bytes: 3508181
num_examples: 8960
download_size: 731939
dataset_size: 3508181
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/self-reflect_mini8Bit-t0_mistlarge-t12_om2-140k_binlabel | RyanYr | "2024-11-21T18:16:05Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:15:53Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
- name: response@0_ans
sequence: string
- name: response@0_correctness
sequence: bool
- name: response@2_ans
sequence: string
- name: response@2_correctness
sequence: bool
splits:
- name: train
num_bytes: 689139678
num_examples: 140000
download_size: 304980057
dataset_size: 689139678
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details | open-llm-leaderboard | "2024-11-21T18:47:37Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:43:34Z" | ---
pretty_name: Evaluation run of zelk12/MT3-Gen2-gemma-2-9B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [zelk12/MT3-Gen2-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen2-gemma-2-9B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details\"\
,\n\tname=\"zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_boolean_expressions\",\n\
\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-21T18-43-33.559212](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details/blob/main/zelk12__MT3-Gen2-gemma-2-9B/results_2024-11-21T18-43-33.559212.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.744916820702403,\n \"\
prompt_level_strict_acc_stderr,none\": 0.018758491950414184,\n \"acc,none\"\
: 0.43326130319148937,\n \"acc_stderr,none\": 0.004517680579088188,\n\
\ \"acc_norm,none\": 0.54987676741471,\n \"acc_norm_stderr,none\"\
: 0.005289250250282228,\n \"prompt_level_loose_acc,none\": 0.767097966728281,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n \
\ \"exact_match,none\": 0.02039274924471299,\n \"exact_match_stderr,none\"\
: 0.003847017757728751,\n \"inst_level_loose_acc,none\": 0.842925659472422,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8237410071942446,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.6080541572643638,\n \"acc_norm_stderr,none\"\
: 0.0060467875310710436,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.852,\n\
\ \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6310160427807486,\n \"acc_norm_stderr,none\"\
: 0.03538078548260318\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.656,\n\
\ \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.52,\n\
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\":\
\ 0.02925692860650181\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\":\
\ 0.03131803437491622\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\":\
\ 0.031355968923772626\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.836,\n \"acc_norm_stderr,none\":\
\ 0.023465261002076715\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.676,\n \
\ \"acc_norm_stderr,none\": 0.029658294924545567\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5958904109589042,\n \"acc_norm_stderr,none\": 0.0407519857003932\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.808,\n \
\ \"acc_norm_stderr,none\": 0.02496069198917196\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\":\
\ 0.03114520984654851\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6966292134831461,\n \"acc_norm_stderr,none\": 0.03455421944400101\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.844,\n\
\ \"acc_norm_stderr,none\": 0.022995023034068682\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\":\
\ 0.02915021337415965\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3573825503355705,\n\
\ \"acc_norm_stderr,none\": 0.013891832771494425,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3888888888888889,\n \"acc_norm_stderr,none\": 0.03473279590836963\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3534798534798535,\n\
\ \"acc_norm_stderr,none\": 0.020477414126085836\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3482142857142857,\n \"acc_norm_stderr,none\"\
: 0.022533152157915175\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.744916820702403,\n \"prompt_level_strict_acc_stderr,none\": 0.018758491950414184,\n\
\ \"inst_level_strict_acc,none\": 0.8237410071942446,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.767097966728281,\n \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n\
\ \"inst_level_loose_acc,none\": 0.842925659472422,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.02039274924471299,\n \"exact_match_stderr,none\": 0.003847017757728751,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \
\ \"exact_match,none\": 0.05537459283387622,\n \"exact_match_stderr,none\"\
: 0.01307447837002421\n },\n \"leaderboard_math_counting_and_prob_hard\"\
: {\n \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_geometry_hard\": {\n \"\
alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.007575757575757576,\n \"exact_match_stderr,none\": 0.007575757575757577\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.01948051948051948,\n\
\ \"exact_match_stderr,none\": 0.011173331005571083\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.031088082901554404,\n \"exact_match_stderr,none\"\
: 0.012525310625527019\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.43326130319148937,\n \"acc_stderr,none\"\
: 0.004517680579088188\n },\n \"leaderboard_musr\": {\n \
\ \"acc_norm,none\": 0.41005291005291006,\n \"acc_norm_stderr,none\"\
: 0.017490273970870246,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.54,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.27734375,\n \"acc_norm_stderr,none\"\
: 0.02803528549328419\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.744916820702403,\n \"prompt_level_strict_acc_stderr,none\": 0.018758491950414184,\n\
\ \"acc,none\": 0.43326130319148937,\n \"acc_stderr,none\": 0.004517680579088188,\n\
\ \"acc_norm,none\": 0.54987676741471,\n \"acc_norm_stderr,none\"\
: 0.005289250250282228,\n \"prompt_level_loose_acc,none\": 0.767097966728281,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n \"\
exact_match,none\": 0.02039274924471299,\n \"exact_match_stderr,none\": 0.003847017757728751,\n\
\ \"inst_level_loose_acc,none\": 0.842925659472422,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_strict_acc,none\": 0.8237410071942446,\n \
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.6080541572643638,\n\
\ \"acc_norm_stderr,none\": 0.0060467875310710436,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6310160427807486,\n \"acc_norm_stderr,none\"\
: 0.03538078548260318\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.656,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.836,\n \"acc_norm_stderr,none\": 0.023465261002076715\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.676,\n \"acc_norm_stderr,none\": 0.029658294924545567\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.284,\n \"acc_norm_stderr,none\": 0.02857695873043744\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5958904109589042,\n\
\ \"acc_norm_stderr,none\": 0.0407519857003932\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.684,\n \"acc_norm_stderr,none\": 0.02946265759857865\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.808,\n \"acc_norm_stderr,none\": 0.02496069198917196\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.832,\n \"acc_norm_stderr,none\": 0.023692813205492536\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.304,\n \"acc_norm_stderr,none\": 0.02915021337415965\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3573825503355705,\n\
\ \"acc_norm_stderr,none\": 0.013891832771494425,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3888888888888889,\n\
\ \"acc_norm_stderr,none\": 0.03473279590836963\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3534798534798535,\n \"acc_norm_stderr,none\": 0.020477414126085836\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3482142857142857,\n \"acc_norm_stderr,none\"\
: 0.022533152157915175\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.744916820702403,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018758491950414184,\n \
\ \"inst_level_strict_acc,none\": 0.8237410071942446,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.767097966728281,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01818926607409182,\n \"inst_level_loose_acc,none\"\
: 0.842925659472422,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.02039274924471299,\n\
\ \"exact_match_stderr,none\": 0.003847017757728751,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.05537459283387622,\n \"exact_match_stderr,none\": 0.01307447837002421\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.007575757575757576,\n \"exact_match_stderr,none\": 0.007575757575757577\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.01948051948051948,\n \"exact_match_stderr,none\": 0.011173331005571083\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.031088082901554404,\n \"exact_match_stderr,none\"\
: 0.012525310625527019\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43326130319148937,\n\
\ \"acc_stderr,none\": 0.004517680579088188\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.41005291005291006,\n \"acc_norm_stderr,none\"\
: 0.017490273970870246,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.27734375,\n\
\ \"acc_norm_stderr,none\": 0.02803528549328419\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n }\n}\n```"
repo_url: https://huggingface.co/zelk12/MT3-Gen2-gemma-2-9B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_ifeval
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_ifeval_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T18-43-33.559212.jsonl'
- config_name: zelk12__MT3-Gen2-gemma-2-9B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T18_43_33.559212
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T18-43-33.559212.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T18-43-33.559212.jsonl'
---
# Dataset Card for Evaluation run of zelk12/MT3-Gen2-gemma-2-9B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [zelk12/MT3-Gen2-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen2-gemma-2-9B)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details",
name="zelk12__MT3-Gen2-gemma-2-9B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
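Since the repository exposes 38 task-specific configurations, it can be convenient to enumerate them programmatically before choosing one. The following is a minimal sketch, assuming a recent version of the `datasets` library; the IFEval configuration name used below is taken from the config list in this card:
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details"

# List the task-specific configurations exposed by this details repository.
configs = get_dataset_config_names(repo)
print(f"{len(configs)} configurations, e.g. {configs[:3]}")

# Load the per-sample details of a single task from the latest run,
# here the IFEval configuration listed in this card.
ifeval = load_dataset(
    repo,
    name="zelk12__MT3-Gen2-gemma-2-9B__leaderboard_ifeval",
    split="latest",
)
print(ifeval)
```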
## Latest results
These are the [latest results from run 2024-11-21T18-43-33.559212](https://huggingface.co/datasets/open-llm-leaderboard/zelk12__MT3-Gen2-gemma-2-9B-details/blob/main/zelk12__MT3-Gen2-gemma-2-9B/results_2024-11-21T18-43-33.559212.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.744916820702403,
"prompt_level_strict_acc_stderr,none": 0.018758491950414184,
"acc,none": 0.43326130319148937,
"acc_stderr,none": 0.004517680579088188,
"acc_norm,none": 0.54987676741471,
"acc_norm_stderr,none": 0.005289250250282228,
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"exact_match,none": 0.02039274924471299,
"exact_match_stderr,none": 0.003847017757728751,
"inst_level_loose_acc,none": 0.842925659472422,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8237410071942446,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6080541572643638,
"acc_norm_stderr,none": 0.0060467875310710436,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.836,
"acc_norm_stderr,none": 0.023465261002076715
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3573825503355705,
"acc_norm_stderr,none": 0.013891832771494425,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3888888888888889,
"acc_norm_stderr,none": 0.03473279590836963
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3534798534798535,
"acc_norm_stderr,none": 0.020477414126085836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3482142857142857,
"acc_norm_stderr,none": 0.022533152157915175
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.744916820702403,
"prompt_level_strict_acc_stderr,none": 0.018758491950414184,
"inst_level_strict_acc,none": 0.8237410071942446,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"inst_level_loose_acc,none": 0.842925659472422,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.02039274924471299,
"exact_match_stderr,none": 0.003847017757728751,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.05537459283387622,
"exact_match_stderr,none": 0.01307447837002421
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.01948051948051948,
"exact_match_stderr,none": 0.011173331005571083
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.031088082901554404,
"exact_match_stderr,none": 0.012525310625527019
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43326130319148937,
"acc_stderr,none": 0.004517680579088188
},
"leaderboard_musr": {
"acc_norm,none": 0.41005291005291006,
"acc_norm_stderr,none": 0.017490273970870246,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.744916820702403,
"prompt_level_strict_acc_stderr,none": 0.018758491950414184,
"acc,none": 0.43326130319148937,
"acc_stderr,none": 0.004517680579088188,
"acc_norm,none": 0.54987676741471,
"acc_norm_stderr,none": 0.005289250250282228,
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"exact_match,none": 0.02039274924471299,
"exact_match_stderr,none": 0.003847017757728751,
"inst_level_loose_acc,none": 0.842925659472422,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8237410071942446,
"inst_level_strict_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.6080541572643638,
"acc_norm_stderr,none": 0.0060467875310710436,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6310160427807486,
"acc_norm_stderr,none": 0.03538078548260318
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.656,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.836,
"acc_norm_stderr,none": 0.023465261002076715
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.676,
"acc_norm_stderr,none": 0.029658294924545567
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.284,
"acc_norm_stderr,none": 0.02857695873043744
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.684,
"acc_norm_stderr,none": 0.02946265759857865
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.808,
"acc_norm_stderr,none": 0.02496069198917196
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.832,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.304,
"acc_norm_stderr,none": 0.02915021337415965
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3573825503355705,
"acc_norm_stderr,none": 0.013891832771494425,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3888888888888889,
"acc_norm_stderr,none": 0.03473279590836963
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3534798534798535,
"acc_norm_stderr,none": 0.020477414126085836
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3482142857142857,
"acc_norm_stderr,none": 0.022533152157915175
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.744916820702403,
"prompt_level_strict_acc_stderr,none": 0.018758491950414184,
"inst_level_strict_acc,none": 0.8237410071942446,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.767097966728281,
"prompt_level_loose_acc_stderr,none": 0.01818926607409182,
"inst_level_loose_acc,none": 0.842925659472422,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.02039274924471299,
"exact_match_stderr,none": 0.003847017757728751,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.05537459283387622,
"exact_match_stderr,none": 0.01307447837002421
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.007575757575757576,
"exact_match_stderr,none": 0.007575757575757577
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.01948051948051948,
"exact_match_stderr,none": 0.011173331005571083
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.031088082901554404,
"exact_match_stderr,none": 0.012525310625527019
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43326130319148937,
"acc_stderr,none": 0.004517680579088188
},
"leaderboard_musr": {
"acc_norm,none": 0.41005291005291006,
"acc_norm_stderr,none": 0.017490273970870246,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ncoop57/personas-translation-f4d93fec-2af0-4abc-8419-29c0b5450e1f | ncoop57 | "2024-11-21T18:52:28Z" | 6 | 0 | [
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us",
"fastdata",
"synthetic"
] | null | "2024-11-21T18:52:25Z" | ---
tags:
- fastdata
- synthetic
---
# personas-translation-f4d93fec-2af0-4abc-8419-29c0b5450e1f
_Note: This is an AI-generated dataset, so its content may be inaccurate or false._
**Source of the data:**
The dataset was generated using the [Fastdata](https://github.com/AnswerDotAI/fastdata) library and claude-3-haiku-20240307 with the following input:
## System Prompt
```
You will help generate synthetic data of English and Spanish phrases.
```
## Prompt Template
```
<examples>
{examples}
</examples>
Create an English and Spanish translation pair that is similar to the examples and would be appropriate for the following persona:
<persona>{persona}</persona>
```
## Sample Input
```json
[{'persona': "A Political Analyst specialized in El Salvador's political landscape.", 'examples': [Hello, my name is Nathan. I am a research scientist at an AI startup. โก *Hola, me llamo Nathan. Soy ciencia investigador en un startup de IA.*, How much wood could a woodchuck chuck if a woodchuck could chuck wood? โก *ยฟCuรกnta madera podrรญa arrojar una marmota si una marmota pudiera arrojar madera?*, Thomas Cranmer (2 July 1489 - 21 March 1556) was a leader of the English Reformation and Archbishop of Canterbury during the reigns of Henry VIII, Edward VI and, for a short time, Mary I. He helped build the case for the annulment of Henry's marriage to Catherine of Aragon, which was one of the causes of the separation of the English Church from union with the Holy See. โก *Thomas Cranmer (2 de julio de 1489 - 21 de marzo de 1556) fue un lรญder de la Reforma inglesa y arzobispo de Canterbury durante los reinados de Henry VIII, Edward VI y, por un corto tiempo, Marรญa I. Ayudรณ a construir el caso para la anulaciรณn de El matrimonio de Henry con Catalina de Aragรณn, que fue una de las causas de la separaciรณn de la Iglesia inglesa de la uniรณn con la Santa Sede.*]}, {'persona': 'A legal advisor who understands the legal implications of incomplete or inaccurate project documentation', 'examples': [Hello, my name is Nathan. I am a research scientist at an AI startup. โก *Hola, me llamo Nathan. Soy ciencia investigador en un startup de IA.*, How much wood could a woodchuck chuck if a woodchuck could chuck wood? โก *ยฟCuรกnta madera podrรญa arrojar una marmota si una marmota pudiera arrojar madera?*, Thomas Cranmer (2 July 1489 - 21 March 1556) was a leader of the English Reformation and Archbishop of Canterbury during the reigns of Henry VIII, Edward VI and, for a short time, Mary I. He helped build the case for the annulment of Henry's marriage to Catherine of Aragon, which was one of the causes of the separation of the English Church from union with the Holy See. โก *Thomas Cranmer (2 de julio de 1489 - 21 de marzo de 1556) fue un lรญder de la Reforma inglesa y arzobispo de Canterbury durante los reinados de Henry VIII, Edward VI y, por un corto tiempo, Marรญa I. Ayudรณ a construir el caso para la anulaciรณn de El matrimonio de Henry con Catalina de Aragรณn, que fue una de las causas de la separaciรณn de la Iglesia inglesa de la uniรณn con la Santa Sede.*]}]
```
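## Prompt Rendering (Illustrative Sketch)
The snippet below is a minimal sketch of how the `{examples}` and `{persona}` placeholders in the prompt template above could be filled using plain Python string formatting; it is not the actual Fastdata internals, and the example values are taken from the sample input purely for illustration.
```python
# Minimal sketch: fill the {examples} and {persona} placeholders of the
# prompt template shown above. This is plain string formatting, not the
# actual Fastdata implementation.
prompt_template = """<examples>
{examples}
</examples>

Create an English and Spanish translation pair that is similar to the examples and would be appropriate for the following persona:

<persona>{persona}</persona>"""

examples = "Hello, my name is Nathan. ➡ *Hola, me llamo Nathan.*"
persona = "A Political Analyst specialized in El Salvador's political landscape."

rendered_prompt = prompt_template.format(examples=examples, persona=persona)
print(rendered_prompt)
```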
|
sumuks/e1v0.1-single-shot-questions-multihop-original | sumuks | "2024-11-21T18:53:32Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:53:31Z" | ---
dataset_info:
features:
- name: chunk_ids
sequence: string
- name: generator_model
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: document_analysis
dtype: string
- name: chunk_analysis
sequence: string
- name: potential_question_directions
sequence: string
- name: best_direction
dtype: string
- name: reasoning
dtype: string
- name: estimated_difficulty
dtype: int64
- name: testable_concepts
sequence: string
- name: difficulty_justification
dtype: string
- name: quote_context
dtype: string
- name: supporting_quotes
sequence: string
splits:
- name: train
num_bytes: 1068599
num_examples: 398
download_size: 261721
dataset_size: 1068599
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Metaskepsis/sft | Metaskepsis | "2024-11-21T19:19:15Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T19:18:36Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 622781553
num_examples: 79960
download_size: 193855449
dataset_size: 622781553
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
theazer69/padilha2 | theazer69 | "2024-11-21T19:32:57Z" | 6 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T19:31:32Z" | ---
license: openrail
---
|
cfahlgren1/llama-3.1-awesome-chatgpt-prompts | cfahlgren1 | "2024-11-21T19:51:50Z" | 6 | 2 | [
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"observers"
] | null | "2024-11-21T19:44:07Z" | ---
tags:
- observers
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
neoneye/simon-arc-solve-rotate-v11 | neoneye | "2024-11-21T20:45:35Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T20:44:36Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve rotate version 11
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the image gets rotated cw/ccw/180 and transposed.
The image sizes are between 1 and 4 pixels.
Predict the number of rows in the output image.
# Version 2
image size: 1-5.
# Version 3
image size: 1-5.
Added `flipx` and `flipy` transformations.
# Version 4
image size: 1-5.
number of tests: 1-2. Previously there was always just 1 test.
Added `flipa` and `flipb` transformations, which flip over the diagonal.
# Version 5
image size: 1-5.
number of tests: 1-2.
# Version 6
image size: 1-13.
# Version 7
Earlier predictions added to some of the rows.
# Version 8
Earlier predictions with focus on repairing 1 bad pixel.
# Version 9
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 10
Replaced RLE compressed response with raw pixel response.
# Version 11
image size: 1-16.
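For illustration only, the transformations named in this card can be reproduced on a small grid with numpy as sketched below; this is not the dataset's own generation code, and mapping `flipa`/`flipb` to the main and anti-diagonal is an assumption, since the card only says they flip over the diagonal.
```python
import numpy as np

# Illustrative sketch of the transformations named above (not the dataset's code).
# Assumption: flipa = flip over the main diagonal, flipb = flip over the anti-diagonal.
grid = np.array([[1, 2, 3],
                 [4, 5, 6]])

cw        = np.rot90(grid, k=-1)   # rotate 90 degrees clockwise
ccw       = np.rot90(grid, k=1)    # rotate 90 degrees counter-clockwise
rot180    = np.rot90(grid, k=2)    # rotate 180 degrees
transpose = grid.T                 # transpose
flipx     = np.fliplr(grid)        # flip left/right
flipy     = np.flipud(grid)        # flip up/down
flipa     = grid.T                 # main-diagonal flip (coincides with transpose)
flipb     = np.rot90(grid, k=2).T  # anti-diagonal flip

print(cw)
```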
|
neoneye/simon-arc-solve-translate-v12 | neoneye | "2024-11-21T22:06:16Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T22:05:04Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve translate version 12
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the image gets translated by plus/minus 1 pixel in up/down/left/right directions.
The image sizes are between 1 and 4 pixels.
# Version 2
Only translations of plus/minus 1 up/down are enabled.
image width: 1-4, image height: 3-4.
My hypothesis is that it's easy with RLE data to translate up/down.
# Version 3
Only translations of plus/minus 1 left/right are enabled.
image width: 3-4, image height: 1-4.
# Version 4
All transformations have same weight.
image size: 3-4.
# Version 5
Added diagonal translation by 1 pixel.
All transformations have same weight.
image size: 3-4.
# Version 6
All transformations have same weight.
image size: 3-5.
# Version 7
All transformations have same weight.
image size: 3-5.
number of test pairs: 1-2. Previously it was always 1 test pair.
# Version 8
All transformations have same weight.
image size: 3-5.
number of test pairs: 1-2.
Added: Predict the number of rows in the output image.
# Version 9
Increased the translation distance from -1..+1, to -2..+2.
image size 1-8.
# Version 10
Increased the translation distance from -2..+2, to -3..+3.
image size 1-12.
# Version 11
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 12
Replaced RLE compressed response with raw pixel response.
image size 1-5.
max translation 1.
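For illustration only, a minimal numpy sketch of translating a grid; treating pixels shifted past the border as wrapping around is an assumption, since the card does not state how the edge is handled.
```python
import numpy as np

# Illustrative sketch of translating a grid by (dy, dx) pixels.
# Assumption: wrap-around at the borders (np.roll); the card does not say
# how pixels shifted past the edge are filled.
def translate(grid: np.ndarray, dy: int, dx: int) -> np.ndarray:
    return np.roll(grid, shift=(dy, dx), axis=(0, 1))

grid = np.array([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])

print(translate(grid, 1, 0))   # shift one row down
print(translate(grid, 0, -1))  # shift one column left
```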
|
eliasfiz/rlhf-tiny | eliasfiz | "2024-11-21T22:18:05Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:18:02Z" | ---
dataset_info:
features:
- name: audio
sequence:
sequence: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 200360
num_examples: 12
download_size: 61717
dataset_size: 200360
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-solve-rotate-v12 | neoneye | "2024-11-21T22:31:10Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T22:29:55Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve rotate version 12
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the image gets rotated cw/ccw/180 and transposed.
The image sizes are between 1 and 4 pixels.
Predict the number of rows in the output image.
# Version 2
image size: 1-5.
# Version 3
image size: 1-5.
Added `flipx` and `flipy` transformations.
# Version 4
image size: 1-5.
number of tests: 1-2. Previously there was always just 1 test.
Added `flipa` and `flipb` transformations, which flip over the diagonal.
# Version 5
image size: 1-5.
number of tests: 1-2.
# Version 6
image size: 1-13.
# Version 7
Earlier predictions added to some of the rows.
# Version 8
Earlier predictions with focus on repairing 1 bad pixel.
# Version 9
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 10
Replaced RLE compressed response with raw pixel response.
# Version 11
image size: 1-16.
# Version 12
I think the image sizes were too big for the model to make sense of the data. Trying with smaller images.
image size: 1-5.
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_33 | ZixuanKe | "2024-11-21T22:48:35Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:48:33Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 208604
num_examples: 41
download_size: 34372
dataset_size: 208604
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tippawan/Finetune-mt-story-telling-221124-messages | Tippawan | "2024-11-21T22:51:05Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T22:51:04Z" | ---
dataset_info:
features:
- name: en
dtype: string
- name: th
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 719000
num_examples: 5629
- name: test
num_bytes: 145238
num_examples: 1126
- name: validation
num_bytes: 145163
num_examples: 1126
download_size: 577598
dataset_size: 1009401
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
neoneye/simon-arc-solve-scale-v9 | neoneye | "2024-11-21T23:16:54Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T23:14:58Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve scale version 9
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the images get scaled up/down in both the x and y directions.
example count: 2-4.
test count: 1-2.
image size: 3-10.
scale factor: 1-3.
# Version 2
image size: 1-20.
scale factor: 1-7.
# Version 3
image size: 1-30.
scale factor: 1-7.
# Version 4
Added a small amount of noise to the images.
image size: 1-10.
scale factor: 1-7.
Only scale down.
Number of noise pixels per pixel cell: 0-2.
# Version 5
More noisy images for downscaling.
image size: 1-12.
Number of noise pixels per pixel cell: 0-half.
# Version 6
Earlier predictions added to some of the rows.
# Version 7
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 8
Replaced RLE compressed response with raw pixel response.
image size: 1-5.
scale factor: 1-7.
# Version 9
image size: 1-7.
scale factor: 1-3.
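For illustration only, a minimal numpy sketch of integer up-/down-scaling: up-scaling by a factor k repeats each pixel k times in both directions, and down-scaling a clean k-times up-scaled image samples every k-th pixel. The noisy variants described above would additionally need something like a per-cell majority vote, which is not shown here.
```python
import numpy as np

# Illustrative sketch of integer up-/down-scaling of a pixel grid.
# Down-scaling assumes the input is an exact k-times up-scaled image;
# the noisy variants in this card would need e.g. a per-cell majority vote.
def scale_up(grid: np.ndarray, k: int) -> np.ndarray:
    return grid.repeat(k, axis=0).repeat(k, axis=1)

def scale_down(grid: np.ndarray, k: int) -> np.ndarray:
    return grid[::k, ::k]

grid = np.array([[1, 2],
                 [3, 4]])

big = scale_up(grid, 3)
assert np.array_equal(scale_down(big, 3), grid)
print(big)
```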
|
neoneye/simon-arc-solve-skew-v5 | neoneye | "2024-11-21T23:33:46Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T23:32:47Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve skew version 5
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the job is to apply skew/unskew in the directions up/down/left/right.
example count: 2-4.
test count: 1-2.
image size: 1-4.
# Version 2
image size: 1-7.
# Version 3
Earlier predictions added to some of the rows.
# Version 4
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 5
Replaced RLE compressed response with raw pixel response. |
dogtooth/llama-31-diverse-generations-hs | dogtooth | "2024-11-21T23:50:12Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T23:50:10Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
sequence: string
splits:
- name: train
num_bytes: 48336201
num_examples: 10163
download_size: 20400362
dataset_size: 48336201
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TSOWatch/1001NightsTreasureKnowledge | TSOWatch | "2024-11-22T00:09:09Z" | 6 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:08:33Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsWoodcutter | TSOWatch | "2024-11-22T00:23:30Z" | 6 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:23:14Z" | ---
license: creativeml-openrail-m
---
|
open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details | open-llm-leaderboard | "2024-11-22T00:34:22Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:31:09Z" | ---
pretty_name: Evaluation run of allenai/Llama-3.1-Tulu-3-8B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details\"\
,\n\tname=\"allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-31-08.901515](https://huggingface.co/datasets/open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details/blob/main/allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.38785834738617203,\n \"acc_norm_stderr,none\"\
: 0.005273329157943381,\n \"inst_level_loose_acc,none\": 0.8752997601918465,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"acc,none\": 0.2826628989361702,\n \
\ \"acc_stderr,none\": 0.0041053027261143855,\n \"prompt_level_strict_acc,none\"\
: 0.7948243992606284,\n \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n\
\ \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \"\
prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.4025342822426662,\n \"acc_norm_stderr,none\": 0.006072426154807149,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.8,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5187165775401069,\n\
\ \"acc_norm_stderr,none\": 0.03663608375537843\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\":\
\ 0.028697004587398253\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.472,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_hyperbaton\": {\n\
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.288,\n\
\ \"acc_norm_stderr,none\": 0.028697004587398253\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.3904109589041096,\n \"acc_norm_stderr,none\"\
: 0.040513109165891854\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.5224719101123596,\n\
\ \"acc_norm_stderr,none\": 0.03754432508487191\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.496,\n \"acc_norm_stderr,none\":\
\ 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.116,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\":\
\ 0.021723342617052086\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\":\
\ 0.022249407735450245\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\":\
\ 0.02881432040220563\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013264655332365493,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.30303030303030304,\n \"acc_norm_stderr,none\": 0.03274287914026869\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.28205128205128205,\n\
\ \"acc_norm_stderr,none\": 0.019275803929950375\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7948243992606284,\n \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n\
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.8151571164510166,\n \"prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n\
\ \"inst_level_loose_acc,none\": 0.8752997601918465,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.3811074918566775,\n\
\ \"exact_match_stderr,none\": 0.02776327166045321\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \" \
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.06060606060606061,\n\
\ \"exact_match_stderr,none\": 0.020847129156682045\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.03214285714285714,\n \"exact_match_stderr,none\": 0.01055955866175321\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.12987012987012986,\n\
\ \"exact_match_stderr,none\": 0.02717696535667076\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.022222222222222223,\n \"exact_match_stderr,none\"\
: 0.01273389971505968\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.2826628989361702,\n\
\ \"acc_stderr,none\": 0.004105302726114385\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4166666666666667,\n \"acc_norm_stderr,none\"\
: 0.01768575862518651,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.528,\n\
\ \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.31640625,\n \"acc_norm_stderr,none\"\
: 0.02912403057115479\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.38785834738617203,\n\
\ \"acc_norm_stderr,none\": 0.005273329157943381,\n \"inst_level_loose_acc,none\"\
: 0.8752997601918465,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.19637462235649547,\n \"exact_match_stderr,none\"\
: 0.009854609082277298,\n \"acc,none\": 0.2826628989361702,\n \"acc_stderr,none\"\
: 0.0041053027261143855,\n \"prompt_level_strict_acc,none\": 0.7948243992606284,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n \
\ \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01670417955850395,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.4025342822426662,\n \"acc_norm_stderr,none\"\
: 0.006072426154807149,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.8,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5187165775401069,\n \"acc_norm_stderr,none\"\
: 0.03663608375537843\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.328,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.256,\n \"acc_norm_stderr,none\": 0.027657108718204846\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.416,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.288,\n \"acc_norm_stderr,none\": 0.028697004587398253\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3904109589041096,\n\
\ \"acc_norm_stderr,none\": 0.040513109165891854\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.456,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.5224719101123596,\n \"acc_norm_stderr,none\"\
: 0.03754432508487191\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.496,\n \"acc_norm_stderr,none\": 0.0316851985511992\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.116,\n \"acc_norm_stderr,none\": 0.020293429803083823\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.136,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.144,\n \"acc_norm_stderr,none\": 0.022249407735450245\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.292,\n \"acc_norm_stderr,none\": 0.02881432040220563\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013264655332365493,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.30303030303030304,\n\
\ \"acc_norm_stderr,none\": 0.03274287914026869\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.28205128205128205,\n \"acc_norm_stderr,none\": 0.019275803929950375\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3169642857142857,\n \"acc_norm_stderr,none\"\
: 0.0220076215848248\n },\n \"leaderboard_ifeval\": {\n \"alias\":\
\ \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7948243992606284,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01737807119675965,\n \
\ \"inst_level_strict_acc,none\": 0.8585131894484412,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.8151571164510166,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01670417955850395,\n \"inst_level_loose_acc,none\"\
: 0.8752997601918465,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.19637462235649547,\n\
\ \"exact_match_stderr,none\": 0.009854609082277298,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.3811074918566775,\n \"exact_match_stderr,none\": 0.02776327166045321\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.11382113821138211,\n \"exact_match_stderr,none\": 0.02875360087323741\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.06060606060606061,\n \"exact_match_stderr,none\"\
: 0.020847129156682045\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.03214285714285714,\n \"exact_match_stderr,none\"\
: 0.01055955866175321\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.12987012987012986,\n \"exact_match_stderr,none\": 0.02717696535667076\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.022222222222222223,\n \"exact_match_stderr,none\": 0.01273389971505968\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.2826628989361702,\n \"acc_stderr,none\": 0.004105302726114385\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4166666666666667,\n\
\ \"acc_norm_stderr,none\": 0.01768575862518651,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.31640625,\n \"acc_norm_stderr,none\": 0.02912403057115479\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.408,\n \"acc_norm_stderr,none\": 0.031145209846548512\n\
\ }\n}\n```"
repo_url: https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-31-08.901515.jsonl'
- config_name: allenai__Llama-3.1-Tulu-3-8B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_31_08.901515
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-31-08.901515.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-31-08.901515.jsonl'
---
# Dataset Card for Evaluation run of allenai/Llama-3.1-Tulu-3-8B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [allenai/Llama-3.1-Tulu-3-8B](https://huggingface.co/allenai/Llama-3.1-Tulu-3-8B)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details",
name="allenai__Llama-3.1-Tulu-3-8B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
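Since each run is stored as a timestamped split, you can also enumerate the available configurations and splits before loading anything. Below is a minimal sketch using the `datasets` utilities; the printed values are only illustrative:
```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details"

# List every per-task configuration in this details repository.
configs = get_dataset_config_names(repo)
print(f"{len(configs)} configurations, e.g. {configs[0]}")

# Each configuration exposes one split per run timestamp plus a "latest" alias.
splits = get_dataset_split_names(repo, configs[0])
print(splits)
```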
## Latest results
These are the [latest results from run 2024-11-22T00-31-08.901515](https://huggingface.co/datasets/open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details/blob/main/allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.38785834738617203,
"acc_norm_stderr,none": 0.005273329157943381,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.0041053027261143855,
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4025342822426662,
"acc_norm_stderr,none": 0.006072426154807149,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.8,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3904109589041096,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5224719101123596,
"acc_norm_stderr,none": 0.03754432508487191
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.116,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013264655332365493,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30303030303030304,
"acc_norm_stderr,none": 0.03274287914026869
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06060606060606061,
"exact_match_stderr,none": 0.020847129156682045
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12987012987012986,
"exact_match_stderr,none": 0.02717696535667076
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.004105302726114385
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.01768575862518651,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.31640625,
"acc_norm_stderr,none": 0.02912403057115479
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
}
},
"leaderboard": {
"acc_norm,none": 0.38785834738617203,
"acc_norm_stderr,none": 0.005273329157943381,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.0041053027261143855,
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4025342822426662,
"acc_norm_stderr,none": 0.006072426154807149,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.8,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5187165775401069,
"acc_norm_stderr,none": 0.03663608375537843
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.328,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.256,
"acc_norm_stderr,none": 0.027657108718204846
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.416,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.288,
"acc_norm_stderr,none": 0.028697004587398253
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3904109589041096,
"acc_norm_stderr,none": 0.040513109165891854
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.456,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.5224719101123596,
"acc_norm_stderr,none": 0.03754432508487191
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.496,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.116,
"acc_norm_stderr,none": 0.020293429803083823
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.136,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.144,
"acc_norm_stderr,none": 0.022249407735450245
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.292,
"acc_norm_stderr,none": 0.02881432040220563
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013264655332365493,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.30303030303030304,
"acc_norm_stderr,none": 0.03274287914026869
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3169642857142857,
"acc_norm_stderr,none": 0.0220076215848248
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7948243992606284,
"prompt_level_strict_acc_stderr,none": 0.01737807119675965,
"inst_level_strict_acc,none": 0.8585131894484412,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.8151571164510166,
"prompt_level_loose_acc_stderr,none": 0.01670417955850395,
"inst_level_loose_acc,none": 0.8752997601918465,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.19637462235649547,
"exact_match_stderr,none": 0.009854609082277298,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.3811074918566775,
"exact_match_stderr,none": 0.02776327166045321
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.11382113821138211,
"exact_match_stderr,none": 0.02875360087323741
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.06060606060606061,
"exact_match_stderr,none": 0.020847129156682045
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.03214285714285714,
"exact_match_stderr,none": 0.01055955866175321
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.12987012987012986,
"exact_match_stderr,none": 0.02717696535667076
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.022222222222222223,
"exact_match_stderr,none": 0.01273389971505968
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.2826628989361702,
"acc_stderr,none": 0.004105302726114385
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.01768575862518651,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.31640625,
"acc_norm_stderr,none": 0.02912403057115479
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.408,
"acc_norm_stderr,none": 0.031145209846548512
}
}
```
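If you prefer to work with the raw aggregated results file linked above rather than the per-task splits, a minimal sketch using `huggingface_hub` is shown below. The filename is copied from the results link in this section, and the exact layout of the file may differ slightly from the excerpt above, so the example only inspects its top-level keys:
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results JSON referenced above from the details repo.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/allenai__Llama-3.1-Tulu-3-8B-details",
    filename="allenai__Llama-3.1-Tulu-3-8B/results_2024-11-22T00-31-08.901515.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level structure; the metric blocks shown above live under these keys.
print(list(results.keys()))
```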
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details | open-llm-leaderboard | "2024-11-22T00:35:43Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:32:12Z" | ---
pretty_name: Evaluation run of ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details\"\
,\n\tname=\"ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-32-11.693490](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc,none\": 0.4447307180851064,\n \"acc_stderr,none\"\
: 0.004530535363926051,\n \"inst_level_loose_acc,none\": 0.8285371702637889,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \
\ \"acc_norm,none\": 0.5014917628745622,\n \"acc_norm_stderr,none\"\
: 0.005340969872084893,\n \"prompt_level_strict_acc,none\": 0.7504621072088724,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\":\
\ {\n \"acc_norm,none\": 0.5521610831452872,\n \"acc_norm_stderr,none\"\
: 0.006179016832046109,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.86,\n\
\ \"acc_norm_stderr,none\": 0.021989409645240245\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.632,\n\
\ \"acc_norm_stderr,none\": 0.03056207062099311\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.556,\n \
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\":\
\ 0.03162125257572558\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.784,\n \"acc_norm_stderr,none\":\
\ 0.02607865766373279\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5958904109589042,\n \"acc_norm_stderr,none\": 0.0407519857003932\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.58,\n \
\ \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\":\
\ 0.031603975145223735\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6966292134831461,\n \"acc_norm_stderr,none\": 0.03455421944400101\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.168,\n \"acc_norm_stderr,none\":\
\ 0.023692813205492536\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.548,\n\
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29949664429530204,\n\
\ \"acc_norm_stderr,none\": 0.013278959534799928,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2878787878787879,\n \"acc_norm_stderr,none\": 0.03225883512300998\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.29120879120879123,\n\
\ \"acc_norm_stderr,none\": 0.019460910297288078\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n\
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7634011090573013,\n \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n\
\ \"inst_level_loose_acc,none\": 0.8285371702637889,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.6091205211726385,\n\
\ \"exact_match_stderr,none\": 0.027894098976471507\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.20454545454545456,\n\
\ \"exact_match_stderr,none\": 0.03524251981380333\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.1392857142857143,\n \"exact_match_stderr,none\": 0.02072911170255923\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.3051948051948052,\n\
\ \"exact_match_stderr,none\": 0.0372284008596668\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.1037037037037037,\n \"exact_match_stderr,none\"\
: 0.02633725661744443\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.4447307180851064,\n\
\ \"acc_stderr,none\": 0.004530535363926052\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43386243386243384,\n \"acc_norm_stderr,none\"\
: 0.01762618265060195,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.56,\n\
\ \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.296875,\n \"acc_norm_stderr,none\"\
: 0.028610997088737832\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ }\n },\n \"leaderboard\": {\n \"acc,none\": 0.4447307180851064,\n\
\ \"acc_stderr,none\": 0.004530535363926051,\n \"inst_level_loose_acc,none\"\
: 0.8285371702637889,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.32326283987915405,\n \"exact_match_stderr,none\"\
: 0.011761711608666757,\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \
\ \"acc_norm,none\": 0.5014917628745622,\n \"acc_norm_stderr,none\": 0.005340969872084893,\n\
\ \"prompt_level_strict_acc,none\": 0.7504621072088724,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018622404509805804,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5521610831452872,\n \"acc_norm_stderr,none\"\
: 0.006179016832046109,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.86,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5668449197860963,\n \"acc_norm_stderr,none\"\
: 0.03633267411102591\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.784,\n \"acc_norm_stderr,none\": 0.02607865766373279\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.632,\n \"acc_norm_stderr,none\": 0.03056207062099311\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5958904109589042,\n\
\ \"acc_norm_stderr,none\": 0.0407519857003932\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.212,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.168,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.24,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.29949664429530204,\n\
\ \"acc_norm_stderr,none\": 0.013278959534799928,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2878787878787879,\n\
\ \"acc_norm_stderr,none\": 0.03225883512300998\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.29120879120879123,\n \"acc_norm_stderr,none\": 0.019460910297288078\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.31473214285714285,\n \"acc_norm_stderr,none\"\
: 0.021965797142222607\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.7504621072088724,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.018622404509805804,\n \
\ \"inst_level_strict_acc,none\": 0.8165467625899281,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7634011090573013,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.018288827582625598,\n \"inst_level_loose_acc,none\"\
: 0.8285371702637889,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.32326283987915405,\n\
\ \"exact_match_stderr,none\": 0.011761711608666757,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.6091205211726385,\n \"exact_match_stderr,none\": 0.027894098976471507\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.20454545454545456,\n \"exact_match_stderr,none\"\
: 0.03524251981380333\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.1392857142857143,\n \"exact_match_stderr,none\"\
: 0.02072911170255923\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.3051948051948052,\n \"exact_match_stderr,none\": 0.0372284008596668\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.46113989637305697,\n \"exact_match_stderr,none\"\
: 0.03597524411734576\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.1037037037037037,\n \"exact_match_stderr,none\": 0.02633725661744443\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.4447307180851064,\n \"acc_stderr,none\": 0.004530535363926052\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43386243386243384,\n\
\ \"acc_norm_stderr,none\": 0.01762618265060195,\n \"alias\": \" -\
\ leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n \"leaderboard_musr_object_placements\"\
: {\n \"alias\": \" - leaderboard_musr_object_placements\",\n \"\
acc_norm,none\": 0.296875,\n \"acc_norm_stderr,none\": 0.028610997088737832\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-32-11.693490.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_32_11.693490
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-32-11.693490.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-32-11.693490.jsonl'
---
# Dataset Card for Evaluation run of ZeroXClem/Qwen2.5-7B-HomerCreative-Mix
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZeroXClem/Qwen2.5-7B-HomerCreative-Mix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerCreative-Mix)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details",
name="ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
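If you want to pin a specific run rather than follow the moving "latest" pointer, you can load the timestamped split directly; the split names appear in the `configs` section of this card. A minimal sketch:
```python
from datasets import load_dataset

# Same per-task details, but pinned to the explicit run timestamp
# instead of the moving "latest" alias.
data = load_dataset(
	"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details",
	name="ZeroXClem__Qwen2.5-7B-HomerCreative-Mix__leaderboard_bbh_boolean_expressions",
	split="2024_11_22T00_32_11.693490"
)
print(data)
```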
## Latest results
These are the [latest results from run 2024-11-22T00-32-11.693490](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the aggregated results and in the "latest" split of each eval):
```python
{
"all": {
"leaderboard": {
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926051,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"acc_norm,none": 0.5014917628745622,
"acc_norm_stderr,none": 0.005340969872084893,
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5521610831452872,
"acc_norm_stderr,none": 0.006179016832046109,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29949664429530204,
"acc_norm_stderr,none": 0.013278959534799928,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.29120879120879123,
"acc_norm_stderr,none": 0.019460910297288078
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.20454545454545456,
"exact_match_stderr,none": 0.03524251981380333
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.1392857142857143,
"exact_match_stderr,none": 0.02072911170255923
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3051948051948052,
"exact_match_stderr,none": 0.0372284008596668
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1037037037037037,
"exact_match_stderr,none": 0.02633725661744443
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926052
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01762618265060195,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.296875,
"acc_norm_stderr,none": 0.028610997088737832
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
}
},
"leaderboard": {
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926051,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"acc_norm,none": 0.5014917628745622,
"acc_norm_stderr,none": 0.005340969872084893,
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5521610831452872,
"acc_norm_stderr,none": 0.006179016832046109,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5668449197860963,
"acc_norm_stderr,none": 0.03633267411102591
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.784,
"acc_norm_stderr,none": 0.02607865766373279
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.632,
"acc_norm_stderr,none": 0.03056207062099311
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5958904109589042,
"acc_norm_stderr,none": 0.0407519857003932
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.212,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.24,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_gpqa": {
"acc_norm,none": 0.29949664429530204,
"acc_norm_stderr,none": 0.013278959534799928,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2878787878787879,
"acc_norm_stderr,none": 0.03225883512300998
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.29120879120879123,
"acc_norm_stderr,none": 0.019460910297288078
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.31473214285714285,
"acc_norm_stderr,none": 0.021965797142222607
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7504621072088724,
"prompt_level_strict_acc_stderr,none": 0.018622404509805804,
"inst_level_strict_acc,none": 0.8165467625899281,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7634011090573013,
"prompt_level_loose_acc_stderr,none": 0.018288827582625598,
"inst_level_loose_acc,none": 0.8285371702637889,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.32326283987915405,
"exact_match_stderr,none": 0.011761711608666757,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.6091205211726385,
"exact_match_stderr,none": 0.027894098976471507
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.20454545454545456,
"exact_match_stderr,none": 0.03524251981380333
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.1392857142857143,
"exact_match_stderr,none": 0.02072911170255923
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.3051948051948052,
"exact_match_stderr,none": 0.0372284008596668
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.46113989637305697,
"exact_match_stderr,none": 0.03597524411734576
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1037037037037037,
"exact_match_stderr,none": 0.02633725661744443
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4447307180851064,
"acc_stderr,none": 0.004530535363926052
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01762618265060195,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.296875,
"acc_norm_stderr,none": 0.028610997088737832
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
}
}
```
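To work with the full aggregated results file rather than the excerpt above, you can also download the JSON linked in this section directly. A minimal sketch using `huggingface_hub` (the exact layout of the file may differ slightly from the excerpt shown here):
```python
import json
from huggingface_hub import hf_hub_download

# Download the aggregated results JSON referenced above and list its top-level keys.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerCreative-Mix-details",
    filename="ZeroXClem__Qwen2.5-7B-HomerCreative-Mix/results_2024-11-22T00-32-11.693490.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)
print(sorted(results.keys()))
```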
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details | open-llm-leaderboard | "2024-11-22T00:37:56Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:34:21Z" | ---
pretty_name: Evaluation run of ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details\"\
,\n\tname=\"ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T00-34-20.371295](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix/results_2024-11-22T00-34-20.371295.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.7578558225508318,\n \"\
prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n \"acc,none\"\
: 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217,\n \
\ \"acc_norm,none\": 0.5046050071345181,\n \"acc_norm_stderr,none\"\
: 0.005356894928628325,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"inst_level_loose_acc,none\": 0.8201438848920863,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.551640340218712,\n \"acc_norm_stderr,none\": 0.006182534734432989,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5614973262032086,\n\
\ \"acc_norm_stderr,none\": 0.03638341809400991\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\":\
\ 0.03139181076542941\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.6,\n\
\ \"acc_norm_stderr,none\": 0.031046021028253316\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.552,\n \
\ \"acc_norm_stderr,none\": 0.03151438761115348\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.524,\n \"acc_norm_stderr,none\":\
\ 0.03164968895968774\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.364,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.589041095890411,\n \"acc_norm_stderr,none\"\
: 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.6966292134831461,\n\
\ \"acc_norm_stderr,none\": 0.03455421944400101\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.732,\n \"acc_norm_stderr,none\":\
\ 0.02806876238252672\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\":\
\ 0.025537121574548162\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\":\
\ 0.027367497504863593\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3196308724832215,\n\
\ \"acc_norm_stderr,none\": 0.013522572199065146,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.3181818181818182,\n \"acc_norm_stderr,none\": 0.0331847733384533\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3131868131868132,\n\
\ \"acc_norm_stderr,none\": 0.01986656558013767\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\"\
: 0.0222080353262888\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"inst_level_strict_acc,none\": 0.802158273381295,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"inst_level_loose_acc,none\": 0.8201438848920863,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.5635179153094463,\n\
\ \"exact_match_stderr,none\": 0.028351520946552713\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.17073170731707318,\n \"exact_match_stderr,none\": 0.034066279591320504\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.1590909090909091,\n\
\ \"exact_match_stderr,none\": 0.03195667292673137\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.11785714285714285,\n \"exact_match_stderr,none\": 0.019303911310421605\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.23376623376623376,\n\
\ \"exact_match_stderr,none\": 0.034215730598256215\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.47668393782383417,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.1111111111111111,\n \"exact_match_stderr,none\"\
: 0.027148765412512273\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.4431515957446808,\n\
\ \"acc_stderr,none\": 0.00452891098809217\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43783068783068785,\n \"acc_norm_stderr,none\"\
: 0.017595964155130817,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.556,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28515625,\n \"acc_norm_stderr,none\"\
: 0.028273327213286358\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"acc,none\": 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217,\n\
\ \"acc_norm,none\": 0.5046050071345181,\n \"acc_norm_stderr,none\"\
: 0.005356894928628325,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\": 0.018890584986760186,\n\
\ \"exact_match,none\": 0.29531722054380666,\n \"exact_match_stderr,none\"\
: 0.011453860732395094,\n \"inst_level_loose_acc,none\": 0.8201438848920863,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.551640340218712,\n\
\ \"acc_norm_stderr,none\": 0.006182534734432989,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.852,\n \"acc_norm_stderr,none\": 0.022503547243806186\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5614973262032086,\n \"acc_norm_stderr,none\"\
: 0.03638341809400991\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.54,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.524,\n \"acc_norm_stderr,none\": 0.03164968895968774\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.589041095890411,\n\
\ \"acc_norm_stderr,none\": 0.04085902451640228\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.636,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.732,\n \"acc_norm_stderr,none\": 0.02806876238252672\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.16,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.248,\n \"acc_norm_stderr,none\": 0.027367497504863593\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.3196308724832215,\n\
\ \"acc_norm_stderr,none\": 0.013522572199065146,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.3181818181818182,\n\
\ \"acc_norm_stderr,none\": 0.0331847733384533\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3131868131868132,\n \"acc_norm_stderr,none\": 0.01986656558013767\n \
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.328125,\n \"acc_norm_stderr,none\": 0.0222080353262888\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.7393715341959335,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.018890584986760186,\n \"inst_level_strict_acc,none\": 0.802158273381295,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7578558225508318,\n \"prompt_level_loose_acc_stderr,none\": 0.018434587800223168,\n\
\ \"inst_level_loose_acc,none\": 0.8201438848920863,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.29531722054380666,\n \"exact_match_stderr,none\": 0.011453860732395094,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.5635179153094463,\n \"exact_match_stderr,none\": 0.028351520946552713\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.17073170731707318,\n \"exact_match_stderr,none\": 0.034066279591320504\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.1590909090909091,\n \"exact_match_stderr,none\"\
: 0.03195667292673137\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.11785714285714285,\n \"exact_match_stderr,none\"\
: 0.019303911310421605\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.23376623376623376,\n \"exact_match_stderr,none\": 0.034215730598256215\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.47668393782383417,\n \"exact_match_stderr,none\"\
: 0.03604513672442202\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.1111111111111111,\n \"exact_match_stderr,none\": 0.027148765412512273\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.4431515957446808,\n \"acc_stderr,none\": 0.00452891098809217\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.43783068783068785,\n\
\ \"acc_norm_stderr,none\": 0.017595964155130817,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.28515625,\n \"acc_norm_stderr,none\": 0.028273327213286358\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_ifeval
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T00-34-20.371295.jsonl'
- config_name: ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T00_34_20.371295
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-34-20.371295.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T00-34-20.371295.jsonl'
---
# Dataset Card for Evaluation run of ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix](https://huggingface.co/ZeroXClem/Qwen2.5-7B-HomerAnvita-NerdMix)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details",
name="ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
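Once a split is loaded, the standard `datasets` API can be used to inspect it. The snippet below is a minimal sketch (assuming the usual `datasets`/`pandas` stack) that lists the available per-task configurations and converts one split to a DataFrame; the `leaderboard_ifeval` configuration name is taken from the configuration list above.
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details"

# List every per-task configuration available in this repository.
configs = get_dataset_config_names(repo)
print(f"{len(configs)} configurations found")

# Load one task's samples and inspect them as a pandas DataFrame.
data = load_dataset(
    repo,
    name="ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix__leaderboard_ifeval",
    split="latest",
)
df = data.to_pandas()
print(df.columns.tolist())
print(df.head())
```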
## Latest results
These are the [latest results from run 2024-11-22T00-34-20.371295](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix-details/blob/main/ZeroXClem__Qwen2.5-7B-HomerAnvita-NerdMix/results_2024-11-22T00-34-20.371295.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217,
"acc_norm,none": 0.5046050071345181,
"acc_norm_stderr,none": 0.005356894928628325,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.551640340218712,
"acc_norm_stderr,none": 0.006182534734432989,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5614973262032086,
"acc_norm_stderr,none": 0.03638341809400991
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3196308724832215,
"acc_norm_stderr,none": 0.013522572199065146,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3131868131868132,
"acc_norm_stderr,none": 0.01986656558013767
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.0222080353262888
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5635179153094463,
"exact_match_stderr,none": 0.028351520946552713
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.17073170731707318,
"exact_match_stderr,none": 0.034066279591320504
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.1590909090909091,
"exact_match_stderr,none": 0.03195667292673137
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11785714285714285,
"exact_match_stderr,none": 0.019303911310421605
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.23376623376623376,
"exact_match_stderr,none": 0.034215730598256215
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.47668393782383417,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1111111111111111,
"exact_match_stderr,none": 0.027148765412512273
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.017595964155130817,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28515625,
"acc_norm_stderr,none": 0.028273327213286358
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217,
"acc_norm,none": 0.5046050071345181,
"acc_norm_stderr,none": 0.005356894928628325,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.551640340218712,
"acc_norm_stderr,none": 0.006182534734432989,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.852,
"acc_norm_stderr,none": 0.022503547243806186
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5614973262032086,
"acc_norm_stderr,none": 0.03638341809400991
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.54,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.524,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.589041095890411,
"acc_norm_stderr,none": 0.04085902451640228
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.636,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.732,
"acc_norm_stderr,none": 0.02806876238252672
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.16,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.248,
"acc_norm_stderr,none": 0.027367497504863593
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_gpqa": {
"acc_norm,none": 0.3196308724832215,
"acc_norm_stderr,none": 0.013522572199065146,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.3181818181818182,
"acc_norm_stderr,none": 0.0331847733384533
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3131868131868132,
"acc_norm_stderr,none": 0.01986656558013767
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.328125,
"acc_norm_stderr,none": 0.0222080353262888
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.7393715341959335,
"prompt_level_strict_acc_stderr,none": 0.018890584986760186,
"inst_level_strict_acc,none": 0.802158273381295,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7578558225508318,
"prompt_level_loose_acc_stderr,none": 0.018434587800223168,
"inst_level_loose_acc,none": 0.8201438848920863,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.29531722054380666,
"exact_match_stderr,none": 0.011453860732395094,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.5635179153094463,
"exact_match_stderr,none": 0.028351520946552713
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.17073170731707318,
"exact_match_stderr,none": 0.034066279591320504
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.1590909090909091,
"exact_match_stderr,none": 0.03195667292673137
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.11785714285714285,
"exact_match_stderr,none": 0.019303911310421605
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.23376623376623376,
"exact_match_stderr,none": 0.034215730598256215
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.47668393782383417,
"exact_match_stderr,none": 0.03604513672442202
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.1111111111111111,
"exact_match_stderr,none": 0.027148765412512273
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.4431515957446808,
"acc_stderr,none": 0.00452891098809217
},
"leaderboard_musr": {
"acc_norm,none": 0.43783068783068785,
"acc_norm_stderr,none": 0.017595964155130817,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.28515625,
"acc_norm_stderr,none": 0.028273327213286358
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
}
}
```
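For a quick side-by-side view it can help to flatten the nested per-task scores above into a simple table. The sketch below is illustrative only: it hard-codes a small excerpt of the aggregate values printed above (copied verbatim) rather than re-downloading the linked results JSON, and the metric keys (`acc_norm,none`, `exact_match,none`, `acc,none`) come directly from that block.
```python
# Small excerpt of the aggregate scores printed above (values copied verbatim).
# The full dictionary can be obtained by parsing the linked results JSON.
results = {
    "leaderboard_bbh": {"acc_norm,none": 0.551640340218712},
    "leaderboard_gpqa": {"acc_norm,none": 0.3196308724832215},
    "leaderboard_math_hard": {"exact_match,none": 0.29531722054380666},
    "leaderboard_mmlu_pro": {"acc,none": 0.4431515957446808},
    "leaderboard_musr": {"acc_norm,none": 0.43783068783068785},
}

# Pick one headline metric per task, in order of preference.
PREFERRED_METRICS = ("acc_norm,none", "exact_match,none", "acc,none")

def headline(scores: dict) -> tuple[str, float]:
    for metric in PREFERRED_METRICS:
        if metric in scores:
            return metric, scores[metric]
    raise KeyError("no known metric found")

for task, scores in sorted(results.items()):
    metric, value = headline(scores)
    print(f"{task:25s} {metric:20s} {value:.4f}")
```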
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
neoneye/simon-arc-solve-color-v17 | neoneye | "2024-11-22T00:37:15Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-22T00:36:02Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve color version 17
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the colors get manipulated.
Currently it's two-color images, where the transformation is to swap colors.
The image sizes are between 1 and 5 pixels.
Predict the number of rows in the output image.
# Version 2
Number of tests: 1-2. Previously it was always 1 test.
# Version 3
input image size: 1-3.
Number of tests: 1.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 4
input image size: 1-4.
Number of tests: 1.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 5
input image size: 1-5.
Number of tests: 1-2.
Identify most popular color, and least popular color. The output size is always 1x1.
# Version 6
input image size: 1-5.
Number of tests: 1-2.
Identify most popular color, and least popular color. Multiple output sizes: either 1x1 or the same size as the input.
Swap colors.
# Version 7
Focus on `generate_task_replace_color`.
image size: 3-6.
padding size: 1-5.
# Version 8
Focus on `generate_task_replace_color`.
image size: 3-8.
padding size: 1-10.
# Version 9
Focus on `generate_task_replace_color`.
image size: 3-10.
padding size: 1-20.
# Version 10
Enabled all the task generators.
# Version 11
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-5.
padding size: 1-4.
# Version 12
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-8.
padding size: 1-10.
# Version 13
Focus on `generate_task_replace_color_pairs_with_different_palettes`.
image size: 3-10.
padding size: 1-20.
# Version 14
Extended `generate_task_replace_color_pairs_with_different_palettes` with 2 new palette modes.
Enabled all transformations.
# Version 15
Earlier predictions added to some of the rows.
# Version 16
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 17
Replaced RLE compressed response with raw pixel response.
image size: 1-7.
|
ahmedheakl/ar_sharegpt4v_instruct | ahmedheakl | "2024-11-23T17:05:05Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:04:05Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: image_path
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: image
dtype: image
splits:
- name: train
num_bytes: 10201439857.06
num_examples: 45123
download_size: 10157528307
dataset_size: 10201439857.06
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-solve-skew-v6 | neoneye | "2024-11-22T08:02:31Z" | 6 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-22T01:17:49Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve skew version 6
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI Tasks where the job is to apply skew/unskew in the directions up/down/left/right.
example count: 2-4.
test count: 1-2.
image size: 1-4.
# Version 2
image size: 1-7.
# Version 3
Earlier predictions added to some of the rows.
# Version 4
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 5
Replaced RLE compressed response with raw pixel response.
# Version 6
image size: 1-9. |
magnifi/parser_user_v27h | magnifi | "2024-11-22T02:02:43Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:02:42Z" | ---
dataset_info:
features:
- name: Query_id
dtype: int64
- name: Query
dtype: string
- name: Elastic_search
dtype: string
- name: virtual_portfolios
dtype: string
- name: Parser_output
dtype: string
splits:
- name: train
num_bytes: 344199
num_examples: 1524
- name: validation
num_bytes: 24775
num_examples: 128
download_size: 137440
dataset_size: 368974
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
mhdang/image_unseen-fewshot_dpo-userprofile_ours_withjpg_num500 | mhdang | "2024-11-22T02:31:57Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:31:29Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1549983072
num_examples: 500
download_size: 1091156882
dataset_size: 1549983072
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mhdang/image_unseen-fewshot_dpo_ours_withjpg_num500 | mhdang | "2024-11-22T02:32:50Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:32:23Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1541153964
num_examples: 500
download_size: 1082327374
dataset_size: 1541153964
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mhdang/image_unseen-fewshot_sc-userprofile_ours_withjpg_num500 | mhdang | "2024-11-22T02:33:44Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:33:16Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1539775139
num_examples: 500
download_size: 1080948487
dataset_size: 1539775139
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_1 | ZixuanKe | "2024-11-22T02:34:40Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:34:39Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 180901
num_examples: 34
download_size: 23783
dataset_size: 180901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/ar_historicalbooks_instruct | ahmedheakl | "2024-11-24T12:43:16Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:54:22Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 10421311.0
num_examples: 40
download_size: 10375788
dataset_size: 10421311.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ar_historicalbooks_instruct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ifrah1/parallel_eng_ur | ifrah1 | "2024-11-22T03:03:42Z" | 6 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T03:03:36Z" | ---
dataset_info:
features:
- name: English
dtype: string
- name: Urdu
dtype: string
splits:
- name: train
num_bytes: 26447748.764883477
num_examples: 85853
- name: test
num_bytes: 6612168.235116524
num_examples: 21464
download_size: 19520727
dataset_size: 33059917.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
hoonikoo/ppdoor_dpo_split | hoonikoo | "2024-11-22T03:13:36Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T03:12:39Z" | ---
license: apache-2.0
---
|
hev832s/inset | hev832s | "2024-11-22T04:06:39Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-22T04:06:01Z" | ---
license: apache-2.0
---
|
dgambettaphd/D_gen8_run0_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-11-22T04:06:12Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:06:09Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523694
num_examples: 1000
download_size: 288332
dataset_size: 523694
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit007/treated_0.5 | procit007 | "2024-11-22T04:08:29Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:06:14Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
- name: treated
dtype: bool
- name: metrics
struct:
- name: clipping_ratio
dtype: float64
- name: duration
dtype: float64
- name: is_valid
dtype: bool
- name: rms_energy
dtype: float64
- name: sample_rate
dtype: int64
- name: silence_ratio
dtype: float64
- name: snr
dtype: float64
splits:
- name: train
num_bytes: 3188358457.0
num_examples: 10000
download_size: 2987430472
dataset_size: 3188358457.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/processed_image_seen_dpo_ours_withjpg_num500 | Asap7772 | "2024-11-22T04:14:37Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:13:26Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
splits:
- name: test
num_bytes: 1053569098
num_examples: 500
download_size: 1047189316
dataset_size: 1053569098
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
TwinDoc/test-multiple-lora-serving_nn_70k | TwinDoc | "2024-11-22T05:20:28Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:20:12Z" | ---
dataset_info:
features:
- name: category
dtype: string
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 183153410
num_examples: 70000
download_size: 102716289
dataset_size: 183153410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TwinDoc/test-multiple-lora-serving_nn_70k_classification | TwinDoc | "2024-11-22T05:23:21Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:23:10Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 178051903
num_examples: 70000
download_size: 99479613
dataset_size: 178051903
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TwinDoc/test-multiple-lora-serving_nn_70k_generation | TwinDoc | "2024-11-22T05:26:32Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:26:17Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 182313410
num_examples: 70000
download_size: 102699112
dataset_size: 182313410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/oh_v3-1_only_evol_instruct_140k | mlfoundations-dev | "2024-11-22T05:31:36Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:31:27Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: source_label_exact
sequence: string
splits:
- name: train
num_bytes: 236898766
num_examples: 73560
download_size: 124062164
dataset_size: 236898766
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen9_run0_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-11-22T05:44:53Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:44:51Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523661
num_examples: 1000
download_size: 287968
dataset_size: 523661
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hridaydutta123/YT-100K | hridaydutta123 | "2024-11-22T06:55:06Z" | 6 | 0 | [
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:token-classification",
"task_categories:zero-shot-classification",
"task_categories:sentence-similarity",
"task_categories:text-to-speech",
"license:cc",
"size_categories:100K<n<1M",
"modality:text",
"doi:10.57967/hf/3602",
"region:us",
"text",
"summarization"
] | [
"text-classification",
"feature-extraction",
"token-classification",
"zero-shot-classification",
"sentence-similarity",
"text-to-speech"
] | "2024-11-22T06:22:29Z" | ---
license: cc
tags:
- text
- summarization
task_categories:
- text-classification
- feature-extraction
- token-classification
- zero-shot-classification
- sentence-similarity
- text-to-speech
size_categories:
- 100K<n<1M
---
# <span style="color:Red">A larger version of the YT-100K dataset -> the YT-30M dataset, with 30 million multilingual, multi-category YouTube comments, is also available and can be obtained by directly emailing the author of this dataset.</span>
# Introduction
This work introduces two large-scale multilingual comment datasets from YouTube, YT-30M and YT-100K. The code and both datasets, YT-30M (full) and YT-100K (a randomly selected 100K sample from YT-30M), are publicly released for further research. YT-30M (YT-100K) contains 32M (100K) comments posted on YouTube channels belonging to different YouTube categories. Each comment is associated with a video ID, comment ID, commenter name, commenter channel ID, comment text, upvotes, original channel ID, and the category of the YouTube channel (e.g., News & Politics, Science & Technology, etc.).
# Data Description
Each entry in the dataset is related to one comment for a specific YouTube video in the related category, with the following columns: videoID, commentID, commenterName, commenterChannelID, comment, votes, originalChannelID, category. Each field is explained below:
```
videoID: represents the video ID in YouTube.
commentID: represents the comment ID.
commenterName: represents the name of the commenter.
commenterChannelID: represents the ID of the commenter.
comment: represents the comment text.
votes: represents the upvotes received by that comment.
originalChannelID: represents the ID of the original channel that posted the video.
category: represents the category of the YouTube video.
```
# Data Anonymization
The data is anonymized by removing all Personally Identifiable Information (PII).
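The exact anonymization procedure is not documented; the identifiers in the sample below look like fixed-length hex digests, so one plausible (purely hypothetical) approach is a one-way hash of each identifying field, sketched here with Python's standard `hashlib`.
```python
import hashlib

def anonymize(value: str) -> str:
    """Replace an identifying string with a one-way MD5 hex digest.
    Illustration only: the dataset authors do not document the actual
    hashing scheme or whether a salt was used."""
    return hashlib.md5(value.encode("utf-8")).hexdigest()

record = {
    "commenterName": "Some Real Name",       # hypothetical raw value
    "commenterChannelID": "UCxxxxxxxxxxxx",   # hypothetical raw value
}
anonymized = {key: anonymize(val) for key, val in record.items()}
print(anonymized)
```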
# Data sample
```
{
"videoID": "ab9fe84e2b2406efba4c23385ef9312a",
"commentID": "488b24557cf81ed56e75bab6cbf76fa9",
"commenterName": "b654822a96eae771cbac945e49e43cbd",
"commenterChannelID": "2f1364f249626b3ca514966e3ef3aead",
"comment": "ich fand den Handelwecker am besten",
"votes": 2,
"originalChannelID": "oc_2f1364f249626b3ca514966e3ef3aead",
"category": "entertainment"
}
```
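Because each entry is a flat JSON object with the columns listed above, the records are straightforward to process. The sketch below assumes the comments are stored one JSON object per line in a file named `yt100k.jsonl` (the actual file name and layout in this repository may differ) and counts comments per category.
```python
import json
from collections import Counter

# Hypothetical file name; adjust to the actual file shipped with the dataset.
path = "yt100k.jsonl"

category_counts = Counter()
with open(path, encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # Fields follow the schema documented above.
        category_counts[record["category"]] += 1

for category, count in category_counts.most_common(10):
    print(f"{category:25s} {count}")
```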
# Multilingual data
| **Language** | **Text** |
|--------------|---------------------------------------------------|
| English | You girls are so awesome!! |
| Russian | ะขะพัะฝะพ ัะฐะบ ะถะต ะฏ ัััะตะปะตั |
| Hindi | เคเค เคญเฅ เคญเคพเค เคส เคเคตเคพเค เคฎเฅเค เคตเคนเฅ เคชเฅเคฐเคพเคจเฅ เคฌเคพเคค เคนเฅ.... |
| Chinese | ็ก่ซๅฆไฝ,ไฝ ๅทฒ็ถๆฏๅฐ็ฃYT่จ้ฑๆธไน้ฆ |
| Bengali | เฆเงเฆฟเฆจ เฆนเฆพเฆฟเฆธเฆจเฆพเงเฆ เฆญเฆพเฆฐเงเฆคเฆฐ ร เฆงเฆพเฆจเฆฎเฆจเง... |
| Spanish | jajajaj esto tiene que ser una brom |
| Portuguese | nossa senhora!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!... |
| Malayalam | เดจเดฎเดธเตเดเดพเดฐเด |
| Telegu | เฐจเฐฎเฐธเฐพเฐเฑเฐฐเฐ |
| Japanese | ใใใซใกใฏ |
# License
[CC](https://choosealicense.com/licenses/cc-by-4.0/#) |
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_8 | ZixuanKe | "2024-11-22T06:26:21Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:26:20Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5294108
num_examples: 932
download_size: 412031
dataset_size: 5294108
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_8 | ZixuanKe | "2024-11-22T06:36:03Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:36:02Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 202258
num_examples: 38
download_size: 34469
dataset_size: 202258
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_2 | ZixuanKe | "2024-11-22T06:36:15Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:36:14Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5423925
num_examples: 918
download_size: 398548
dataset_size: 5423925
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_24 | ZixuanKe | "2024-11-22T06:38:17Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:38:14Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5830285
num_examples: 994
download_size: 406652
dataset_size: 5830285
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_19 | ZixuanKe | "2024-11-22T06:38:49Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:38:47Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5000993
num_examples: 903
download_size: 405686
dataset_size: 5000993
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_2 | ZixuanKe | "2024-11-22T06:46:13Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:46:12Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 244634
num_examples: 49
download_size: 21688
dataset_size: 244634
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_19 | ZixuanKe | "2024-11-22T06:49:06Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:49:04Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 154519
num_examples: 40
download_size: 45667
dataset_size: 154519
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_24 | ZixuanKe | "2024-11-22T06:49:24Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:49:23Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 270318
num_examples: 55
download_size: 43012
dataset_size: 270318
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_14 | ZixuanKe | "2024-11-22T06:51:04Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:51:03Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 413238
num_examples: 40
download_size: 26391
dataset_size: 413238
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/ar_adab_instruct | ahmedheakl | "2024-11-24T12:32:37Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:00:46Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 55728905.5
num_examples: 15028
download_size: 17478257
dataset_size: 55728905.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/ar_synthesizear_instruct | ahmedheakl | "2024-11-24T12:47:53Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:19:51Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 782137817.686
num_examples: 39069
download_size: 720777071
dataset_size: 782137817.686
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
googlefan/lami-voice | googlefan | "2024-11-22T07:33:02Z" | 6 | 0 | [
"task_categories:text-to-speech",
"language:ja",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-to-speech"
] | "2024-11-22T07:20:21Z" | ---
language:
- ja
pretty_name: lami-voice
task_categories:
- text-to-speech
---
# Credit
Lami
# Website
https://lami.zip/
# License information
Those who have been granted permission may use the data for any purpose, but must not make this data itself public.
Don't reupload this dataset to another repo/website or anywhere public. |
dgambettaphd/D_gen10_run0_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-11-22T07:23:58Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:23:55Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523724
num_examples: 1000
download_size: 287932
dataset_size: 523724
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit007/treated_0.8 | procit007 | "2024-11-22T07:42:12Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:40:37Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
- name: treated
dtype: bool
- name: metrics
struct:
- name: clipping_ratio
dtype: float64
- name: duration
dtype: float64
- name: is_valid
dtype: bool
- name: rms_energy
dtype: float64
- name: sample_rate
dtype: int64
- name: silence_ratio
dtype: float64
- name: snr
dtype: float64
splits:
- name: train
num_bytes: 2507567161.05
num_examples: 7815
download_size: 2354730617
dataset_size: 2507567161.05
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_6 | ZixuanKe | "2024-11-22T07:56:55Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:56:53Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5354281
num_examples: 978
download_size: 392064
dataset_size: 5354281
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_6 | ZixuanKe | "2024-11-22T08:07:52Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:07:51Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 266421
num_examples: 39
download_size: 23296
dataset_size: 266421
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reflection-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-binarized | reflection-gen | "2024-11-22T21:59:10Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:13:45Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: chosen_probs
dtype: float64
- name: chosen_probs_win
dtype: float64
- name: chosen_probs_lose
dtype: float64
splits:
- name: train
num_bytes: 8212019
num_examples: 2156
download_size: 0
dataset_size: 8212019
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-binarized_all_pairs | reflection-gen | "2024-11-22T21:59:13Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:13:49Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 15339021
num_examples: 4030
download_size: 0
dataset_size: 15339021
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-binarized_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
anupam215769/coding-instruct-llama2-1k | anupam215769 | "2024-11-22T08:16:42Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:16:38Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 539037
num_examples: 1000
download_size: 272886
dataset_size: 539037
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jacpetro/Code_Vulnerability_Security_DPO | jacpetro | "2024-11-22T08:29:04Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:26:24Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 5682673
num_examples: 4656
download_size: 2333743
dataset_size: 5682673
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen0_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T08:27:42Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:27:39Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 577936
num_examples: 1000
download_size: 360829
dataset_size: 577936
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BlinkVision/BlinkVision_train | BlinkVision | "2024-11-22T08:49:20Z" | 6 | 0 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-11-22T08:49:20Z" | ---
license: cc-by-4.0
---
|
Asap7772/gsm8k_fewshot_prompt | Asap7772 | "2024-11-22T09:06:59Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:05:49Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: fewshot_prompt
dtype: string
splits:
- name: train
num_bytes: 41447996
num_examples: 7473
- name: test
num_bytes: 7219454
num_examples: 1319
download_size: 21288601
dataset_size: 48667450
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
jasong03/tokenized_ds_clm_qwen | jasong03 | "2024-11-22T09:19:18Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:08:08Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 2868583600
num_examples: 53860
download_size: 725191852
dataset_size: 2868583600
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit007/STT_2.0.0_rc0 | procit007 | "2024-11-22T09:37:35Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:11:41Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 22382774642.4
num_examples: 70252
- name: validation
num_bytes: 2797687526.8307576
num_examples: 8781
- name: test
num_bytes: 2798006133.7692423
num_examples: 8782
download_size: 26241622527
dataset_size: 27978468303.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Summer926/chonglaimimangzhiwang | Summer926 | "2024-11-22T09:36:37Z" | 6 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:34:21Z" | ---
license: cc-by-4.0
---
|
AlexKarap/AsylK | AlexKarap | "2024-11-22T09:47:38Z" | 6 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:45:44Z" | ---
license: apache-2.0
---
|
dgambettaphd/D_gen1_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T09:46:45Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:46:43Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 581274
num_examples: 1000
download_size: 356852
dataset_size: 581274
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
LLMsForHepth/astro | LLMsForHepth | "2024-11-22T09:51:00Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:50:55Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: abstract
dtype: string
splits:
- name: test
num_bytes: 39639451
num_examples: 32624
download_size: 22929802
dataset_size: 39639451
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
jfcalvo/argilla-testing-export-01 | jfcalvo | "2024-11-22T09:57:47Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:57:42Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
sequence: string
splits:
- name: train
num_bytes: 13165429
num_examples: 10000
download_size: 8347440
dataset_size: 13165429
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jfcalvo/argilla-testing-export-04 | jfcalvo | "2024-11-22T10:03:04Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T10:03:00Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: status
dtype: string
- name: _server_id
dtype: string
- name: text
dtype: string
- name: label.responses
sequence: string
- name: label.responses.users
sequence: string
- name: label.responses.status
sequence: string
splits:
- name: train
num_bytes: 13894725
num_examples: 10000
download_size: 8796662
dataset_size: 13894725
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jfcalvo/argilla-testing-export-13 | jfcalvo | "2024-11-22T10:43:33Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T10:43:28Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: status
dtype: string
- name: _server_id
dtype: string
- name: text
dtype: string
- name: label.responses
sequence: string
- name: label.responses.users
sequence: string
- name: label.responses.status
sequence: string
splits:
- name: train
num_bytes: 13898631
num_examples: 10000
download_size: 8801079
dataset_size: 13898631
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
donghuna/gsm8k_with_plan | donghuna | "2024-11-22T11:01:03Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T11:01:00Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: plan
dtype: string
splits:
- name: train
num_bytes: 1333566
num_examples: 1000
download_size: 582394
dataset_size: 1333566
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
data-is-better-together/imgsys-results-prompts-style_v2_part2_loaded | data-is-better-together | "2024-11-22T11:34:41Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T11:17:29Z" | ---
dataset_info:
features:
- name: quality_prompt
dtype: string
- name: category
dtype: string
- name: subcategory
dtype: string
- name: style_prompt
dtype: string
- name: simplified_prompt
dtype: string
- name: __index_level_0__
dtype: int64
- name: image_quality_dev
struct:
- name: path
dtype: string
- name: grouped_model_name
sequence: string
- name: prompt
dtype: string
- name: image_simplified_dev
struct:
- name: path
dtype: string
- name: image_quality_sd
struct:
- name: path
dtype: string
- name: image_simplified_sd
struct:
- name: path
dtype: string
- name: distilabel_metadata
struct:
- name: raw_input_image_gen_quality_dev
struct:
- name: prompt
dtype: string
- name: raw_input_image_gen_quality_sd
struct:
- name: prompt
dtype: string
- name: raw_input_image_gen_simplified_dev
struct:
- name: prompt
dtype: string
- name: raw_input_image_gen_simplified_sd
struct:
- name: prompt
dtype: string
- name: raw_output_image_gen_quality_dev
struct:
- name: image
dtype: string
- name: raw_output_image_gen_quality_sd
struct:
- name: image
dtype: string
- name: raw_output_image_gen_simplified_dev
struct:
- name: image
dtype: string
- name: raw_output_image_gen_simplified_sd
struct:
- name: image
dtype: string
- name: image_quality_dev_loaded
dtype: image
- name: image_simplified_dev_loaded
dtype: image
- name: image_quality_sd_loaded
dtype: image
- name: image_simplified_sd_loaded
dtype: image
splits:
- name: train
num_bytes: 19171262392.552002
num_examples: 14587
download_size: 19177786250
dataset_size: 19171262392.552002
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
<p align="center">
<img src="https://huggingface.co/blog/assets/community-datasets/thumbnail.png" width="500px"/>
</p>
<p align="center">🤗 <a href="https://huggingface.co/DIBT" target="_blank">Spaces & Datasets</a></p>
# Data is Better Together
> If you are working on a valuable community-developed dataset but are limited by available resources, please reach out to us on the Hugging Face discord. We may be able to provide support to enhance your project.
Data is Better Together is a collaboration between 🤗 Hugging Face, Argilla, and the Open-Source ML community. We aim to empower the open-source community to build impactful datasets collectively. This initiative consists of two main components: the community efforts and the cookbook efforts.
<details open>
<summary><strong>Community Efforts</strong>: hands-on projects, guided by the HF team, focused on creating valuable datasets. These projects relied on community participation and have been successfully completed.</summary>
<ul>
<details>
<summary><strong>Prompt ranking</strong></summary>
- **Goal**: This project aimed to create a dataset of 10k prompts ranked by quality. The prompts included both synthetic and human-generated ones drawn from various datasets. The intention was to use the final dataset for prompt ranking tasks or synthetic data generation. You can find more information about this project in the [prompt ranking README](community-efforts/prompt_ranking/README.md).
- **How**: First, we prepared a dataset with the prompts to be ranked using Argilla in a Hugging Face Space. Then, we invited the community to rank the prompts based on their quality. Finally, we collected the annotations and released the dataset.
- **Result**: Over 385 people joined this initiative! Thanks to their contributions, we released [DIBT/10k_prompts_ranked](https://huggingface.co/datasets/DIBT/10k_prompts_ranked). This dataset can be used for different tasks: you can filter for the higher-quality prompts (for instance, see the MPEP project) and generate the corresponding completions, as sketched below. You can also find some models built on top of it [here](https://huggingface.co/models?dataset=dataset:DIBT/10k_prompts_ranked).
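As a quick illustration of the filtering mentioned above, here is a minimal sketch using the `datasets` library; the `avg_rating` column name and the rating scale are assumptions about the released schema, so check the dataset card before relying on them.

```python
from datasets import load_dataset

# Load the community-ranked prompts (column names below are assumptions).
ranked = load_dataset("DIBT/10k_prompts_ranked", split="train")

# Keep only prompts whose average rating is high, e.g. >= 4 on an assumed 1-5 scale.
high_quality = ranked.filter(
    lambda row: row["avg_rating"] is not None and row["avg_rating"] >= 4
)

print(f"{len(high_quality)} high-quality prompts out of {len(ranked)}")
```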
</details>
<details>
<summary><strong>Multilingual Prompt Evaluation Project (MPEP)</strong></summary>
- **Goal**: There are not enough language-specific benchmarks for open LLMs! So, we wanted to create a leaderboard for more languages by leveraging the community. This way, we could evaluate the performance of models using [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval). You can find more information about this project in the [MPEP README](community-efforts/prompt_translation/README.md).
- **How**: We selected a subset of 500 high-quality prompts from the [DIBT/10k_prompts_ranked](https://huggingface.co/datasets/DIBT/10k_prompts_ranked) (see the prompt ranking project) and asked the community to help us translate this curated prompt dataset into different languages.
- **Result**: We managed to translate the whole dataset into Dutch and Russian, and almost finished Spanish. Many other languages also joined this initiative. You can take a look at the resulting datasets [here](https://huggingface.co/datasets?search=MPEP).
</details>
<details>
<summary><strong>Image Preferences</strong></summary>
- **Goal**: This project aims to create 10K text-to-image preference pairs. These pairs can be used to evaluate the performance of image generation models across a wide variety of common image categories, based on prompts with varying levels of difficulty. You can find more information about this project in the [image preferences README](community-efforts/image_preferences/README.md).
- **How**: We used the prompts from [fal/imgsys-results](https://huggingface.co/datasets/fal/imgsys-results), evolving them for complexity and quality across various image categories. We then asked the community to annotate their preference between two generated images for each prompt.
- **Result**: We managed to annotate 10K preference pairs. You can take a look at the resulting dataset [here](https://huggingface.co/datasets/DIBT/image_preferences).
</details>
</ul>
<details open>
<summary><strong>Cookbook Efforts</strong>: These efforts aim to create guides and tools that help the community build valuable datasets. They are not guided by the HF team and are meant to be used standalone, allowing you to freely contribute to them or use them to create your own unique dataset.</summary>
<ul>
<details>
<summary><strong>Domain Specific Datasets</strong></summary>
This project aims to bootstrap the creation of more domain-specific datasets for training models. The **goal** is to create a set of tools that help users collaborate with domain experts. Find out more in the [Domain Specific Datasets README.](cookbook-efforts/domain-specific-datasets/README.md)
</details>
<details>
<summary><strong>DPO/ORPO Datasets</strong></summary>
Many languages do not have DPO datasets openly shared on the Hugging Face Hub. The [DIBT/preference_data_by_language](https://huggingface.co/spaces/DIBT/preference_data_by_language) Space gives you an overview of DPO dataset coverage across languages. The **goal** of this project is to help foster a community of people building more DPO-style datasets for different languages. Find out more in this [DPO/ORPO datasets README](cookbook-efforts/dpo-orpo-preference/README.md).
</details>
<details>
<summary><strong>KTO Datasets</strong></summary>
KTO is another type of preference dataset that can be used to train models to make decisions. Unlike DPO, it doesn't require two candidate responses. Instead, it relies on a simple binary preference, i.e. 👍/👎. Thus, data is easier to collect and annotate. The **goal** of this project is to help the community create their own KTO dataset. Find out more in this [KTO datasets README](cookbook-efforts/kto-preference/README.md).
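To make the contrast concrete, here is a minimal sketch of what a DPO-style record versus a KTO-style record might look like; the field names and example values are illustrative assumptions rather than a fixed schema.

```python
# DPO-style record: one prompt paired with two candidate responses
# (field names here are illustrative, not a required schema).
dpo_example = {
    "prompt": "Explain what a dataset card is.",
    "chosen": "A dataset card documents a dataset's contents, sources, and intended uses.",
    "rejected": "It's just a file.",
}

# KTO-style record: one prompt, one completion, and a single binary label.
kto_example = {
    "prompt": "Explain what a dataset card is.",
    "completion": "A dataset card documents a dataset's contents, sources, and intended uses.",
    "label": True,  # True = desirable (thumbs up), False = undesirable (thumbs down)
}
```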
</details>
</ul>
**How can I contribute to the cookbook efforts?** That's easy! You can contribute by following the instructions in the README of the project you are interested in. Then, share your results with the community!
</details>
|
dgambettaphd/D_gen2_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T11:22:41Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T11:22:38Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 581412
num_examples: 1000
download_size: 354179
dataset_size: 581412
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FiscaAI/icd10cm-prompt | FiscaAI | "2024-11-22T12:10:10Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T12:07:10Z" | ---
dataset_info:
features:
- name: system
dtype: string
- name: user
dtype: string
- name: assistant
dtype: string
- name: codes
sequence: string
splits:
- name: train
num_bytes: 47452636
num_examples: 74260
download_size: 3398522
dataset_size: 47452636
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Nash-pAnDiTa/Casablanca-EG | Nash-pAnDiTa | "2024-11-22T13:14:59Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T12:46:37Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 1250014939.682
num_examples: 1658
download_size: 1122263634
dataset_size: 1250014939.682
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
oserikov/pmi-selkup | oserikov | "2024-11-22T14:29:36Z" | 6 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T14:29:30Z" | ---
dataset_info:
features:
- name: all
struct:
- name: interlinear-text
list:
- name: item
struct:
- name: source
dtype: string
- name: paragraph
list:
- name: item
struct:
- name: speaker
dtype: string
- name: phrase
list:
- name: item
struct:
- name: ft
dtype: string
- name: id
dtype: string
- name: participant
dtype: string
- name: timestamp
sequence: string
- name: word
list:
list:
- name: item
struct:
- name: grammar_tags
sequence: string
- name: translation
sequence: string
- name: txt
dtype: string
- name: morph
list:
- name: item
struct:
- name: gls
dtype: string
- name: id
dtype: string
- name: txt
dtype: string
- name: item
dtype: 'null'
splits:
- name: train
num_bytes: 29025
num_examples: 1
download_size: 23291
dataset_size: 29025
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jfcalvo/argilla-testing-export-19 | jfcalvo | "2024-11-22T15:28:26Z" | 6 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T15:28:23Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: status
dtype:
class_label:
names:
'0': pending
'1': completed
- name: _server_id
dtype: string
splits:
- name: train
num_bytes: 618890
num_examples: 10000
download_size: 446821
dataset_size: 618890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
None1145/Rosmontis | None1145 | "2024-11-22T15:48:27Z" | 6 | 1 | [
"task_categories:text-to-speech",
"language:zh",
"language:ja",
"language:ko",
"license:mit",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us",
"Rosmontis",
"Arknights",
"่ฟท่ฟญ้ฆ",
"ๆๆฅๆน่ใ"
] | [
"text-to-speech"
] | "2024-11-22T15:29:21Z" | ---
license: mit
language:
- zh
- ja
- ko
tags:
- Rosmontis
- Arknights
- 迷迭香
- 明日方舟
task_categories:
- text-to-speech
pretty_name: Rosmontis
--- |
dgambettaphd/D_gen5_run0_llama2-7b_wiki_doc1000_real64_synt64 | dgambettaphd | "2024-11-22T15:34:46Z" | 6 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T15:34:43Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 579853
num_examples: 1000
download_size: 352412
dataset_size: 579853
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|