datasetId (string, 5-121 chars) | author (string, 2-42 chars) | last_modified (unknown) | downloads (int64, 0-2.54M) | likes (int64, 0-6.35k) | tags (sequence, 1-7.92k items) | task_categories (sequence, 0-40 items, nullable) | createdAt (unknown) | card (string, 19-1M chars)
---|---|---|---|---|---|---|---|---|
reflection-gen/ds_coder6.7b_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-binarized_all_pairs | reflection-gen | "2024-11-21T12:22:53Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:22:52Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 11177269
num_examples: 3268
download_size: 3417576
dataset_size: 11177269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder6.7b_rmsprop_iter4_sppo_hard_new_cn_mining_oj_iter4-binarized_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details | open-llm-leaderboard | "2024-11-21T12:34:34Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:31:38Z" | ---
pretty_name: Evaluation run of FuseAI/FuseChat-7B-v2.0
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [FuseAI/FuseChat-7B-v2.0](https://huggingface.co/FuseAI/FuseChat-7B-v2.0)\nThe\
\ dataset is composed of 38 configurations, each corresponding to one of the\
\ evaluated tasks.\n\nThe dataset has been created from 1 run. Each run can be\
\ found as a specific split in each configuration, the split being named using the\
\ timestamp of the run. The \"latest\" split always points to the latest results.\n\
\nAn additional configuration \"results\" stores all the aggregated results of the\
\ run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details\"\
,\n\tname=\"FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-11-21T12-31-37.629340](https://huggingface.co/datasets/open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details/blob/main/FuseAI__FuseChat-7B-v2.0/results_2024-11-21T12-31-37.629340.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks; you can find each one in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.46166818004929305,\n \"acc_norm_stderr,none\"\
: 0.005333627579637958,\n \"prompt_level_loose_acc,none\": 0.28650646950092423,\n\
\ \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \
\ \"inst_level_loose_acc,none\": 0.43764988009592326,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.266173752310536,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.019018766847290668,\n \
\ \"inst_level_strict_acc,none\": 0.4184652278177458,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"acc,none\": 0.3162400265957447,\n \"acc_stderr,none\"\
: 0.004239448779714145,\n \"exact_match,none\": 0.0634441087613293,\n\
\ \"exact_match_stderr,none\": 0.00659129853658391,\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.4924492275646589,\n \"acc_norm_stderr,none\": 0.006149524947613364,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.6149732620320856,\n\
\ \"acc_norm_stderr,none\": 0.03567936280544673\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\":\
\ 0.03162125257572558\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.568,\n\
\ \"acc_norm_stderr,none\": 0.03139181076542941\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\":\
\ 0.02936106757521985\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\":\
\ 0.03153986449255664\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4794520547945205,\n \"acc_norm_stderr,none\": 0.041487661809251744\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.444,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.548,\n \
\ \"acc_norm_stderr,none\": 0.03153986449255664\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.651685393258427,\n\
\ \"acc_norm_stderr,none\": 0.035811144737534356\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.236,\n\
\ \"acc_norm_stderr,none\": 0.026909337594953852\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.48,\n \
\ \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"\
leaderboard_gpqa\": {\n \"acc_norm,none\": 0.30201342281879195,\n \
\ \"acc_norm_stderr,none\": 0.01330822752388189,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2777777777777778,\n \"acc_norm_stderr,none\": 0.03191178226713548\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.31684981684981683,\n\
\ \"acc_norm_stderr,none\": 0.019929048938214563\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29464285714285715,\n \"acc_norm_stderr,none\"\
: 0.021562481080109767\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.266173752310536,\n \"prompt_level_strict_acc_stderr,none\": 0.019018766847290668,\n\
\ \"inst_level_strict_acc,none\": 0.4184652278177458,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.28650646950092423,\n \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n\
\ \"inst_level_loose_acc,none\": 0.43764988009592326,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0634441087613293,\n \"exact_match_stderr,none\"\
: 0.00659129853658391,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.13029315960912052,\n\
\ \"exact_match_stderr,none\": 0.019243609597826783\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.04065040650406504,\n \"exact_match_stderr,none\": 0.017878907564437465\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.022727272727272728,\n\
\ \"exact_match_stderr,none\": 0.0130210469090637\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.014285714285714285,\n \"exact_match_stderr,none\": 0.0071043508939153165\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.03896103896103896,\n\
\ \"exact_match_stderr,none\": 0.015643720451650286\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.10880829015544041,\n \"exact_match_stderr,none\"\
: 0.02247325333276875\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.037037037037037035,\n \"exact_match_stderr,none\"\
: 0.016314377626726044\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.3162400265957447,\n\
\ \"acc_stderr,none\": 0.004239448779714145\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.47883597883597884,\n \"acc_norm_stderr,none\"\
: 0.017936118983800375,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.584,\n\
\ \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\":\
\ 0.03031695312954162\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ }\n },\n \"leaderboard\": {\n \"acc_norm,none\": 0.46166818004929305,\n\
\ \"acc_norm_stderr,none\": 0.005333627579637958,\n \"prompt_level_loose_acc,none\"\
: 0.28650646950092423,\n \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n\
\ \"inst_level_loose_acc,none\": 0.43764988009592326,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_strict_acc,none\": 0.266173752310536,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.019018766847290668,\n \"inst_level_strict_acc,none\"\
: 0.4184652278177458,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"acc,none\": 0.3162400265957447,\n \"acc_stderr,none\": 0.004239448779714145,\n\
\ \"exact_match,none\": 0.0634441087613293,\n \"exact_match_stderr,none\"\
: 0.00659129853658391,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.4924492275646589,\n \"acc_norm_stderr,none\"\
: 0.006149524947613364,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.792,\n \"acc_norm_stderr,none\": 0.025721398901416368\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6149732620320856,\n \"acc_norm_stderr,none\"\
: 0.03567936280544673\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.588,\n \"acc_norm_stderr,none\": 0.031191596026022818\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.568,\n \"acc_norm_stderr,none\": 0.03139181076542941\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.788,\n \"acc_norm_stderr,none\": 0.025901884690541117\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.4,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.76,\n \"acc_norm_stderr,none\": 0.027065293652238982\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.452,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4794520547945205,\n\
\ \"acc_norm_stderr,none\": 0.041487661809251744\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.444,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.651685393258427,\n \"acc_norm_stderr,none\"\
: 0.035811144737534356\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.78,\n \"acc_norm_stderr,none\": 0.02625179282460579\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.236,\n \"acc_norm_stderr,none\": 0.026909337594953852\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.18,\n \"acc_norm_stderr,none\": 0.02434689065029351\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.3,\n \"acc_norm_stderr,none\": 0.029040893477575783\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.30201342281879195,\n\
\ \"acc_norm_stderr,none\": 0.01330822752388189,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2777777777777778,\n\
\ \"acc_norm_stderr,none\": 0.03191178226713548\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.31684981684981683,\n \"acc_norm_stderr,none\": 0.019929048938214563\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29464285714285715,\n \"acc_norm_stderr,none\"\
: 0.021562481080109767\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.266173752310536,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.019018766847290668,\n \
\ \"inst_level_strict_acc,none\": 0.4184652278177458,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.28650646950092423,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \"inst_level_loose_acc,none\"\
: 0.43764988009592326,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0634441087613293,\n\
\ \"exact_match_stderr,none\": 0.00659129853658391,\n \"alias\": \"\
\ - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.13029315960912052,\n \"exact_match_stderr,none\": 0.019243609597826783\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.04065040650406504,\n \"exact_match_stderr,none\": 0.017878907564437465\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.022727272727272728,\n \"exact_match_stderr,none\"\
: 0.0130210469090637\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.014285714285714285,\n \"exact_match_stderr,none\"\
: 0.0071043508939153165\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.03896103896103896,\n \"exact_match_stderr,none\": 0.015643720451650286\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.10880829015544041,\n \"exact_match_stderr,none\"\
: 0.02247325333276875\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.037037037037037035,\n \"exact_match_stderr,none\": 0.016314377626726044\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.3162400265957447,\n \"acc_stderr,none\": 0.004239448779714145\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.47883597883597884,\n\
\ \"acc_norm_stderr,none\": 0.017936118983800375,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.584,\n \"acc_norm_stderr,none\": 0.031235856237014505\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.375,\n \"acc_norm_stderr,none\": 0.03031695312954162\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ }\n}\n```"
repo_url: https://huggingface.co/FuseAI/FuseChat-7B-v2.0
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_ifeval
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_ifeval_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T12-31-37.629340.jsonl'
- config_name: FuseAI__FuseChat-7B-v2.0__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T12_31_37.629340
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T12-31-37.629340.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T12-31-37.629340.jsonl'
---
# Dataset Card for Evaluation run of FuseAI/FuseChat-7B-v2.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [FuseAI/FuseChat-7B-v2.0](https://huggingface.co/FuseAI/FuseChat-7B-v2.0).
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details",
name="FuseAI__FuseChat-7B-v2.0__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
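As a follow-up sketch (not part of the original card), you can enumerate the available configurations and splits before loading; `get_dataset_config_names` and `get_dataset_split_names` are standard `datasets` helpers:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details"

# One configuration per evaluated task (38 for this run).
configs = get_dataset_config_names(repo)

# Each configuration exposes one timestamped split per run, plus "latest".
print(get_dataset_split_names(repo, configs[0]))
```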
## Latest results
These are the [latest results from run 2024-11-21T12-31-37.629340](https://huggingface.co/datasets/open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details/blob/main/FuseAI__FuseChat-7B-v2.0/results_2024-11-21T12-31-37.629340.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.46166818004929305,
"acc_norm_stderr,none": 0.005333627579637958,
"prompt_level_loose_acc,none": 0.28650646950092423,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.43764988009592326,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.266173752310536,
"prompt_level_strict_acc_stderr,none": 0.019018766847290668,
"inst_level_strict_acc,none": 0.4184652278177458,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3162400265957447,
"acc_stderr,none": 0.004239448779714145,
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.00659129853658391,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4924492275646589,
"acc_norm_stderr,none": 0.006149524947613364,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4794520547945205,
"acc_norm_stderr,none": 0.041487661809251744
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.30201342281879195,
"acc_norm_stderr,none": 0.01330822752388189,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31684981684981683,
"acc_norm_stderr,none": 0.019929048938214563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29464285714285715,
"acc_norm_stderr,none": 0.021562481080109767
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.266173752310536,
"prompt_level_strict_acc_stderr,none": 0.019018766847290668,
"inst_level_strict_acc,none": 0.4184652278177458,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.28650646950092423,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.43764988009592326,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.00659129853658391,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13029315960912052,
"exact_match_stderr,none": 0.019243609597826783
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04065040650406504,
"exact_match_stderr,none": 0.017878907564437465
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.014285714285714285,
"exact_match_stderr,none": 0.0071043508939153165
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.03896103896103896,
"exact_match_stderr,none": 0.015643720451650286
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.10880829015544041,
"exact_match_stderr,none": 0.02247325333276875
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.037037037037037035,
"exact_match_stderr,none": 0.016314377626726044
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3162400265957447,
"acc_stderr,none": 0.004239448779714145
},
"leaderboard_musr": {
"acc_norm,none": 0.47883597883597884,
"acc_norm_stderr,none": 0.017936118983800375,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
}
},
"leaderboard": {
"acc_norm,none": 0.46166818004929305,
"acc_norm_stderr,none": 0.005333627579637958,
"prompt_level_loose_acc,none": 0.28650646950092423,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.43764988009592326,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.266173752310536,
"prompt_level_strict_acc_stderr,none": 0.019018766847290668,
"inst_level_strict_acc,none": 0.4184652278177458,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.3162400265957447,
"acc_stderr,none": 0.004239448779714145,
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.00659129853658391,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.4924492275646589,
"acc_norm_stderr,none": 0.006149524947613364,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.792,
"acc_norm_stderr,none": 0.025721398901416368
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6149732620320856,
"acc_norm_stderr,none": 0.03567936280544673
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.588,
"acc_norm_stderr,none": 0.031191596026022818
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.568,
"acc_norm_stderr,none": 0.03139181076542941
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.788,
"acc_norm_stderr,none": 0.025901884690541117
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.4,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.76,
"acc_norm_stderr,none": 0.027065293652238982
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.452,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4794520547945205,
"acc_norm_stderr,none": 0.041487661809251744
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.651685393258427,
"acc_norm_stderr,none": 0.035811144737534356
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.78,
"acc_norm_stderr,none": 0.02625179282460579
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.236,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.18,
"acc_norm_stderr,none": 0.02434689065029351
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.3,
"acc_norm_stderr,none": 0.029040893477575783
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.30201342281879195,
"acc_norm_stderr,none": 0.01330822752388189,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31684981684981683,
"acc_norm_stderr,none": 0.019929048938214563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29464285714285715,
"acc_norm_stderr,none": 0.021562481080109767
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.266173752310536,
"prompt_level_strict_acc_stderr,none": 0.019018766847290668,
"inst_level_strict_acc,none": 0.4184652278177458,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.28650646950092423,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.43764988009592326,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0634441087613293,
"exact_match_stderr,none": 0.00659129853658391,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.13029315960912052,
"exact_match_stderr,none": 0.019243609597826783
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.04065040650406504,
"exact_match_stderr,none": 0.017878907564437465
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.022727272727272728,
"exact_match_stderr,none": 0.0130210469090637
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.014285714285714285,
"exact_match_stderr,none": 0.0071043508939153165
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.03896103896103896,
"exact_match_stderr,none": 0.015643720451650286
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.10880829015544041,
"exact_match_stderr,none": 0.02247325333276875
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.037037037037037035,
"exact_match_stderr,none": 0.016314377626726044
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.3162400265957447,
"acc_stderr,none": 0.004239448779714145
},
"leaderboard_musr": {
"acc_norm,none": 0.47883597883597884,
"acc_norm_stderr,none": 0.017936118983800375,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.584,
"acc_norm_stderr,none": 0.031235856237014505
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.375,
"acc_norm_stderr,none": 0.03031695312954162
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
}
}
```
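If you want the raw aggregated file rather than the per-sample configurations, here is a minimal sketch (assuming `huggingface_hub` is installed; the filename is the one linked above):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the aggregated results JSON linked in the section above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/FuseAI__FuseChat-7B-v2.0-details",
    filename="FuseAI__FuseChat-7B-v2.0/results_2024-11-21T12-31-37.629340.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

# Inspect the top-level layout before digging in; it mirrors the
# metrics dictionary shown above.
print(list(results.keys()))
```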
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
juliadollis/mistral_toxic_hatespeech3 | juliadollis | "2024-11-21T12:38:20Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:38:18Z" | ---
dataset_info:
features:
- name: nome_arquivo
dtype: string
- name: file_id
dtype: string
- name: user_id
dtype: int64
- name: subforum_id
dtype: int64
- name: num_contexts
dtype: int64
- name: label
dtype: string
- name: text
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1693
num_examples: 10
download_size: 7545
dataset_size: 1693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
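The YAML above fully declares the schema, so the rows can be pulled down and checked directly. A minimal sketch (assuming the repository is public and loads with the declared `train` split) that measures agreement between the gold `is_toxic` label and the model's `predicted_is_toxic`:
```python
from datasets import load_dataset

# Load the single declared split (10 rows per the YAML above).
ds = load_dataset("juliadollis/mistral_toxic_hatespeech3", split="train")

# Agreement between the gold label and the model prediction.
matches = sum(int(r["is_toxic"] == r["predicted_is_toxic"]) for r in ds)
print(f"agreement: {matches}/{len(ds)}")
```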
|
juliadollis/mistral_toxic_hatespeech5 | juliadollis | "2024-11-21T12:41:18Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:41:14Z" | ---
dataset_info:
features:
- name: nome_arquivo
dtype: string
- name: file_id
dtype: string
- name: user_id
dtype: int64
- name: subforum_id
dtype: int64
- name: num_contexts
dtype: int64
- name: label
dtype: string
- name: text
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1693
num_examples: 10
download_size: 7545
dataset_size: 1693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_toxic_hatespeech6 | juliadollis | "2024-11-21T12:42:36Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:42:34Z" | ---
dataset_info:
features:
- name: nome_arquivo
dtype: string
- name: file_id
dtype: string
- name: user_id
dtype: int64
- name: subforum_id
dtype: int64
- name: num_contexts
dtype: int64
- name: label
dtype: string
- name: text
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 3291
num_examples: 20
download_size: 8496
dataset_size: 3291
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_ImplicitHateCorpus1 | juliadollis | "2024-11-21T12:44:04Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:44:02Z" | ---
dataset_info:
features:
- name: text_en
dtype: string
- name: class
dtype: string
- name: is_toxic
dtype: int64
- name: text
dtype: string
- name: predicted_is_toxic
dtype: int64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 4890
num_examples: 20
download_size: 7892
dataset_size: 4890
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reflection-gen/ds_coder_rmsprop_iter3_sppo_hard_new_cn_mining_oj_iter3-full_response_traceback | reflection-gen | "2024-11-21T12:53:40Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:53:39Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 18336113
num_examples: 1924
download_size: 6400516
dataset_size: 18336113
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_rmsprop_iter3_sppo_hard_new_cn_mining_oj_iter3-full_response_traceback"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_coder_rmsprop_iter3_sppo_hard_new_cn_mining_oj_iter3-binarized_all_pairs | reflection-gen | "2024-11-21T12:53:42Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T12:53:40Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 14718348
num_examples: 3632
download_size: 4183612
dataset_size: 14718348
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_rmsprop_iter3_sppo_hard_new_cn_mining_oj_iter3-binarized_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_rmsprop_iter4_sigmoid_cn_mining_oj_iter4-full_response_traceback | reflection-gen | "2024-11-21T13:08:03Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:08:01Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: text_prompt
dtype: string
- name: text_chosen
dtype: string
- name: text_rejected
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 16332133
num_examples: 2771
download_size: 5963278
dataset_size: 16332133
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_rmsprop_iter4_sigmoid_cn_mining_oj_iter4-full_response_traceback"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
reflection-gen/ds_chat_rmsprop_iter4_sigmoid_cn_mining_oj_iter4-binarized_all_pairs | reflection-gen | "2024-11-21T13:08:04Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:08:03Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: rejected_traceback
dtype: string
- name: test
dtype: string
splits:
- name: train
num_bytes: 14324633
num_examples: 5640
download_size: 3977994
dataset_size: 14324633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_chat_rmsprop_iter4_sigmoid_cn_mining_oj_iter4-binarized_all_pairs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyang816/FLIP_AAV_two-vs-rest | tyang816 | "2024-11-21T13:18:58Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:18:31Z" | ---
license: apache-2.0
---
|
tyang816/FLIP_AAV_mut-des | tyang816 | "2024-11-21T13:31:15Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:30:07Z" | ---
license: apache-2.0
---
|
tyang816/FLIP_AAV_des-mut | tyang816 | "2024-11-21T13:33:09Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:31:38Z" | ---
license: apache-2.0
---
|
tyang816/FLIP_AAV_seven-vs-rest | tyang816 | "2024-11-21T13:34:25Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:33:59Z" | ---
license: apache-2.0
---
|
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_40 | HamdanXI | "2024-11-21T13:37:57Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:36:21Z" | ---
dataset_info:
features:
- name: audio_clip
sequence: float64
- name: layer0_prediction
sequence: float64
- name: predicted_text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 2646635245
num_examples: 100
download_size: 2025426108
dataset_size: 2646635245
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_40"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyang816/FLIP_AAV_low-vs-high | tyang816 | "2024-11-21T13:37:27Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:36:31Z" | ---
license: apache-2.0
---
|
tyang816/FLIP_AAV_sampled | tyang816 | "2024-11-21T13:38:16Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:37:31Z" | ---
license: apache-2.0
---
|
tyang816/FLIP_GB1_one-vs-rest | tyang816 | "2024-11-21T13:39:58Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:39:46Z" | ---
license: apache-2.0
---
|
open-llm-leaderboard/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B-details | open-llm-leaderboard | "2024-11-21T13:48:17Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:44:54Z" | ---
pretty_name: Evaluation run of ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B](https://huggingface.co/ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B)\n\
The dataset is composed of 38 configurations, each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run. Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B-details\"\
,\n\tname=\"ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-21T13-44-53.382689](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B-details/blob/main/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B/results_2024-11-21T13-44-53.382689.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks. You can find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_loose_acc,none\": 0.5970425138632163,\n \"\
prompt_level_loose_acc_stderr,none\": 0.02110743025673167,\n \"exact_match,none\"\
: 0.24169184290030213,\n \"exact_match_stderr,none\": 0.010888216300269935,\n\
\ \"inst_level_strict_acc,none\": 0.6774580335731415,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.43267952127659576,\n\
\ \"acc_stderr,none\": 0.004516963042571534,\n \"inst_level_loose_acc,none\"\
: 0.6990407673860911,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"acc_norm,none\": 0.49306005967051497,\n \"acc_norm_stderr,none\"\
: 0.005306328123826936,\n \"prompt_level_strict_acc,none\": 0.5748613678373382,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02127403980535566,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n\
\ \"acc_norm,none\": 0.5433084533935081,\n \"acc_norm_stderr,none\"\
: 0.006133836853018902,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.864,\n\
\ \"acc_norm_stderr,none\": 0.021723342617052086\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5882352941176471,\n \"acc_norm_stderr,none\"\
: 0.036086405630856196\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.596,\n\
\ \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\"\
: \" - leaderboard_bbh_geometric_shapes\",\n \"acc_norm,none\": 0.46,\n\
\ \"acc_norm_stderr,none\": 0.031584653891499004\n },\n \
\ \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.528,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\":\
\ 0.03166998503010743\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.804,\n \"acc_norm_stderr,none\":\
\ 0.025156857313255922\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \
\ \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \"\
\ - leaderboard_bbh_navigate\",\n \"acc_norm,none\": 0.716,\n \
\ \"acc_norm_stderr,none\": 0.028576958730437443\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5547945205479452,\n \"acc_norm_stderr,none\": 0.04127264774457449\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.464,\n \
\ \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\":\
\ 0.031563285061213475\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.7303370786516854,\n \"acc_norm_stderr,none\": 0.03335689818443925\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.336,\n\
\ \"acc_norm_stderr,none\": 0.02993325909419153\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\":\
\ 0.02391551394448624\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013265362908440905,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.31313131313131315,\n \"acc_norm_stderr,none\": 0.033042050878136546\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.28205128205128205,\n\
\ \"acc_norm_stderr,none\": 0.019275803929950375\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3125,\n \"acc_norm_stderr,none\"\
: 0.021923384489444957\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.5748613678373382,\n \"prompt_level_strict_acc_stderr,none\": 0.021274039805355655,\n\
\ \"inst_level_strict_acc,none\": 0.6774580335731415,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5970425138632163,\n \"prompt_level_loose_acc_stderr,none\": 0.02110743025673167,\n\
\ \"inst_level_loose_acc,none\": 0.6990407673860911,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.24169184290030213,\n \"exact_match_stderr,none\"\
: 0.010888216300269935,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.44625407166123776,\n\
\ \"exact_match_stderr,none\": 0.028417486054945495\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.09090909090909091,\n\
\ \"exact_match_stderr,none\": 0.0251172256361608\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\": \"\
\ - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.07142857142857142,\n \"exact_match_stderr,none\": 0.015418479185779361\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.22077922077922077,\n\
\ \"exact_match_stderr,none\": 0.033532323343787154\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.43523316062176165,\n \"exact_match_stderr,none\"\
: 0.03578038165008584\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.05925925925925926,\n \"exact_match_stderr,none\"\
: 0.02039673654232189\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.43267952127659576,\n\
\ \"acc_stderr,none\": 0.004516963042571534\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4166666666666667,\n \"acc_norm_stderr,none\"\
: 0.017479222590443398,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.556,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.27734375,\n \"acc_norm_stderr,none\"\
: 0.02803528549328419\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_loose_acc,none\"\
: 0.5970425138632163,\n \"prompt_level_loose_acc_stderr,none\": 0.02110743025673167,\n\
\ \"exact_match,none\": 0.24169184290030213,\n \"exact_match_stderr,none\"\
: 0.010888216300269935,\n \"inst_level_strict_acc,none\": 0.6774580335731415,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\":\
\ 0.43267952127659576,\n \"acc_stderr,none\": 0.004516963042571534,\n \
\ \"inst_level_loose_acc,none\": 0.6990407673860911,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"acc_norm,none\": 0.49306005967051497,\n \"acc_norm_stderr,none\"\
: 0.005306328123826936,\n \"prompt_level_strict_acc,none\": 0.5748613678373382,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.02127403980535566,\n \
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.5433084533935081,\n \"acc_norm_stderr,none\": 0.006133836853018902,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"\
acc_norm,none\": 0.864,\n \"acc_norm_stderr,none\": 0.021723342617052086\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5882352941176471,\n \"acc_norm_stderr,none\"\
: 0.036086405630856196\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.62,\n \"acc_norm_stderr,none\": 0.030760116042626098\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.6,\n \"acc_norm_stderr,none\": 0.031046021028253316\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.46,\n \"acc_norm_stderr,none\": 0.031584653891499004\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.528,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.804,\n \"acc_norm_stderr,none\": 0.025156857313255922\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.692,\n \"acc_norm_stderr,none\": 0.02925692860650181\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.716,\n \"acc_norm_stderr,none\": 0.028576958730437443\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5547945205479452,\n\
\ \"acc_norm_stderr,none\": 0.04127264774457449\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.544,\n \"acc_norm_stderr,none\": 0.031563285061213475\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.7303370786516854,\n \"acc_norm_stderr,none\"\
: 0.03335689818443925\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.764,\n \"acc_norm_stderr,none\": 0.026909337594953852\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.336,\n \"acc_norm_stderr,none\": 0.02993325909419153\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\": 0.02391551394448624\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.26,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013265362908440905,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.31313131313131315,\n\
\ \"acc_norm_stderr,none\": 0.033042050878136546\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.28205128205128205,\n \"acc_norm_stderr,none\": 0.019275803929950375\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.3125,\n \"acc_norm_stderr,none\": 0.021923384489444957\n\
\ },\n \"leaderboard_ifeval\": {\n \"alias\": \" - leaderboard_ifeval\"\
,\n \"prompt_level_strict_acc,none\": 0.5748613678373382,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.021274039805355655,\n \"inst_level_strict_acc,none\": 0.6774580335731415,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.5970425138632163,\n \"prompt_level_loose_acc_stderr,none\": 0.02110743025673167,\n\
\ \"inst_level_loose_acc,none\": 0.6990407673860911,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\"\n },\n \"leaderboard_math_hard\": {\n \"exact_match,none\"\
: 0.24169184290030213,\n \"exact_match_stderr,none\": 0.010888216300269935,\n\
\ \"alias\": \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.44625407166123776,\n \"exact_match_stderr,none\": 0.028417486054945495\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.2032520325203252,\n \"exact_match_stderr,none\": 0.03643325851749072\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.09090909090909091,\n \"exact_match_stderr,none\"\
: 0.0251172256361608\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.07142857142857142,\n \"exact_match_stderr,none\"\
: 0.015418479185779361\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.22077922077922077,\n \"exact_match_stderr,none\": 0.033532323343787154\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.43523316062176165,\n \"exact_match_stderr,none\"\
: 0.03578038165008584\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.05925925925925926,\n \"exact_match_stderr,none\": 0.02039673654232189\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.43267952127659576,\n \"acc_stderr,none\": 0.004516963042571534\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4166666666666667,\n\
\ \"acc_norm_stderr,none\": 0.017479222590443398,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.27734375,\n \"acc_norm_stderr,none\": 0.02803528549328419\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.42,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ }\n}\n```"
repo_url: https://huggingface.co/ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_ifeval
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-44-53.382689.jsonl'
- config_name: ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T13_44_53.382689
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-44-53.382689.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-44-53.382689.jsonl'
---
# Dataset Card for Evaluation run of ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B](https://huggingface.co/ZeroXClem/Qwen-2.5-Aether-SlerpFusion-7B)
The dataset is composed of 38 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B-details",
name="ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
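Each per-task configuration stores raw per-sample records as JSON lines, and the exact fields vary by task. A quick way to see what a given task logs is to print the keys of a few records (a sketch, continuing from the `data` object loaded above):
```python
# Field names follow lm-evaluation-harness conventions and differ between tasks.
for sample in data.select(range(3)):
    print(sorted(sample.keys()))
```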
## Latest results
These are the [latest results from run 2024-11-21T13-44-53.382689](https://huggingface.co/datasets/open-llm-leaderboard/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B-details/blob/main/ZeroXClem__Qwen-2.5-Aether-SlerpFusion-7B/results_2024-11-21T13-44-53.382689.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_loose_acc,none": 0.5970425138632163,
"prompt_level_loose_acc_stderr,none": 0.02110743025673167,
"exact_match,none": 0.24169184290030213,
"exact_match_stderr,none": 0.010888216300269935,
"inst_level_strict_acc,none": 0.6774580335731415,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.43267952127659576,
"acc_stderr,none": 0.004516963042571534,
"inst_level_loose_acc,none": 0.6990407673860911,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.49306005967051497,
"acc_norm_stderr,none": 0.005306328123826936,
"prompt_level_strict_acc,none": 0.5748613678373382,
"prompt_level_strict_acc_stderr,none": 0.02127403980535566,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5433084533935081,
"acc_norm_stderr,none": 0.006133836853018902,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5882352941176471,
"acc_norm_stderr,none": 0.036086405630856196
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.804,
"acc_norm_stderr,none": 0.025156857313255922
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5547945205479452,
"acc_norm_stderr,none": 0.04127264774457449
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7303370786516854,
"acc_norm_stderr,none": 0.03335689818443925
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013265362908440905,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.31313131313131315,
"acc_norm_stderr,none": 0.033042050878136546
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3125,
"acc_norm_stderr,none": 0.021923384489444957
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5748613678373382,
"prompt_level_strict_acc_stderr,none": 0.021274039805355655,
"inst_level_strict_acc,none": 0.6774580335731415,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5970425138632163,
"prompt_level_loose_acc_stderr,none": 0.02110743025673167,
"inst_level_loose_acc,none": 0.6990407673860911,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.24169184290030213,
"exact_match_stderr,none": 0.010888216300269935,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.44625407166123776,
"exact_match_stderr,none": 0.028417486054945495
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.015418479185779361
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.22077922077922077,
"exact_match_stderr,none": 0.033532323343787154
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.43523316062176165,
"exact_match_stderr,none": 0.03578038165008584
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43267952127659576,
"acc_stderr,none": 0.004516963042571534
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.017479222590443398,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
}
},
"leaderboard": {
"prompt_level_loose_acc,none": 0.5970425138632163,
"prompt_level_loose_acc_stderr,none": 0.02110743025673167,
"exact_match,none": 0.24169184290030213,
"exact_match_stderr,none": 0.010888216300269935,
"inst_level_strict_acc,none": 0.6774580335731415,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.43267952127659576,
"acc_stderr,none": 0.004516963042571534,
"inst_level_loose_acc,none": 0.6990407673860911,
"inst_level_loose_acc_stderr,none": "N/A",
"acc_norm,none": 0.49306005967051497,
"acc_norm_stderr,none": 0.005306328123826936,
"prompt_level_strict_acc,none": 0.5748613678373382,
"prompt_level_strict_acc_stderr,none": 0.02127403980535566,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5433084533935081,
"acc_norm_stderr,none": 0.006133836853018902,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.864,
"acc_norm_stderr,none": 0.021723342617052086
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5882352941176471,
"acc_norm_stderr,none": 0.036086405630856196
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.62,
"acc_norm_stderr,none": 0.030760116042626098
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.6,
"acc_norm_stderr,none": 0.031046021028253316
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.46,
"acc_norm_stderr,none": 0.031584653891499004
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.528,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.804,
"acc_norm_stderr,none": 0.025156857313255922
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.692,
"acc_norm_stderr,none": 0.02925692860650181
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.716,
"acc_norm_stderr,none": 0.028576958730437443
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5547945205479452,
"acc_norm_stderr,none": 0.04127264774457449
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.544,
"acc_norm_stderr,none": 0.031563285061213475
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.7303370786516854,
"acc_norm_stderr,none": 0.03335689818443925
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.764,
"acc_norm_stderr,none": 0.026909337594953852
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.336,
"acc_norm_stderr,none": 0.02993325909419153
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.26,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013265362908440905,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.31313131313131315,
"acc_norm_stderr,none": 0.033042050878136546
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.28205128205128205,
"acc_norm_stderr,none": 0.019275803929950375
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.3125,
"acc_norm_stderr,none": 0.021923384489444957
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.5748613678373382,
"prompt_level_strict_acc_stderr,none": 0.021274039805355655,
"inst_level_strict_acc,none": 0.6774580335731415,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.5970425138632163,
"prompt_level_loose_acc_stderr,none": 0.02110743025673167,
"inst_level_loose_acc,none": 0.6990407673860911,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.24169184290030213,
"exact_match_stderr,none": 0.010888216300269935,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.44625407166123776,
"exact_match_stderr,none": 0.028417486054945495
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.2032520325203252,
"exact_match_stderr,none": 0.03643325851749072
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.09090909090909091,
"exact_match_stderr,none": 0.0251172256361608
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.07142857142857142,
"exact_match_stderr,none": 0.015418479185779361
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.22077922077922077,
"exact_match_stderr,none": 0.033532323343787154
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.43523316062176165,
"exact_match_stderr,none": 0.03578038165008584
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.05925925925925926,
"exact_match_stderr,none": 0.02039673654232189
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.43267952127659576,
"acc_stderr,none": 0.004516963042571534
},
"leaderboard_musr": {
"acc_norm,none": 0.4166666666666667,
"acc_norm_stderr,none": 0.017479222590443398,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.42,
"acc_norm_stderr,none": 0.03127799950463661
}
}
```
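Each `*_stderr,none` field in the results above is the standard error of the matching score, so a rough 95% confidence interval can be read off as mean ± 1.96 × stderr (normal approximation). A minimal sketch, using the aggregate BBH numbers reported above (the `ci95` helper is illustrative, not part of the evaluation tooling):
```python
# Rough 95% confidence interval from a reported score and its standard error,
# using the normal approximation (mean +/- 1.96 * stderr).
def ci95(mean: float, stderr: float) -> tuple[float, float]:
    half_width = 1.96 * stderr
    return mean - half_width, mean + half_width

# Aggregate BBH acc_norm and its stderr, copied from the JSON above
print(ci95(0.5433084533935081, 0.006133836853018902))
# -> approximately (0.5313, 0.5553)
```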
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details | open-llm-leaderboard | "2024-11-21T13:49:00Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:45:15Z" | ---
pretty_name: Evaluation run of AALF/FuseChat-Llama-3.1-8B-SFT-preview
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [AALF/FuseChat-Llama-3.1-8B-SFT-preview](https://huggingface.co/AALF/FuseChat-Llama-3.1-8B-SFT-preview)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run. The \"train\" split always points to the latest\
\ results.\n\nAn additional configuration \"results\" stores all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details\"\
,\n\tname=\"AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-21T13-45-14.605333](https://huggingface.co/datasets/open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details/blob/main/AALF__FuseChat-Llama-3.1-8B-SFT-preview/results_2024-11-21T13-45-14.605333.json)\
\ (note that there might be results for other tasks in the repository if successive\
\ evals didn't cover the same tasks; you can find each one in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"acc_norm,none\": 0.476585808794915,\n \"acc_norm_stderr,none\"\
: 0.0053560289532095535,\n \"prompt_level_strict_acc,none\": 0.6839186691312384,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.020008050377239083,\n \
\ \"prompt_level_loose_acc,none\": 0.7134935304990758,\n \"\
prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \"inst_level_loose_acc,none\"\
: 0.7985611510791367,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\
,\n \"inst_level_strict_acc,none\": 0.7721822541966427,\n \
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc,none\": 0.37433510638297873,\n\
\ \"acc_stderr,none\": 0.004412150413939243,\n \"exact_match,none\"\
: 0.11404833836858005,\n \"exact_match_stderr,none\": 0.008442541000689963,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.522131574379448,\n \"acc_norm_stderr,none\"\
: 0.006222219134039073,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.824,\n\
\ \"acc_norm_stderr,none\": 0.024133497525457123\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6042780748663101,\n \"acc_norm_stderr,none\"\
: 0.035855600715925424\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.644,\n\
\ \"acc_norm_stderr,none\": 0.0303436806571532\n },\n \"\
leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\":\
\ 0.03148684942554571\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.7,\n \
\ \"acc_norm_stderr,none\": 0.029040893477575786\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.384,\n \"acc_norm_stderr,none\":\
\ 0.030821679117375447\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\":\
\ 0.029752391824475363\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.5,\n \"acc_norm_stderr,none\": 0.041522739926869986\n },\n\
\ \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \"\
alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n \"\
acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.564,\n \
\ \"acc_norm_stderr,none\": 0.03142556706028136\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\":\
\ 0.03167708558254714\n },\n \"leaderboard_bbh_snarks\": {\n \
\ \"alias\": \" - leaderboard_bbh_snarks\",\n \"acc_norm,none\"\
: 0.6404494382022472,\n \"acc_norm_stderr,none\": 0.03606913914074032\n\
\ },\n \"leaderboard_bbh_sports_understanding\": {\n \"\
alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.776,\n \"acc_norm_stderr,none\": 0.026421361687347884\n },\n\
\ \"leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" -\
\ leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.448,\n\
\ \"acc_norm_stderr,none\": 0.03151438761115349\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\":\
\ 0.02572139890141637\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\":\
\ 0.030491555220405475\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.30453020134228187,\n\
\ \"acc_norm_stderr,none\": 0.013343363032004148,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.32323232323232326,\n \"acc_norm_stderr,none\": 0.03332299921070644\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.3058608058608059,\n\
\ \"acc_norm_stderr,none\": 0.019737263843674822\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29464285714285715,\n \"acc_norm_stderr,none\"\
: 0.021562481080109767\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.6839186691312384,\n \"prompt_level_strict_acc_stderr,none\": 0.020008050377239083,\n\
\ \"inst_level_strict_acc,none\": 0.7721822541966427,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.7134935304990758,\n \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n\
\ \"inst_level_loose_acc,none\": 0.7985611510791367,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.11404833836858005,\n \"exact_match_stderr,none\"\
: 0.008442541000689963,\n \"alias\": \" - leaderboard_math_hard\"\n \
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.21498371335504887,\n\
\ \"exact_match_stderr,none\": 0.0234845044411588\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \" -\
\ leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\":\
\ 0.056910569105691054,\n \"exact_match_stderr,none\": 0.020974566219895126\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.07575757575757576,\n\
\ \"exact_match_stderr,none\": 0.023119068741795586\n },\n \
\ \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\":\
\ \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.017857142857142856,\n \"exact_match_stderr,none\": 0.007928503387888855\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\"\
: \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\": 0.06493506493506493,\n\
\ \"exact_match_stderr,none\": 0.01992116854149014\n },\n \
\ \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.22797927461139897,\n \"exact_match_stderr,none\"\
: 0.030276909945178256\n },\n \"leaderboard_math_precalculus_hard\"\
: {\n \"alias\": \" - leaderboard_math_precalculus_hard\",\n \
\ \"exact_match,none\": 0.06666666666666667,\n \"exact_match_stderr,none\"\
: 0.021548664505181808\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.37433510638297873,\n\
\ \"acc_stderr,none\": 0.004412150413939243\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.4007936507936508,\n \"acc_norm_stderr,none\"\
: 0.017089741838981102,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\":\
\ \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.592,\n\
\ \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.25,\n \"acc_norm_stderr,none\": 0.02711630722733202\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\"\
: \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\": 0.364,\n\
\ \"acc_norm_stderr,none\": 0.030491555220405475\n }\n },\n\
\ \"leaderboard\": {\n \"acc_norm,none\": 0.476585808794915,\n \
\ \"acc_norm_stderr,none\": 0.0053560289532095535,\n \"prompt_level_strict_acc,none\"\
: 0.6839186691312384,\n \"prompt_level_strict_acc_stderr,none\": 0.020008050377239083,\n\
\ \"prompt_level_loose_acc,none\": 0.7134935304990758,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01945652858321169,\n \"inst_level_loose_acc,none\": 0.7985611510791367,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"inst_level_strict_acc,none\"\
: 0.7721822541966427,\n \"inst_level_strict_acc_stderr,none\": \"N/A\",\n\
\ \"acc,none\": 0.37433510638297873,\n \"acc_stderr,none\": 0.004412150413939243,\n\
\ \"exact_match,none\": 0.11404833836858005,\n \"exact_match_stderr,none\"\
: 0.008442541000689963,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.522131574379448,\n \"acc_norm_stderr,none\"\
: 0.006222219134039073,\n \"alias\": \" - leaderboard_bbh\"\n },\n \
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.824,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.6042780748663101,\n \"acc_norm_stderr,none\"\
: 0.035855600715925424\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.512,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.644,\n \"acc_norm_stderr,none\": 0.0303436806571532\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.316,\n \"acc_norm_stderr,none\": 0.029462657598578648\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.7,\n \"acc_norm_stderr,none\": 0.029040893477575786\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.384,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.64,\n \"acc_norm_stderr,none\": 0.03041876402517494\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.608,\n \"acc_norm_stderr,none\": 0.030938207620401222\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.34,\n \"acc_norm_stderr,none\": 0.030020073605457873\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.5,\n \
\ \"acc_norm_stderr,none\": 0.041522739926869986\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.672,\n \"acc_norm_stderr,none\": 0.029752391824475363\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6404494382022472,\n \"acc_norm_stderr,none\"\
: 0.03606913914074032\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.776,\n \"acc_norm_stderr,none\": 0.026421361687347884\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.448,\n \"acc_norm_stderr,none\": 0.03151438761115349\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.2,\n \"acc_norm_stderr,none\": 0.02534897002097912\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.208,\n \"acc_norm_stderr,none\": 0.02572139890141637\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.52,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.30453020134228187,\n\
\ \"acc_norm_stderr,none\": 0.013343363032004148,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.32323232323232326,\n\
\ \"acc_norm_stderr,none\": 0.03332299921070644\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.3058608058608059,\n \"acc_norm_stderr,none\": 0.019737263843674822\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.29464285714285715,\n \"acc_norm_stderr,none\"\
: 0.021562481080109767\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.6839186691312384,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.020008050377239083,\n \
\ \"inst_level_strict_acc,none\": 0.7721822541966427,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.7134935304990758,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01945652858321169,\n \"inst_level_loose_acc,none\"\
: 0.7985611510791367,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.11404833836858005,\n\
\ \"exact_match_stderr,none\": 0.008442541000689963,\n \"alias\":\
\ \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.21498371335504887,\n \"exact_match_stderr,none\": 0.0234845044411588\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.056910569105691054,\n \"exact_match_stderr,none\": 0.020974566219895126\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.07575757575757576,\n \"exact_match_stderr,none\"\
: 0.023119068741795586\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.017857142857142856,\n \"exact_match_stderr,none\"\
: 0.007928503387888855\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.06493506493506493,\n \"exact_match_stderr,none\": 0.01992116854149014\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.22797927461139897,\n \"exact_match_stderr,none\"\
: 0.030276909945178256\n },\n \"leaderboard_math_precalculus_hard\": {\n \
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.06666666666666667,\n \"exact_match_stderr,none\": 0.021548664505181808\n\
\ },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \" - leaderboard_mmlu_pro\"\
,\n \"acc,none\": 0.37433510638297873,\n \"acc_stderr,none\": 0.004412150413939243\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.4007936507936508,\n\
\ \"acc_norm_stderr,none\": 0.017089741838981102,\n \"alias\": \"\
\ - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\": {\n \
\ \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\"\
: 0.592,\n \"acc_norm_stderr,none\": 0.03114520984654851\n },\n \"\
leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.25,\n \"acc_norm_stderr,none\": 0.02711630722733202\n\
\ },\n \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.364,\n \"acc_norm_stderr,none\": 0.030491555220405475\n\
\ }\n}\n```"
repo_url: https://huggingface.co/AALF/FuseChat-Llama-3.1-8B-SFT-preview
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_navigate
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_snarks
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_gpqa_extended
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_gpqa_main
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_ifeval
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_mmlu_pro
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_musr_object_placements
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-21T13-45-14.605333.jsonl'
- config_name: AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_21T13_45_14.605333
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-45-14.605333.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-21T13-45-14.605333.jsonl'
---
# Dataset Card for Evaluation run of AALF/FuseChat-Llama-3.1-8B-SFT-preview
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AALF/FuseChat-Llama-3.1-8B-SFT-preview](https://huggingface.co/AALF/FuseChat-Llama-3.1-8B-SFT-preview)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details",
name="AALF__FuseChat-Llama-3.1-8B-SFT-preview__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
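Because every evaluated task gets its own configuration, you can also enumerate the available configurations programmatically instead of typing names by hand. A minimal sketch, assuming the `datasets` library's `get_dataset_config_names` helper:
```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details"

# Enumerate the 38 per-task configurations in this repository
configs = get_dataset_config_names(repo)
print(len(configs), "configurations")

# Load one of them; besides "latest", each run is also reachable through
# its timestamped split (here "2024_11_21T13_45_14.605333")
data = load_dataset(repo, name=configs[0], split="latest")
print(data[0])
```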
## Latest results
These are the [latest results from run 2024-11-21T13-45-14.605333](https://huggingface.co/datasets/open-llm-leaderboard/AALF__FuseChat-Llama-3.1-8B-SFT-preview-details/blob/main/AALF__FuseChat-Llama-3.1-8B-SFT-preview/results_2024-11-21T13-45-14.605333.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"acc_norm,none": 0.476585808794915,
"acc_norm_stderr,none": 0.0053560289532095535,
"prompt_level_strict_acc,none": 0.6839186691312384,
"prompt_level_strict_acc_stderr,none": 0.020008050377239083,
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7985611510791367,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.7721822541966427,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.37433510638297873,
"acc_stderr,none": 0.004412150413939243,
"exact_match,none": 0.11404833836858005,
"exact_match_stderr,none": 0.008442541000689963,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.522131574379448,
"acc_norm_stderr,none": 0.006222219134039073,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6042780748663101,
"acc_norm_stderr,none": 0.035855600715925424
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.384,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.041522739926869986
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6404494382022472,
"acc_norm_stderr,none": 0.03606913914074032
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.776,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.30453020134228187,
"acc_norm_stderr,none": 0.013343363032004148,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.32323232323232326,
"acc_norm_stderr,none": 0.03332299921070644
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3058608058608059,
"acc_norm_stderr,none": 0.019737263843674822
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29464285714285715,
"acc_norm_stderr,none": 0.021562481080109767
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6839186691312384,
"prompt_level_strict_acc_stderr,none": 0.020008050377239083,
"inst_level_strict_acc,none": 0.7721822541966427,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7985611510791367,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11404833836858005,
"exact_match_stderr,none": 0.008442541000689963,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.21498371335504887,
"exact_match_stderr,none": 0.0234845044411588
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.056910569105691054,
"exact_match_stderr,none": 0.020974566219895126
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.22797927461139897,
"exact_match_stderr,none": 0.030276909945178256
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.06666666666666667,
"exact_match_stderr,none": 0.021548664505181808
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37433510638297873,
"acc_stderr,none": 0.004412150413939243
},
"leaderboard_musr": {
"acc_norm,none": 0.4007936507936508,
"acc_norm_stderr,none": 0.017089741838981102,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.25,
"acc_norm_stderr,none": 0.02711630722733202
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
}
},
"leaderboard": {
"acc_norm,none": 0.476585808794915,
"acc_norm_stderr,none": 0.0053560289532095535,
"prompt_level_strict_acc,none": 0.6839186691312384,
"prompt_level_strict_acc_stderr,none": 0.020008050377239083,
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7985611510791367,
"inst_level_loose_acc_stderr,none": "N/A",
"inst_level_strict_acc,none": 0.7721822541966427,
"inst_level_strict_acc_stderr,none": "N/A",
"acc,none": 0.37433510638297873,
"acc_stderr,none": 0.004412150413939243,
"exact_match,none": 0.11404833836858005,
"exact_match_stderr,none": 0.008442541000689963,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.522131574379448,
"acc_norm_stderr,none": 0.006222219134039073,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.824,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.6042780748663101,
"acc_norm_stderr,none": 0.035855600715925424
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.512,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.644,
"acc_norm_stderr,none": 0.0303436806571532
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.316,
"acc_norm_stderr,none": 0.029462657598578648
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.7,
"acc_norm_stderr,none": 0.029040893477575786
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.384,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.64,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.608,
"acc_norm_stderr,none": 0.030938207620401222
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.34,
"acc_norm_stderr,none": 0.030020073605457873
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.5,
"acc_norm_stderr,none": 0.041522739926869986
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.672,
"acc_norm_stderr,none": 0.029752391824475363
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6404494382022472,
"acc_norm_stderr,none": 0.03606913914074032
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.776,
"acc_norm_stderr,none": 0.026421361687347884
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.448,
"acc_norm_stderr,none": 0.03151438761115349
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.2,
"acc_norm_stderr,none": 0.02534897002097912
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.208,
"acc_norm_stderr,none": 0.02572139890141637
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.52,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_gpqa": {
"acc_norm,none": 0.30453020134228187,
"acc_norm_stderr,none": 0.013343363032004148,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.32323232323232326,
"acc_norm_stderr,none": 0.03332299921070644
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.3058608058608059,
"acc_norm_stderr,none": 0.019737263843674822
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.29464285714285715,
"acc_norm_stderr,none": 0.021562481080109767
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.6839186691312384,
"prompt_level_strict_acc_stderr,none": 0.020008050377239083,
"inst_level_strict_acc,none": 0.7721822541966427,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.7134935304990758,
"prompt_level_loose_acc_stderr,none": 0.01945652858321169,
"inst_level_loose_acc,none": 0.7985611510791367,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.11404833836858005,
"exact_match_stderr,none": 0.008442541000689963,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.21498371335504887,
"exact_match_stderr,none": 0.0234845044411588
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.056910569105691054,
"exact_match_stderr,none": 0.020974566219895126
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.07575757575757576,
"exact_match_stderr,none": 0.023119068741795586
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.017857142857142856,
"exact_match_stderr,none": 0.007928503387888855
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.06493506493506493,
"exact_match_stderr,none": 0.01992116854149014
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.22797927461139897,
"exact_match_stderr,none": 0.030276909945178256
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.06666666666666667,
"exact_match_stderr,none": 0.021548664505181808
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.37433510638297873,
"acc_stderr,none": 0.004412150413939243
},
"leaderboard_musr": {
"acc_norm,none": 0.4007936507936508,
"acc_norm_stderr,none": 0.017089741838981102,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.592,
"acc_norm_stderr,none": 0.03114520984654851
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.25,
"acc_norm_stderr,none": 0.02711630722733202
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.364,
"acc_norm_stderr,none": 0.030491555220405475
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_50 | HamdanXI | "2024-11-21T14:01:15Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T13:59:37Z" | ---
dataset_info:
features:
- name: audio_clip
sequence: float64
- name: layer0_prediction
sequence: float64
- name: predicted_text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 2646635380
num_examples: 100
download_size: 1948402683
dataset_size: 2646635380
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_50"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
SeppeV/joke_gen_of_mistral_ft_mean_score_dpo_w_ex_reasoning_prompt_wo_ex | SeppeV | "2024-11-21T14:02:32Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:02:30Z" | ---
dataset_info:
features:
- name: jokeText
dtype: string
- name: userId
dtype: int64
splits:
- name: train
num_bytes: 173754
num_examples: 125
download_size: 90163
dataset_size: 173754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
procit007/treated_0.2 | procit007 | "2024-11-21T14:07:12Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:05:30Z" | ---
dataset_info:
features:
- name: gender
dtype: string
- name: accent
dtype: string
- name: speaker_id
dtype: int64
- name: speaker_name
dtype: string
- name: text
dtype: string
- name: normalized_text
dtype: string
- name: audio
dtype: audio
- name: treated
dtype: bool
- name: metrics
struct:
- name: clipping_ratio
dtype: float64
- name: duration
dtype: float64
- name: is_valid
dtype: bool
- name: rms_energy
dtype: float64
- name: sample_rate
dtype: int64
- name: silence_ratio
dtype: float64
- name: snr
dtype: float64
splits:
- name: train
num_bytes: 3172095162.0
num_examples: 10000
download_size: 2973864857
dataset_size: 3172095162.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yuri-no/onboarding_introduction_metaphors | yuri-no | "2024-11-21T14:12:30Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:12:28Z" | ---
dataset_info:
features:
- name: value
dtype: string
- name: definition
dtype: string
- name: metaphor
dtype: string
splits:
- name: train
num_bytes: 85308
num_examples: 116
download_size: 48998
dataset_size: 85308
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HamdanXI/libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_59 | HamdanXI | "2024-11-21T14:18:28Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:17:56Z" | ---
dataset_info:
features:
- name: audio_clip
sequence: float64
- name: layer0_prediction
sequence: float64
- name: predicted_text
dtype: string
- name: speaker_id
dtype: string
splits:
- name: train
num_bytes: 952788826
num_examples: 36
download_size: 699798568
dataset_size: 952788826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "libriTTS_dev_wav2vec2_latent_layer0_2sec_PERFECT_chunk_59"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
dgambettaphd/P_wiki_doc5000_real96 | dgambettaphd | "2024-11-21T14:27:09Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:27:07Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2628049
num_examples: 5000
download_size: 1706149
dataset_size: 2628049
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_toxigen-data-test_zeroshot_curto_limiar3 | juliadollis | "2024-11-21T14:42:49Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:42:47Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: y_true
dtype: int64
splits:
- name: train
num_bytes: 393176
num_examples: 940
download_size: 85196
dataset_size: 393176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mrlyle/img-nov-21 | mrlyle | "2024-11-21T14:54:59Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T14:54:58Z" | ---
dataset_info:
features:
- name: image
dtype: image
splits:
- name: train
num_bytes: 8926.0
num_examples: 16
download_size: 9755
dataset_size: 8926.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
VladLoPG/alice_gpt | VladLoPG | "2024-11-21T15:12:11Z" | 5 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-21T15:12:10Z" | ---
license: apache-2.0
---
|
Metaskepsis/accept | Metaskepsis | "2024-11-21T15:17:39Z" | 5 | 0 | [
"region:us"
] | null | "2024-11-21T15:17:30Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: model_response
dtype: string
- name: is_correct
dtype: bool
- name: problem
dtype: string
- name: solution
dtype: string
splits:
- name: train
num_bytes: 106327060
num_examples: 26320
download_size: 46430332
dataset_size: 106327060
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nishadsinghi/MATH_GPT4o-mini_temp_0.7_128samples | nishadsinghi | "2024-11-21T15:18:37Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:18:34Z" | ---
dataset_info:
features:
- name: gt_answer
dtype: string
- name: prompt
dtype: string
- name: question
dtype: string
- name: samples
sequence: string
splits:
- name: train
num_bytes: 101824610
num_examples: 500
download_size: 28112425
dataset_size: 101824610
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
1231czx/ver2_rebuttal_eaf_rm_bon8_01 | 1231czx | "2024-11-21T15:39:37Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T15:39:36Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: generator
dtype: string
splits:
- name: train
num_bytes: 1346012
num_examples: 805
download_size: 812464
dataset_size: 1346012
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Priya098098/Raj | Priya098098 | "2024-11-21T15:47:12Z" | 5 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-21T15:47:12Z" | ---
license: apache-2.0
---
|
Tensorists/SD3_5_Turbo_images_combined | Tensorists | "2024-11-21T16:13:20Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:08:05Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Airplane
'1': Automobile
'2': Bird
'3': Cat
'4': Deer
'5': Dog
'6': Frog
'7': Horse
'8': Ship
'9': Truck
splits:
- name: train
num_bytes: 4472583093.935
num_examples: 2985
download_size: 4472859758
dataset_size: 4472583093.935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Tensorists/SD3_images | Tensorists | "2024-11-21T16:20:46Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:19:54Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': Dog
'1': Frog
'2': Horse
'3': Ship
'4': Truck
splits:
- name: train
num_bytes: 619872098.741
num_examples: 1489
download_size: 619156629
dataset_size: 619872098.741
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/mistral_toxigen-data-test_2fewshot_limiar3 | juliadollis | "2024-11-21T16:27:19Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:27:17Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: target_group
dtype: string
- name: factual?
dtype: string
- name: ingroup_effect
dtype: string
- name: lewd
dtype: string
- name: framing
dtype: string
- name: predicted_group
dtype: string
- name: stereotyping
dtype: string
- name: intent
dtype: float64
- name: toxicity_ai
dtype: float64
- name: toxicity_human
dtype: float64
- name: predicted_author
dtype: string
- name: actual_method
dtype: string
- name: is_toxic
dtype: int64
- name: predicted_is_toxic
dtype: int64
- name: y_true
dtype: int64
splits:
- name: train
num_bytes: 393176
num_examples: 940
download_size: 85177
dataset_size: 393176
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vinisf/Vinicin | Vinisf | "2024-11-21T16:41:48Z" | 5 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T16:41:17Z" | ---
license: openrail
---
|
dgambettaphd/P_wiki_doc10000_real64 | dgambettaphd | "2024-11-21T16:55:03Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T16:55:00Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 3528280
num_examples: 10000
download_size: 2306183
dataset_size: 3528280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jazzysnake01/quizgen-chat-lg | jazzysnake01 | "2024-11-21T17:41:28Z" | 5 | 2 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T17:41:22Z" | ---
dataset_info:
features:
- name: formatted_conversation
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 40246123
num_examples: 22738
- name: test
num_bytes: 5033874
num_examples: 2845
- name: validation
num_bytes: 5003205
num_examples: 2842
download_size: 15870318
dataset_size: 50283202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
GabrielML/SteamGRS | GabrielML | "2024-11-21T18:09:22Z" | 5 | 0 | [
"task_categories:text-classification",
"language:de",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2024-11-21T18:07:10Z" | ---
license: apache-2.0
task_categories:
- text-classification
language:
- de
size_categories:
- 1K<n<10K
---
Steam German Review Sentiment (SteamGRS): a German-language dataset of Steam game reviews for sentiment classification, created for seminar work (LoRA experiments). |
RyanYr/self-reflect_mini8Bit-t0_mistlarge-t12_om2-140k_binlabel_correction | RyanYr | "2024-11-21T18:20:00Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:19:54Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: response@0_correctness
dtype: bool
- name: response@2_correctness
dtype: bool
splits:
- name: train
num_bytes: 379427234
num_examples: 87678
download_size: 140157279
dataset_size: 379427234
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sumuks/e1v0.1-single-shot-questions-deduplicated | sumuks | "2024-11-21T18:40:09Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:40:07Z" | ---
dataset_info:
features:
- name: chunk_uuid
dtype: string
- name: generator_model
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: document_analysis
dtype: string
- name: chunk_analysis
dtype: string
- name: potential_question_directions
sequence: string
- name: best_direction
dtype: string
- name: reasoning
dtype: string
- name: estimated_difficulty
dtype: int64
- name: testable_concepts
sequence: string
- name: difficulty_justification
dtype: string
- name: quote_context
dtype: string
- name: supporting_quotes
sequence: string
splits:
- name: train
num_bytes: 464953
num_examples: 249
download_size: 206005
dataset_size: 464953
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sumuks/e1v0.1-single-shot-questions-multihop-deduplicated | sumuks | "2024-11-21T18:53:46Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T18:53:43Z" | ---
dataset_info:
features:
- name: chunk_ids
sequence: string
- name: generator_model
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: document_analysis
dtype: string
- name: chunk_analysis
sequence: string
- name: potential_question_directions
sequence: string
- name: best_direction
dtype: string
- name: reasoning
dtype: string
- name: estimated_difficulty
dtype: int64
- name: testable_concepts
sequence: string
- name: difficulty_justification
dtype: string
- name: quote_context
dtype: string
- name: supporting_quotes
sequence: string
splits:
- name: train
num_bytes: 106858
num_examples: 38
download_size: 60376
dataset_size: 106858
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
neoneye/simon-arc-solve-scale-v8 | neoneye | "2024-11-21T21:12:29Z" | 5 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-21T21:11:08Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve scale version 8
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI tasks where the images get scaled up/down in both the x and y directions.
example count: 2-4.
test count: 1-2.
image size: 3-10.
scale factor: 1-3.
# Version 2
image size: 1-20.
scale factor: 1-7.
# Version 3
image size: 1-30.
scale factor: 1-7.
# Version 4
Added a small amount of noise to the images.
image size: 1-10.
scale factor: 1-7.
Only scale down.
Number of noise pixels per pixel cell: 0-2.
# Version 5
More noisy images for downscaling.
image size: 1-12.
Number of noise pixels per pixel cell: 0 up to half the cell.
# Version 6
Earlier predictions added to some of the rows.
# Version 7
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 8
Replaced RLE compressed response with raw pixel response.
image size: 1-5.
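For intuition, a minimal sketch of the kind of integer up/down scaling these tasks exercise (nearest-neighbour on a 2D grid; this helper is illustrative only and is not the generator used to build the dataset):
```python
import numpy as np

def scale_grid(grid: np.ndarray, fx: int, fy: int, up: bool = True) -> np.ndarray:
    """Nearest-neighbour integer scaling of a 2D pixel grid."""
    if up:
        # Repeat rows fy times and columns fx times to upscale.
        return np.repeat(np.repeat(grid, fy, axis=0), fx, axis=1)
    # Downscale by keeping the top-left pixel of each fx-by-fy cell.
    return grid[::fy, ::fx]

g = np.array([[1, 2], [3, 4]])
big = scale_grid(g, 2, 2)                 # 4x4 upscaled grid
small = scale_grid(big, 2, 2, up=False)   # back to the original 2x2
```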
|
Harshgup16/finetuning_laptop_recommendation | Harshgup16 | "2024-11-21T21:16:57Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T21:16:55Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 14940
num_examples: 22
download_size: 8251
dataset_size: 14940
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/ner_stf_fuzzy | juliadollis | "2024-11-21T21:37:15Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T21:34:37Z" | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: ner_results
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
- name: desambiguacao
list:
- name: class
dtype: string
- name: count
dtype: int64
- name: elements
sequence: string
- name: entity
dtype: string
splits:
- name: train
num_bytes: 6654843253
num_examples: 78477
download_size: 1425175145
dataset_size: 6654843253
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ydmztang/cardinality | ydmztang | "2024-11-21T21:41:02Z" | 5 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-21T21:40:22Z" | ---
license: mit
---
|
IntelligentEstate/The_Key | IntelligentEstate | "2024-11-21T21:48:45Z" | 5 | 0 | [
"language:en",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2024-11-21T21:44:18Z" | ---
license: cc-by-nc-4.0
language:
- en
---
** The Key is a unique spatial-awareness dataset with problem-solving in diverse scenarios and an inferred reflection in the form of a haiku --This is experimental--
---
** For commercial use, please get in touch with intelligentestate@gmail.com; for all else, enjoy, and please respond with results.
--- |
DeveloperBishal/LLama3.2_Fine-Tune_web-development | DeveloperBishal | "2024-11-21T22:01:08Z" | 5 | 0 | [
"language:en",
"license:llama3.2",
"size_categories:1M<n<10M",
"region:us",
"code"
] | null | "2024-11-21T21:57:42Z" | ---
license: llama3.2
language:
- en
tags:
- code
pretty_name: Web-Dev-Fine-Tune
size_categories:
- 1M<n<10M
--- |
huggingface/my-distiset-555f6edc | huggingface | "2024-11-21T22:50:05Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-11-21T22:50:02Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: system_prompt
dtype: string
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 2521
num_examples: 1
download_size: 13149
dataset_size: 2521
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-555f6edc
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/huggingface/my-distiset-555f6edc/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/huggingface/my-distiset-555f6edc/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"completion": "To create a comprehensive dataset for a personal finance app, I recommend including the following essential data fields:\n\n1. **User Information**:\n * User ID (unique identifier)\n * User name\n * Email address\n * Password (hashed for security)\n * Date of birth\n2. **Financial Accounts**:\n * Account type (checking, savings, credit card, loan, investment, etc.)\n * Account number (masked for security)\n * Account balance (updated periodically)\n * Interest rate (if applicable)\n * Minimum payment (if applicable)\n3. **Transactions**:\n * Transaction ID (unique identifier)\n * Date\n * Description\n * Category (income, expense, transfer, etc.)\n * Type (deposit, withdrawal, payment, etc.)\n * Amount\n * Currency\n4. **Income**:\n * Income source (job, investment, etc.)\n * Income amount\n * Frequency (monthly, bi-weekly, etc.)\n * Start date\n5. **Expenses**:\n * Expense category (housing, food, transportation, etc.)\n * Expense amount\n * Frequency (monthly, bi-weekly, etc.)\n * Due date\n6. **Budgeting**:\n * Budget category (income, expenses, savings, debt repayment, etc.)\n * Budget amount\n * Target percentage\n7. **Debt**:\n * Debt type (credit card, loan, etc.)\n * Balance\n * Interest rate\n * Minimum payment\n * Due date\n8. **Savings**:\n * Savings goal (emergency fund, retirement, etc.)\n * Target amount\n * Current savings amount\n * Progress percentage\n9. **Notifications**:\n * Notification type (expense alert, payment due, etc.)\n * Trigger date\n * Description\n10. **Settings**:\n * Currency\n * Currency symbol\n * Language\n * Unit of measurement (e.g., pounds, dollars, etc.)",
"prompt": "I\u0027d like to create a dataset for a personal finance app to help users manage their budget and expenses. What are the essential data fields I should include? \n\n",
"system_prompt": "You are an AI assistant responsible for building comprehensive datasets for various purposes. Your purpose is to assist users in designing and generating datasets that accurately reflect their requirements, needs, and objectives. Offer clear guidance on dataset creation, provide suggestions for relevant data points, and ensure the dataset is well-structured, organized, and scalable. Assist users in ensuring data quality, relevance, and consistency while anticipating potential data requirements and maintenance. User questions are direct and concise."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-555f6edc", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("huggingface/my-distiset-555f6edc")
```
</details>
|
WARSO46/ForgetRetainBooks | WARSO46 | "2024-11-22T00:29:08Z" | 5 | 0 | [
"language:en",
"license:mit",
"size_categories:1M<n<10M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-21T22:55:31Z" | ---
license: mit
language:
- en
---
# ForgetRetainBooks
This dataset is derived from NarrativeQA (Kocisky et al., 2018), a benchmark for evaluating reading comprehension and narrative understanding.
It extracts only the book content from the original NarrativeQA dataset.
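A minimal loading sketch (assuming the standard `datasets` API; the `train` split name is an assumption, as this card does not state it explicitly):
```python
from datasets import load_dataset

# Load the extracted book texts; the split name is an assumption.
books = load_dataset("WARSO46/ForgetRetainBooks", split="train")
print(books[0])
```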
## Citation
If you use this dataset, please also cite the original NarrativeQA dataset.
```Bibtex
@article{narrativeqa,
author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
Edward Grefenstette},
title = {The {NarrativeQA} Reading Comprehension Challenge},
journal = {Transactions of the Association for Computational Linguistics},
url = {https://TBD},
volume = {TBD},
year = {2018},
pages = {TBD},
}
``` |
TSOWatch/1001NightsBesiegedCity | TSOWatch | "2024-11-22T00:14:26Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:14:05Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsSeaSnake | TSOWatch | "2024-11-22T00:19:25Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:19:10Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsTreasureWisdom | TSOWatch | "2024-11-22T00:24:26Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:24:05Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsFirstThief | TSOWatch | "2024-11-22T00:25:06Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:24:54Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsSecondThief | TSOWatch | "2024-11-22T00:25:46Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:25:35Z" | ---
license: creativeml-openrail-m
---
|
TSOWatch/1001NightsThirdThief | TSOWatch | "2024-11-22T00:26:27Z" | 5 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T00:26:15Z" | ---
license: creativeml-openrail-m
---
|
nuprl-staging/training_classifier_intermediate_depth2 | nuprl-staging | "2024-11-22T01:03:19Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:03:17Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: prompt
dtype: string
- name: answer
dtype: string
- name: pythoncode
dtype: string
- name: depth0
dtype: string
- name: depth1
dtype: string
- name: depth2
dtype: string
- name: depth3
dtype: string
- name: depth4
dtype: string
- name: depth5
dtype: string
- name: depth6
dtype: string
- name: depth7
dtype: string
- name: depthn0
dtype: string
- name: depthn1
dtype: string
- name: depthn2
dtype: string
- name: depthn3
dtype: string
- name: depthn4
dtype: string
- name: depthn5
dtype: string
- name: depthn6
dtype: string
- name: depthn7
dtype: string
- name: num_nodes
dtype: int64
- name: num_edges
dtype: int64
- name: num_classes
dtype: int64
- name: path_length
dtype: int64
- name: num_cycle
dtype: int64
- name: correctness
dtype: bool
- name: one_correct
dtype: bool
splits:
- name: train
num_bytes: 4222364
num_examples: 3000
- name: test
num_bytes: 698164
num_examples: 500
download_size: 1082498
dataset_size: 4920528
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Tippawan/Finetune-mt-story-telling-221124-messages2 | Tippawan | "2024-11-22T01:42:19Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T01:42:17Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 1090514
num_examples: 5629
download_size: 370384
dataset_size: 1090514
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_18 | ZixuanKe | "2024-11-22T02:32:53Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:32:52Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 147908
num_examples: 24
download_size: 19032
dataset_size: 147908
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_23 | ZixuanKe | "2024-11-22T02:33:56Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:33:55Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 167913
num_examples: 33
download_size: 26156
dataset_size: 167913
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mhdang/image_unseen-fewshot_sc_ours_withjpg_num500 | mhdang | "2024-11-22T02:34:37Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:34:10Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1537893962
num_examples: 500
download_size: 1079067223
dataset_size: 1537893962
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_28 | ZixuanKe | "2024-11-22T02:35:30Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:35:29Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 216320
num_examples: 33
download_size: 37707
dataset_size: 216320
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_13 | ZixuanKe | "2024-11-22T02:36:46Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:36:45Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 195300
num_examples: 35
download_size: 28241
dataset_size: 195300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mhdang/image_seen_sc-userprofile_ours_withjpg_num500 | mhdang | "2024-11-22T02:38:17Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:37:48Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1471788174
num_examples: 500
download_size: 1146811656
dataset_size: 1471788174
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
mhdang/image_seen_sc_ours_withjpg_num500 | mhdang | "2024-11-22T02:39:13Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:38:44Z" | ---
dataset_info:
features:
- name: jpg_model_train
dtype: binary
- name: jpg_model_base
dtype: binary
- name: user_id
dtype: int64
- name: text
dtype: string
- name: emb
sequence:
sequence: float64
- name: preferred_image_uid_0
dtype: string
- name: dispreferred_image_uid_0
dtype: string
- name: caption_0
dtype: string
- name: preferred_image_uid_1
dtype: string
- name: dispreferred_image_uid_1
dtype: string
- name: caption_1
dtype: string
- name: preferred_image_uid_2
dtype: string
- name: dispreferred_image_uid_2
dtype: string
- name: caption_2
dtype: string
- name: preferred_image_uid_3
dtype: string
- name: dispreferred_image_uid_3
dtype: string
- name: caption_3
dtype: string
- name: class
dtype: int64
- name: __index_level_0__
dtype: int64
- name: user_description
dtype: string
- name: caption
dtype: string
- name: preferred_image_uid_0_jpg
dtype: binary
- name: preferred_image_uid_1_jpg
dtype: binary
- name: preferred_image_uid_2_jpg
dtype: binary
- name: preferred_image_uid_3_jpg
dtype: binary
- name: dispreferred_image_uid_0_jpg
dtype: binary
- name: dispreferred_image_uid_1_jpg
dtype: binary
- name: dispreferred_image_uid_2_jpg
dtype: binary
- name: dispreferred_image_uid_3_jpg
dtype: binary
splits:
- name: test
num_bytes: 1468520892
num_examples: 500
download_size: 1143544216
dataset_size: 1468520892
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
ifrah1/your_dataset_name | ifrah1 | "2024-11-22T02:55:18Z" | 5 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T02:55:16Z" | ---
dataset_info:
features:
- name: English
dtype: string
- name: Urdu
dtype: string
splits:
- name: train
num_bytes: 33059917
num_examples: 107317
download_size: 16879027
dataset_size: 33059917
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_32 | ZixuanKe | "2024-11-22T03:56:11Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T03:56:10Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 285573
num_examples: 50
download_size: 24937
dataset_size: 285573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/processed_image_unseen-fewshot_sc_ours_withjpg_num500 | Asap7772 | "2024-11-22T04:12:03Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:10:48Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
splits:
- name: test
num_bytes: 1119635843
num_examples: 500
download_size: 1100260023
dataset_size: 1119635843
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Asap7772/processed_image_seen_dpo-userprofile_ours_withjpg_num500 | Asap7772 | "2024-11-22T04:13:25Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:12:05Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
splits:
- name: test
num_bytes: 1062834635
num_examples: 500
download_size: 1056453709
dataset_size: 1062834635
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
kiranshivaraju/train_v4 | kiranshivaraju | "2024-11-22T04:44:35Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:40:55Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': bad
'1': good
splits:
- name: train
num_bytes: 162663726.26
num_examples: 1302
- name: test
num_bytes: 8203399.0
num_examples: 75
download_size: 139701507
dataset_size: 170867125.26
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
Added duplicate synthetic defect images.
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
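Absent further documentation, a minimal loading sketch (repo id, splits, and label names are taken from the YAML above; everything else is standard `datasets` usage):
```python
from datasets import load_dataset

ds = load_dataset("kiranshivaraju/train_v4")
sample = ds["train"][0]
# Label names come from the dataset_info above: 0 -> 'bad', 1 -> 'good'.
label_names = ds["train"].features["label"].names
print(label_names[sample["label"]])
```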
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_31 | ZixuanKe | "2024-11-22T04:43:31Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T04:43:29Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 943719
num_examples: 163
download_size: 87108
dataset_size: 943719
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
TwinDoc/test-multiple-lora-serving_nn_70k_summarization | TwinDoc | "2024-11-22T05:24:46Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T05:24:36Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 182313410
num_examples: 70000
download_size: 102699118
dataset_size: 182313410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/ar_patd_instruct | ahmedheakl | "2024-11-22T06:39:23Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:06:18Z" | ---
dataset_info:
features:
- name: category
dtype: string
- name: image
dtype: image
- name: content
list:
- name: author
dtype: string
- name: content
dtype: string
- name: id
dtype: string
- name: subtitle
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 8157751729
num_examples: 2550
download_size: 8124913276
dataset_size: 8157751729
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jassiyu/poetry-gutenberg5000 | jassiyu | "2024-11-22T06:16:10Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:16:07Z" | ---
dataset_info:
features:
- name: line
dtype: string
- name: gutenberg_id
dtype: int64
splits:
- name: train
num_bytes: 253970
num_examples: 5000
download_size: 180447
dataset_size: 253970
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Asap7772/processed_image_unseen-fewshot_dpo_ours_withjpg_num500_winrategpt-4o-mini | Asap7772 | "2024-11-22T06:35:02Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:34:07Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
- name: score
dtype: int64
- name: text_pref_first
dtype: string
- name: text_pref_second
dtype: string
splits:
- name: test
num_bytes: 1122938244
num_examples: 500
download_size: 874479072
dataset_size: 1122938244
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Asap7772/processed_image_unseen-fewshot_dpo-userprofile_ours_withjpg_num500_winrategpt-4o-mini | Asap7772 | "2024-11-22T06:35:43Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:34:45Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
- name: score
dtype: int64
- name: text_pref_first
dtype: string
- name: text_pref_second
dtype: string
splits:
- name: test
num_bytes: 1131766399
num_examples: 500
download_size: 883304082
dataset_size: 1131766399
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Asap7772/processed_image_unseen-fewshot_sc-userprofile_ours_withjpg_num500_winrategpt-4o-mini | Asap7772 | "2024-11-22T06:37:58Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T06:37:03Z" | ---
dataset_info:
features:
- name: user_id
dtype: int64
- name: caption
sequence: string
- name: split
dtype: string
- name: shot_id
dtype: int64
- name: preferred_image
sequence: binary
- name: dispreferred_image
sequence: binary
- name: preferred_image_uid
sequence: string
- name: dispreferred_image_uid
sequence: string
- name: score
dtype: int64
- name: text_pref_first
dtype: string
- name: text_pref_second
dtype: string
splits:
- name: test
num_bytes: 1121558293
num_examples: 500
download_size: 873096066
dataset_size: 1121558293
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
JJuny/llama2_SYC_mess_stopremoved_train | JJuny | "2024-11-22T07:13:16Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:13:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 298294
num_examples: 71
download_size: 115610
dataset_size: 298294
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JJuny/llama2_SYC_mess_stopremoved_eval | JJuny | "2024-11-22T07:13:31Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:13:25Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 6175
num_examples: 4
download_size: 14122
dataset_size: 6175
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HWISEONGSS/llm_ko_politec | HWISEONGSS | "2024-11-22T07:15:20Z" | 5 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-22T07:15:19Z" | ---
license: apache-2.0
---
|
open-llm-leaderboard/netcat420__MFANN3bv0.24-details | open-llm-leaderboard | "2024-11-22T07:25:14Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:21:47Z" | ---
pretty_name: Evaluation run of netcat420/MFANN3bv0.24
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [netcat420/MFANN3bv0.24](https://huggingface.co/netcat420/MFANN3bv0.24)\nThe dataset\
\ is composed of 38 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 1 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/netcat420__MFANN3bv0.24-details\"\
,\n\tname=\"netcat420__MFANN3bv0.24__leaderboard_bbh_boolean_expressions\",\n\t\
split=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from\
\ run 2024-11-22T07-21-46.503028](https://huggingface.co/datasets/open-llm-leaderboard/netcat420__MFANN3bv0.24-details/blob/main/netcat420__MFANN3bv0.24/results_2024-11-22T07-21-46.503028.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"inst_level_loose_acc,none\": 0.2997601918465228,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\",\n \"prompt_level_strict_acc,none\"\
: 0.15711645101663585,\n \"prompt_level_strict_acc_stderr,none\": 0.01566021568347361,\n\
\ \"acc,none\": 0.23520611702127658,\n \"acc_stderr,none\"\
: 0.0038667460057111433,\n \"inst_level_strict_acc,none\": 0.2829736211031175,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.40147879102347905,\n \"acc_norm_stderr,none\": 0.005245619055895268,\n\
\ \"prompt_level_loose_acc,none\": 0.16820702402957485,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01609655018806301,\n \"exact_match,none\"\
: 0.010574018126888218,\n \"exact_match_stderr,none\": 0.0028001672170684297,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.43759763929873285,\n \"acc_norm_stderr,none\"\
: 0.006137862286453202,\n \"alias\": \" - leaderboard_bbh\"\n \
\ },\n \"leaderboard_bbh_boolean_expressions\": {\n \"alias\"\
: \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.844,\n\
\ \"acc_norm_stderr,none\": 0.022995023034068682\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5775401069518716,\n \"acc_norm_stderr,none\"\
: 0.0362182402075336\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.612,\n\
\ \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\":\
\ 0.031603975145223735\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.556,\n \
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.404,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\":\
\ 0.03164968895968774\n },\n \"leaderboard_bbh_object_counting\":\
\ {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.3972602739726027,\n \"acc_norm_stderr,none\": 0.04063670403888034\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.488,\n \
\ \"acc_norm_stderr,none\": 0.03167708558254714\n },\n \"leaderboard_bbh_salient_translation_error_detection\"\
: {\n \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\"\
,\n \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.6348314606741573,\n\
\ \"acc_norm_stderr,none\": 0.03619005678691264\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\":\
\ 0.031636489531544396\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \
\ \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\":\
\ 0.024760377727750513\n },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\":\
\ \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\": 0.536,\n\
\ \"acc_norm_stderr,none\": 0.031603975145223735\n },\n \
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.25838926174496646,\n\
\ \"acc_norm_stderr,none\": 0.0126896026637416,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.23232323232323232,\n \"acc_norm_stderr,none\": 0.030088629490217445\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.25824175824175827,\n\
\ \"acc_norm_stderr,none\": 0.01874762138022973\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2700892857142857,\n \"acc_norm_stderr,none\"\
: 0.021000749078822437\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.15711645101663585,\n \"prompt_level_strict_acc_stderr,none\": 0.01566021568347361,\n\
\ \"inst_level_strict_acc,none\": 0.2829736211031175,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.16820702402957485,\n \"prompt_level_loose_acc_stderr,none\": 0.016096550188063007,\n\
\ \"inst_level_loose_acc,none\": 0.2997601918465228,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.010574018126888218,\n \"exact_match_stderr,none\"\
: 0.0028001672170684297,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.013029315960912053,\n\
\ \"exact_match_stderr,none\": 0.006482644725390246\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \
\ \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.007142857142857143,\n \"exact_match_stderr,none\"\
: 0.005041703051390571\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\": 0.03626943005181347,\n\
\ \"exact_match_stderr,none\": 0.013492659751295115\n },\n \
\ \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" - leaderboard_math_precalculus_hard\"\
,\n \"exact_match,none\": 0.007407407407407408,\n \"exact_match_stderr,none\"\
: 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\": {\n \
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.23520611702127658,\n\
\ \"acc_stderr,none\": 0.0038667460057111433\n },\n \"\
leaderboard_musr\": {\n \"acc_norm,none\": 0.35185185185185186,\n \
\ \"acc_norm_stderr,none\": 0.01653013512637592,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.32421875,\n\
\ \"acc_norm_stderr,none\": 0.029312444800629493\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.184,\n \"acc_norm_stderr,none\":\
\ 0.02455581299422255\n }\n },\n \"leaderboard\": {\n \"inst_level_loose_acc,none\"\
: 0.2997601918465228,\n \"inst_level_loose_acc_stderr,none\": \"N/A\",\n\
\ \"prompt_level_strict_acc,none\": 0.15711645101663585,\n \"prompt_level_strict_acc_stderr,none\"\
: 0.01566021568347361,\n \"acc,none\": 0.23520611702127658,\n \"acc_stderr,none\"\
: 0.0038667460057111433,\n \"inst_level_strict_acc,none\": 0.2829736211031175,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.40147879102347905,\n \"acc_norm_stderr,none\": 0.005245619055895268,\n\
\ \"prompt_level_loose_acc,none\": 0.16820702402957485,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01609655018806301,\n \"exact_match,none\": 0.010574018126888218,\n \
\ \"exact_match_stderr,none\": 0.0028001672170684297,\n \"alias\": \"\
leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.43759763929873285,\n\
\ \"acc_norm_stderr,none\": 0.006137862286453202,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.844,\n \"acc_norm_stderr,none\": 0.022995023034068682\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5775401069518716,\n \"acc_norm_stderr,none\"\
: 0.0362182402075336\n },\n \"leaderboard_bbh_date_understanding\": {\n \
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.344,\n \"acc_norm_stderr,none\": 0.03010450339231644\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.612,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.244,\n \"acc_norm_stderr,none\": 0.02721799546455311\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.556,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.396,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.404,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.476,\n \"acc_norm_stderr,none\": 0.03164968895968774\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.3972602739726027,\n\
\ \"acc_norm_stderr,none\": 0.04063670403888034\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.312,\n \"acc_norm_stderr,none\": 0.02936106757521985\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.36,\n \"acc_norm_stderr,none\": 0.03041876402517494\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6348314606741573,\n \"acc_norm_stderr,none\"\
: 0.03619005678691264\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.32,\n \"acc_norm_stderr,none\": 0.029561724955240978\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.536,\n \"acc_norm_stderr,none\": 0.031603975145223735\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.25838926174496646,\n\
\ \"acc_norm_stderr,none\": 0.0126896026637416,\n \"alias\": \" -\
\ leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"alias\"\
: \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.23232323232323232,\n\
\ \"acc_norm_stderr,none\": 0.030088629490217445\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.25824175824175827,\n \"acc_norm_stderr,none\": 0.01874762138022973\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.2700892857142857,\n \"acc_norm_stderr,none\"\
: 0.021000749078822437\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.15711645101663585,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.01566021568347361,\n \
\ \"inst_level_strict_acc,none\": 0.2829736211031175,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.16820702402957485,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.016096550188063007,\n \"inst_level_loose_acc,none\"\
: 0.2997601918465228,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.010574018126888218,\n\
\ \"exact_match_stderr,none\": 0.0028001672170684297,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.013029315960912053,\n \"exact_match_stderr,none\": 0.006482644725390246\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.007142857142857143,\n \"exact_match_stderr,none\"\
: 0.005041703051390571\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\"\
: 0.03626943005181347,\n \"exact_match_stderr,none\": 0.013492659751295115\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.007407407407407408,\n\
\ \"exact_match_stderr,none\": 0.007407407407407408\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.23520611702127658,\n\
\ \"acc_stderr,none\": 0.0038667460057111433\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.35185185185185186,\n \"acc_norm_stderr,none\"\
: 0.01653013512637592,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.548,\n \"acc_norm_stderr,none\": 0.03153986449255664\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.32421875,\n\
\ \"acc_norm_stderr,none\": 0.029312444800629493\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.184,\n \"acc_norm_stderr,none\": 0.02455581299422255\n }\n}\n```"
repo_url: https://huggingface.co/netcat420/MFANN3bv0.24
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_ifeval
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_ifeval_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T07-21-46.503028.jsonl'
- config_name: netcat420__MFANN3bv0.24__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T07_21_46.503028
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T07-21-46.503028.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T07-21-46.503028.jsonl'
---
# Dataset Card for Evaluation run of netcat420/MFANN3bv0.24
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [netcat420/MFANN3bv0.24](https://huggingface.co/netcat420/MFANN3bv0.24)
The dataset is composed of 38 configurations, each corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/netcat420__MFANN3bv0.24-details",
name="netcat420__MFANN3bv0.24__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
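
As a quick check that the split loaded as expected — a minimal sketch, assuming the snippet above ran successfully — you can inspect the size, columns, and first record of the returned `datasets.Dataset`:

```python
# Continuing from the snippet above: `data` is a datasets.Dataset with
# one row per evaluated sample for the chosen task.
print(data.num_rows)      # number of evaluated samples in the split
print(data.column_names)  # per-sample fields logged by the eval harness
print(data[0])            # first sample as a plain Python dict
```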
## Latest results
These are the [latest results from run 2024-11-22T07-21-46.503028](https://huggingface.co/datasets/open-llm-leaderboard/netcat420__MFANN3bv0.24-details/blob/main/netcat420__MFANN3bv0.24/results_2024-11-22T07-21-46.503028.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"inst_level_loose_acc,none": 0.2997601918465228,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.15711645101663585,
"prompt_level_strict_acc_stderr,none": 0.01566021568347361,
"acc,none": 0.23520611702127658,
"acc_stderr,none": 0.0038667460057111433,
"inst_level_strict_acc,none": 0.2829736211031175,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.40147879102347905,
"acc_norm_stderr,none": 0.005245619055895268,
"prompt_level_loose_acc,none": 0.16820702402957485,
"prompt_level_loose_acc_stderr,none": 0.01609655018806301,
"exact_match,none": 0.010574018126888218,
"exact_match_stderr,none": 0.0028001672170684297,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.43759763929873285,
"acc_norm_stderr,none": 0.006137862286453202,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.404,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3972602739726027,
"acc_norm_stderr,none": 0.04063670403888034
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6348314606741573,
"acc_norm_stderr,none": 0.03619005678691264
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.25838926174496646,
"acc_norm_stderr,none": 0.0126896026637416,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23232323232323232,
"acc_norm_stderr,none": 0.030088629490217445
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.25824175824175827,
"acc_norm_stderr,none": 0.01874762138022973
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2700892857142857,
"acc_norm_stderr,none": 0.021000749078822437
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.15711645101663585,
"prompt_level_strict_acc_stderr,none": 0.01566021568347361,
"inst_level_strict_acc,none": 0.2829736211031175,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.16820702402957485,
"prompt_level_loose_acc_stderr,none": 0.016096550188063007,
"inst_level_loose_acc,none": 0.2997601918465228,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.010574018126888218,
"exact_match_stderr,none": 0.0028001672170684297,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.013029315960912053,
"exact_match_stderr,none": 0.006482644725390246
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.03626943005181347,
"exact_match_stderr,none": 0.013492659751295115
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.23520611702127658,
"acc_stderr,none": 0.0038667460057111433
},
"leaderboard_musr": {
"acc_norm,none": 0.35185185185185186,
"acc_norm_stderr,none": 0.01653013512637592,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.32421875,
"acc_norm_stderr,none": 0.029312444800629493
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
}
},
"leaderboard": {
"inst_level_loose_acc,none": 0.2997601918465228,
"inst_level_loose_acc_stderr,none": "N/A",
"prompt_level_strict_acc,none": 0.15711645101663585,
"prompt_level_strict_acc_stderr,none": 0.01566021568347361,
"acc,none": 0.23520611702127658,
"acc_stderr,none": 0.0038667460057111433,
"inst_level_strict_acc,none": 0.2829736211031175,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.40147879102347905,
"acc_norm_stderr,none": 0.005245619055895268,
"prompt_level_loose_acc,none": 0.16820702402957485,
"prompt_level_loose_acc_stderr,none": 0.01609655018806301,
"exact_match,none": 0.010574018126888218,
"exact_match_stderr,none": 0.0028001672170684297,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.43759763929873285,
"acc_norm_stderr,none": 0.006137862286453202,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.844,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5775401069518716,
"acc_norm_stderr,none": 0.0362182402075336
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.344,
"acc_norm_stderr,none": 0.03010450339231644
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.612,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.244,
"acc_norm_stderr,none": 0.02721799546455311
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.556,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.396,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.404,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.476,
"acc_norm_stderr,none": 0.03164968895968774
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.3972602739726027,
"acc_norm_stderr,none": 0.04063670403888034
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.312,
"acc_norm_stderr,none": 0.02936106757521985
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.36,
"acc_norm_stderr,none": 0.03041876402517494
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6348314606741573,
"acc_norm_stderr,none": 0.03619005678691264
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.32,
"acc_norm_stderr,none": 0.029561724955240978
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.536,
"acc_norm_stderr,none": 0.031603975145223735
},
"leaderboard_gpqa": {
"acc_norm,none": 0.25838926174496646,
"acc_norm_stderr,none": 0.0126896026637416,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23232323232323232,
"acc_norm_stderr,none": 0.030088629490217445
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.25824175824175827,
"acc_norm_stderr,none": 0.01874762138022973
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.2700892857142857,
"acc_norm_stderr,none": 0.021000749078822437
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.15711645101663585,
"prompt_level_strict_acc_stderr,none": 0.01566021568347361,
"inst_level_strict_acc,none": 0.2829736211031175,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.16820702402957485,
"prompt_level_loose_acc_stderr,none": 0.016096550188063007,
"inst_level_loose_acc,none": 0.2997601918465228,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.010574018126888218,
"exact_match_stderr,none": 0.0028001672170684297,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.013029315960912053,
"exact_match_stderr,none": 0.006482644725390246
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.03626943005181347,
"exact_match_stderr,none": 0.013492659751295115
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.007407407407407408,
"exact_match_stderr,none": 0.007407407407407408
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.23520611702127658,
"acc_stderr,none": 0.0038667460057111433
},
"leaderboard_musr": {
"acc_norm,none": 0.35185185185185186,
"acc_norm_stderr,none": 0.01653013512637592,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.548,
"acc_norm_stderr,none": 0.03153986449255664
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.32421875,
"acc_norm_stderr,none": 0.029312444800629493
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.184,
"acc_norm_stderr,none": 0.02455581299422255
}
}
```
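If you only need these aggregated numbers rather than the per-sample details, you can also download the results file linked above directly. Below is a minimal sketch using `huggingface_hub`; the file path is taken from the link above, and the `"all"` key is assumed to group the per-task aggregates as shown in the snippet (the exact layout of the file may differ slightly):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the aggregated results JSON from this details repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/netcat420__MFANN3bv0.24-details",
    filename="netcat420__MFANN3bv0.24/results_2024-11-22T07-21-46.503028.json",
    repo_type="dataset",
)

with open(path) as f:
    results = json.load(f)

print(list(results.keys()))
# Based on the snippet above, per-task aggregates live under "all",
# e.g. the normalized accuracy for the BBH group:
print(results["all"]["leaderboard_bbh"]["acc_norm,none"])
```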
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_train_chunk_12 | ZixuanKe | "2024-11-22T07:57:20Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T07:57:19Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 5832266
num_examples: 991
download_size: 427232
dataset_size: 5832266
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details | open-llm-leaderboard | "2024-11-22T08:08:08Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:04:52Z" | ---
pretty_name: Evaluation run of TheHierophant/Underground-Cognitive-V0.3-test
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [TheHierophant/Underground-Cognitive-V0.3-test](https://huggingface.co/TheHierophant/Underground-Cognitive-V0.3-test)\n\
The dataset is composed of 38 configuration(s), each one corresponding to one of\
\ the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can\
\ be found as a specific split in each configuration, the split being named using\
\ the timestamp of the run.The \"train\" split is always pointing to the latest\
\ results.\n\nAn additional configuration \"results\" store all the aggregated results\
\ of the run.\n\nTo load the details from a run, you can for instance do the following:\n\
```python\nfrom datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details\"\
,\n\tname=\"TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_boolean_expressions\"\
,\n\tsplit=\"latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results\
\ from run 2024-11-22T08-04-51.528750](https://huggingface.co/datasets/open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details/blob/main/TheHierophant__Underground-Cognitive-V0.3-test/results_2024-11-22T08-04-51.528750.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"prompt_level_strict_acc,none\": 0.4232902033271719,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.021261842325248494,\n \"\
acc,none\": 0.331781914893617,\n \"acc_stderr,none\": 0.004292740186210186,\n\
\ \"prompt_level_loose_acc,none\": 0.45286506469500926,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02142075394952955,\n \"inst_level_strict_acc,none\"\
: 0.5383693045563549,\n \"inst_level_strict_acc_stderr,none\": \"N/A\"\
,\n \"acc_norm,none\": 0.48268257880399534,\n \"acc_norm_stderr,none\"\
: 0.0053367471046366485,\n \"inst_level_loose_acc,none\": 0.5683453237410072,\n\
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"exact_match,none\"\
: 0.006042296072507553,\n \"exact_match_stderr,none\": 0.0021310320096616354,\n\
\ \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5271654226696754,\n \"acc_norm_stderr,none\"\
: 0.00618516517577893,\n \"alias\": \" - leaderboard_bbh\"\n },\n\
\ \"leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" \
\ - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\": 0.84,\n\
\ \"acc_norm_stderr,none\": 0.023232714782060626\n },\n \
\ \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5882352941176471,\n \"acc_norm_stderr,none\"\
: 0.036086405630856196\n },\n \"leaderboard_bbh_date_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_date_understanding\",\n \
\ \"acc_norm,none\": 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n\
\ },\n \"leaderboard_bbh_disambiguation_qa\": {\n \"alias\"\
: \" - leaderboard_bbh_disambiguation_qa\",\n \"acc_norm,none\": 0.66,\n\
\ \"acc_norm_stderr,none\": 0.030020073605457876\n },\n \
\ \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\":\
\ 0.031355968923772626\n },\n \"leaderboard_bbh_geometric_shapes\"\
: {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\",\n \
\ \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \"\
\ - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\": 0.628,\n \
\ \"acc_norm_stderr,none\": 0.03063032594455827\n },\n \"leaderboard_bbh_logical_deduction_five_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_five_objects\"\
,\n \"acc_norm,none\": 0.464,\n \"acc_norm_stderr,none\":\
\ 0.03160397514522374\n },\n \"leaderboard_bbh_logical_deduction_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.428,\n \"acc_norm_stderr,none\":\
\ 0.031355968923772626\n },\n \"leaderboard_bbh_logical_deduction_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\"\
,\n \"acc_norm,none\": 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\":\
\ 0.030993197854577898\n },\n \"leaderboard_bbh_object_counting\"\
: {\n \"alias\": \" - leaderboard_bbh_object_counting\",\n \
\ \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"\
alias\": \" - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\"\
: 0.4794520547945205,\n \"acc_norm_stderr,none\": 0.041487661809251744\n\
\ },\n \"leaderboard_bbh_reasoning_about_colored_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\",\n\
\ \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \"\
\ - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\": 0.616,\n \
\ \"acc_norm_stderr,none\": 0.030821679117375447\n },\n \"\
leaderboard_bbh_salient_translation_error_detection\": {\n \"alias\"\
: \" - leaderboard_bbh_salient_translation_error_detection\",\n \"acc_norm,none\"\
: 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n },\n\
\ \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\"\
: {\n \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \
\ \"acc_norm,none\": 0.86,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.444,\n\
\ \"acc_norm_stderr,none\": 0.03148684942554571\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\":\
\ 0.024133497525457123\n },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.264,\n \"acc_norm_stderr,none\":\
\ 0.027934518957690866\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013247871854324349,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.23737373737373738,\n \"acc_norm_stderr,none\": 0.030313710538198924\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.31684981684981683,\n\
\ \"acc_norm_stderr,none\": 0.019929048938214563\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30357142857142855,\n \"acc_norm_stderr,none\"\
: 0.021747782232917543\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.4232902033271719,\n \"prompt_level_strict_acc_stderr,none\": 0.021261842325248494,\n\
\ \"inst_level_strict_acc,none\": 0.5383693045563549,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.45286506469500926,\n \"prompt_level_loose_acc_stderr,none\": 0.02142075394952955,\n\
\ \"inst_level_loose_acc,none\": 0.5683453237410072,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.006042296072507553,\n \"exact_match_stderr,none\"\
: 0.0021310320096616354,\n \"alias\": \" - leaderboard_math_hard\"\n\
\ },\n \"leaderboard_math_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\": 0.009771986970684038,\n\
\ \"exact_match_stderr,none\": 0.005623391633915856\n },\n \
\ \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\": \"\
\ - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.016260162601626018,\n \"exact_match_stderr,none\": 0.011450452676925654\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.007142857142857143,\n \"exact_match_stderr,none\"\
: 0.005041703051390571\n },\n \"leaderboard_math_num_theory_hard\"\
: {\n \"alias\": \" - leaderboard_math_num_theory_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_prealgebra_hard\",\n \"exact_match,none\": 0.0051813471502590676,\n\
\ \"exact_match_stderr,none\": 0.0051813471502590676\n },\n \
\ \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" - leaderboard_math_precalculus_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_mmlu_pro\": {\n \"alias\": \"\
\ - leaderboard_mmlu_pro\",\n \"acc,none\": 0.331781914893617,\n \
\ \"acc_stderr,none\": 0.004292740186210186\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43386243386243384,\n \"acc_norm_stderr,none\"\
: 0.01742507032186109,\n \"alias\": \" - leaderboard_musr\"\n \
\ },\n \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \"\
\ - leaderboard_musr_murder_mysteries\",\n \"acc_norm,none\": 0.596,\n\
\ \"acc_norm_stderr,none\": 0.03109668818482536\n },\n \
\ \"leaderboard_musr_object_placements\": {\n \"alias\": \" - leaderboard_musr_object_placements\"\
,\n \"acc_norm,none\": 0.27734375,\n \"acc_norm_stderr,none\"\
: 0.02803528549328419\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \
\ \"acc_norm,none\": 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n\
\ }\n },\n \"leaderboard\": {\n \"prompt_level_strict_acc,none\"\
: 0.4232902033271719,\n \"prompt_level_strict_acc_stderr,none\": 0.021261842325248494,\n\
\ \"acc,none\": 0.331781914893617,\n \"acc_stderr,none\": 0.004292740186210186,\n\
\ \"prompt_level_loose_acc,none\": 0.45286506469500926,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.02142075394952955,\n \"inst_level_strict_acc,none\": 0.5383693045563549,\n\
\ \"inst_level_strict_acc_stderr,none\": \"N/A\",\n \"acc_norm,none\"\
: 0.48268257880399534,\n \"acc_norm_stderr,none\": 0.0053367471046366485,\n\
\ \"inst_level_loose_acc,none\": 0.5683453237410072,\n \"inst_level_loose_acc_stderr,none\"\
: \"N/A\",\n \"exact_match,none\": 0.006042296072507553,\n \"exact_match_stderr,none\"\
: 0.0021310320096616354,\n \"alias\": \"leaderboard\"\n },\n \"leaderboard_bbh\"\
: {\n \"acc_norm,none\": 0.5271654226696754,\n \"acc_norm_stderr,none\"\
: 0.00618516517577893,\n \"alias\": \" - leaderboard_bbh\"\n },\n \"\
leaderboard_bbh_boolean_expressions\": {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\"\
,\n \"acc_norm,none\": 0.84,\n \"acc_norm_stderr,none\": 0.023232714782060626\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5882352941176471,\n \"acc_norm_stderr,none\"\
: 0.036086405630856196\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.508,\n \"acc_norm_stderr,none\": 0.03168215643141386\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.66,\n \"acc_norm_stderr,none\": 0.030020073605457876\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.572,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.472,\n \"acc_norm_stderr,none\": 0.031636489531544396\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.628,\n \"acc_norm_stderr,none\": 0.03063032594455827\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.464,\n \"acc_norm_stderr,none\": 0.03160397514522374\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.428,\n \"acc_norm_stderr,none\": 0.031355968923772626\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.58,\n \"acc_norm_stderr,none\": 0.03127799950463661\n },\n \"leaderboard_bbh_movie_recommendation\"\
: {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"\
acc_norm,none\": 0.688,\n \"acc_norm_stderr,none\": 0.029361067575219852\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.604,\n \"acc_norm_stderr,none\": 0.030993197854577898\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.376,\n \"acc_norm_stderr,none\": 0.03069633626739458\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.4794520547945205,\n\
\ \"acc_norm_stderr,none\": 0.041487661809251744\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.552,\n \"acc_norm_stderr,none\": 0.03151438761115348\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.616,\n \"acc_norm_stderr,none\": 0.030821679117375447\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.56,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.6966292134831461,\n \"acc_norm_stderr,none\"\
: 0.03455421944400101\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.86,\n \"acc_norm_stderr,none\": 0.021989409645240245\n },\n \"\
leaderboard_bbh_temporal_sequences\": {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\"\
,\n \"acc_norm,none\": 0.444,\n \"acc_norm_stderr,none\": 0.03148684942554571\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.156,\n \"acc_norm_stderr,none\": 0.022995023034068682\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.176,\n \"acc_norm_stderr,none\": 0.024133497525457123\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.264,\n \"acc_norm_stderr,none\": 0.027934518957690866\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.484,\n \"acc_norm_stderr,none\": 0.03166998503010743\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2986577181208054,\n\
\ \"acc_norm_stderr,none\": 0.013247871854324349,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.23737373737373738,\n\
\ \"acc_norm_stderr,none\": 0.030313710538198924\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.31684981684981683,\n \"acc_norm_stderr,none\": 0.019929048938214563\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.30357142857142855,\n \"acc_norm_stderr,none\"\
: 0.021747782232917543\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.4232902033271719,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.021261842325248494,\n \
\ \"inst_level_strict_acc,none\": 0.5383693045563549,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.45286506469500926,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.02142075394952955,\n \"inst_level_loose_acc,none\"\
: 0.5683453237410072,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n \
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.006042296072507553,\n\
\ \"exact_match_stderr,none\": 0.0021310320096616354,\n \"alias\"\
: \" - leaderboard_math_hard\"\n },\n \"leaderboard_math_algebra_hard\": {\n\
\ \"alias\": \" - leaderboard_math_algebra_hard\",\n \"exact_match,none\"\
: 0.009771986970684038,\n \"exact_match_stderr,none\": 0.005623391633915856\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.016260162601626018,\n \"exact_match_stderr,none\": 0.011450452676925654\n\
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\": \" - leaderboard_math_geometry_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_intermediate_algebra_hard\": {\n \"alias\"\
: \" - leaderboard_math_intermediate_algebra_hard\",\n \"exact_match,none\"\
: 0.007142857142857143,\n \"exact_match_stderr,none\": 0.005041703051390571\n\
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0051813471502590676,\n \"exact_match_stderr,none\"\
: 0.0051813471502590676\n },\n \"leaderboard_math_precalculus_hard\": {\n\
\ \"alias\": \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.331781914893617,\n\
\ \"acc_stderr,none\": 0.004292740186210186\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.43386243386243384,\n \"acc_norm_stderr,none\"\
: 0.01742507032186109,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.27734375,\n\
\ \"acc_norm_stderr,none\": 0.02803528549328419\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.432,\n \"acc_norm_stderr,none\": 0.03139181076542942\n }\n}\n```"
repo_url: https://huggingface.co/TheHierophant/Underground-Cognitive-V0.3-test
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_ifeval
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_ifeval_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T08-04-51.528750.jsonl'
- config_name: TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T08_04_51.528750
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T08-04-51.528750.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T08-04-51.528750.jsonl'
---
# Dataset Card for Evaluation run of TheHierophant/Underground-Cognitive-V0.3-test
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [TheHierophant/Underground-Cognitive-V0.3-test](https://huggingface.co/TheHierophant/Underground-Cognitive-V0.3-test)
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details",
name="TheHierophant__Underground-Cognitive-V0.3-test__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
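As a quick sanity check — a minimal sketch, assuming the `data` object returned above is a standard `datasets.Dataset` — you can inspect the schema and a few rows, for instance via pandas:
```python
# Hypothetical follow-up to the snippet above; `features` and `to_pandas()`
# are standard `datasets.Dataset` APIs.
print(data.features)   # column names and feature types for this eval's samples
df = data.to_pandas()  # convert the split to a pandas DataFrame
print(df.head(3))      # peek at the first few logged samples
```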
## Latest results
These are the [latest results from run 2024-11-22T08-04-51.528750](https://huggingface.co/datasets/open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details/blob/main/TheHierophant__Underground-Cognitive-V0.3-test/results_2024-11-22T08-04-51.528750.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"prompt_level_strict_acc,none": 0.4232902033271719,
"prompt_level_strict_acc_stderr,none": 0.021261842325248494,
"acc,none": 0.331781914893617,
"acc_stderr,none": 0.004292740186210186,
"prompt_level_loose_acc,none": 0.45286506469500926,
"prompt_level_loose_acc_stderr,none": 0.02142075394952955,
"inst_level_strict_acc,none": 0.5383693045563549,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.48268257880399534,
"acc_norm_stderr,none": 0.0053367471046366485,
"inst_level_loose_acc,none": 0.5683453237410072,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.006042296072507553,
"exact_match_stderr,none": 0.0021310320096616354,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5271654226696754,
"acc_norm_stderr,none": 0.00618516517577893,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5882352941176471,
"acc_norm_stderr,none": 0.036086405630856196
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4794520547945205,
"acc_norm_stderr,none": 0.041487661809251744
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013247871854324349,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31684981684981683,
"acc_norm_stderr,none": 0.019929048938214563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.30357142857142855,
"acc_norm_stderr,none": 0.021747782232917543
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4232902033271719,
"prompt_level_strict_acc_stderr,none": 0.021261842325248494,
"inst_level_strict_acc,none": 0.5383693045563549,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.45286506469500926,
"prompt_level_loose_acc_stderr,none": 0.02142075394952955,
"inst_level_loose_acc,none": 0.5683453237410072,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006042296072507553,
"exact_match_stderr,none": 0.0021310320096616354,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.016260162601626018,
"exact_match_stderr,none": 0.011450452676925654
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0051813471502590676,
"exact_match_stderr,none": 0.0051813471502590676
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.331781914893617,
"acc_stderr,none": 0.004292740186210186
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01742507032186109,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
},
"leaderboard": {
"prompt_level_strict_acc,none": 0.4232902033271719,
"prompt_level_strict_acc_stderr,none": 0.021261842325248494,
"acc,none": 0.331781914893617,
"acc_stderr,none": 0.004292740186210186,
"prompt_level_loose_acc,none": 0.45286506469500926,
"prompt_level_loose_acc_stderr,none": 0.02142075394952955,
"inst_level_strict_acc,none": 0.5383693045563549,
"inst_level_strict_acc_stderr,none": "N/A",
"acc_norm,none": 0.48268257880399534,
"acc_norm_stderr,none": 0.0053367471046366485,
"inst_level_loose_acc,none": 0.5683453237410072,
"inst_level_loose_acc_stderr,none": "N/A",
"exact_match,none": 0.006042296072507553,
"exact_match_stderr,none": 0.0021310320096616354,
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.5271654226696754,
"acc_norm_stderr,none": 0.00618516517577893,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.84,
"acc_norm_stderr,none": 0.023232714782060626
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5882352941176471,
"acc_norm_stderr,none": 0.036086405630856196
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.508,
"acc_norm_stderr,none": 0.03168215643141386
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.66,
"acc_norm_stderr,none": 0.030020073605457876
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.572,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.472,
"acc_norm_stderr,none": 0.031636489531544396
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.628,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.464,
"acc_norm_stderr,none": 0.03160397514522374
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.428,
"acc_norm_stderr,none": 0.031355968923772626
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.58,
"acc_norm_stderr,none": 0.03127799950463661
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.688,
"acc_norm_stderr,none": 0.029361067575219852
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.604,
"acc_norm_stderr,none": 0.030993197854577898
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.376,
"acc_norm_stderr,none": 0.03069633626739458
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.4794520547945205,
"acc_norm_stderr,none": 0.041487661809251744
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.552,
"acc_norm_stderr,none": 0.03151438761115348
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.616,
"acc_norm_stderr,none": 0.030821679117375447
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.56,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.6966292134831461,
"acc_norm_stderr,none": 0.03455421944400101
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.86,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.444,
"acc_norm_stderr,none": 0.03148684942554571
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.156,
"acc_norm_stderr,none": 0.022995023034068682
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.176,
"acc_norm_stderr,none": 0.024133497525457123
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.264,
"acc_norm_stderr,none": 0.027934518957690866
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.484,
"acc_norm_stderr,none": 0.03166998503010743
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2986577181208054,
"acc_norm_stderr,none": 0.013247871854324349,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.23737373737373738,
"acc_norm_stderr,none": 0.030313710538198924
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.31684981684981683,
"acc_norm_stderr,none": 0.019929048938214563
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.30357142857142855,
"acc_norm_stderr,none": 0.021747782232917543
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.4232902033271719,
"prompt_level_strict_acc_stderr,none": 0.021261842325248494,
"inst_level_strict_acc,none": 0.5383693045563549,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.45286506469500926,
"prompt_level_loose_acc_stderr,none": 0.02142075394952955,
"inst_level_loose_acc,none": 0.5683453237410072,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.006042296072507553,
"exact_match_stderr,none": 0.0021310320096616354,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.009771986970684038,
"exact_match_stderr,none": 0.005623391633915856
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.016260162601626018,
"exact_match_stderr,none": 0.011450452676925654
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.007142857142857143,
"exact_match_stderr,none": 0.005041703051390571
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0051813471502590676,
"exact_match_stderr,none": 0.0051813471502590676
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.331781914893617,
"acc_stderr,none": 0.004292740186210186
},
"leaderboard_musr": {
"acc_norm,none": 0.43386243386243384,
"acc_norm_stderr,none": 0.01742507032186109,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.27734375,
"acc_norm_stderr,none": 0.02803528549328419
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.432,
"acc_norm_stderr,none": 0.03139181076542942
}
}
```
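If you want the raw aggregated file rather than the per-task sample splits, a minimal sketch along these lines should work (assuming the standard `huggingface_hub` download API; the `repo_id` and `filename` are taken from the results link above, and the JSON keys are those shown in the dump):
```python
import json

# `hf_hub_download` fetches a single file from a Hub repo to the local cache
# and returns its path.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="open-llm-leaderboard/TheHierophant__Underground-Cognitive-V0.3-test-details",
    filename="TheHierophant__Underground-Cognitive-V0.3-test/results_2024-11-22T08-04-51.528750.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# e.g. the aggregated normalized accuracy reported above (~0.4827)
print(results["all"]["leaderboard"]["acc_norm,none"])
```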
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_12 | ZixuanKe | "2024-11-22T08:07:46Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:07:45Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 142547
num_examples: 24
download_size: 38327
dataset_size: 142547
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
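A minimal sketch of loading these DPO preference pairs, using only the `prompt`/`chosen`/`rejected` fields declared in the front matter (the preference semantics are assumed from the field names):
```python
from datasets import load_dataset

# Load the single "train" split declared in the YAML front matter (24 rows).
ds = load_dataset(
    "ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_12",
    split="train",
)

# Each row is a DPO-style preference pair: one prompt plus a chosen and a
# rejected completion, all stored as plain strings.
row = ds[0]
print(row["prompt"][:200])
print(len(row["chosen"]), len(row["rejected"]))
```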
|
PrincekrampahReal/custom-generated-ufo-colpali | PrincekrampahReal | "2024-11-22T08:09:21Z" | 5 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:08:59Z" | ---
license: apache-2.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: raw_queries
sequence: string
- name: broad_topical_query
dtype: string
- name: broad_topical_explanation
dtype: string
- name: specific_detail_query
dtype: string
- name: specific_detail_explanation
dtype: string
- name: visual_element_query
dtype: string
- name: visual_element_explanation
dtype: string
- name: parsed_into_json
dtype: bool
splits:
- name: train
num_bytes: 12779093.0
num_examples: 100
download_size: 12708269
dataset_size: 12779093.0
---
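As a minimal sketch (assuming only the fields listed above), each row pairs a rendered page image with several generated retrieval queries:
```python
from datasets import load_dataset

# Load the 100-example "train" split declared above.
ds = load_dataset("PrincekrampahReal/custom-generated-ufo-colpali", split="train")

sample = ds[0]
sample["image"].save("page.png")       # the `image` feature decodes to a PIL image
print(sample["broad_topical_query"])   # one of the three query styles per page
print(sample["raw_queries"])           # the unparsed query list
```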
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_17 | ZixuanKe | "2024-11-22T08:10:18Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:10:17Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 249300
num_examples: 43
download_size: 31451
dataset_size: 249300
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_27 | ZixuanKe | "2024-11-22T08:10:24Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:10:23Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 172583
num_examples: 32
download_size: 24384
dataset_size: 172583
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ZixuanKe/fingpt_convfinqa_sup_sample_from_policy_v1.1_dpo_val_chunk_22 | ZixuanKe | "2024-11-22T08:10:59Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:10:58Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: chosen
dtype: string
splits:
- name: train
num_bytes: 233080
num_examples: 53
download_size: 30652
dataset_size: 233080
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
reflection-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-full_response_traceback | reflection-gen | "2024-11-22T21:59:12Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:13:47Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: test
dtype: string
- name: tag
dtype: string
- name: generate_0
dtype: string
- name: generate_0_score
dtype: int64
- name: traceback_0
dtype: string
- name: generate_1
dtype: string
- name: generate_1_score
dtype: int64
- name: traceback_1
dtype: string
- name: generate_2
dtype: string
- name: generate_2_score
dtype: int64
- name: traceback_2
dtype: string
- name: generate_3
dtype: string
- name: generate_3_score
dtype: int64
- name: traceback_3
dtype: string
- name: probability
sequence:
sequence: float64
- name: rm_scores
sequence: int64
splits:
- name: train
num_bytes: 16446279
num_examples: 2156
download_size: 6429025
dataset_size: 16446279
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-full_response_traceback"
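A minimal sketch, based only on the schema above, for loading the four sampled responses per prompt and selecting the highest-scoring one (the meaning of the score fields is assumed from their names):
```python
from datasets import load_dataset

ds = load_dataset(
    "reflection-gen/ds_coder_reflct_rmsprop_iter2_sppo_hard_new_cn_mining_oj_iter2-full_response_traceback",
    split="train",
)

row = ds[0]
# Each row carries four sampled responses (generate_0..generate_3), each with an
# integer score and an execution traceback string.
scores = [row[f"generate_{i}_score"] for i in range(4)]
best = scores.index(max(scores))
print(row["prompt"][:200])
print(f"best response: generate_{best} (score={scores[best]})")
```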
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
open-llm-leaderboard/BSC-LT__salamandra-7b-details | open-llm-leaderboard | "2024-11-22T08:19:11Z" | 5 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T08:16:10Z" | ---
pretty_name: Evaluation run of BSC-LT/salamandra-7b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [BSC-LT/salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b)\nThe dataset\
\ is composed of 38 configuration(s), each one corresponding to one of the evaluated\
\ tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run. The \"train\" split always points to the latest results.\n\nAn\
\ additional configuration \"results\" stores all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"open-llm-leaderboard/BSC-LT__salamandra-7b-details\"\
,\n\tname=\"BSC-LT__salamandra-7b__leaderboard_bbh_boolean_expressions\",\n\tsplit=\"\
latest\"\n)\n```\n\n## Latest results\n\nThese are the [latest results from run\
\ 2024-11-22T08-16-09.604171](https://huggingface.co/datasets/open-llm-leaderboard/BSC-LT__salamandra-7b-details/blob/main/BSC-LT__salamandra-7b/results_2024-11-22T08-16-09.604171.json)\
\ (note that there might be results for other tasks in the repo if successive evals\
\ didn't cover the same tasks; you can find each one in the results and in the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"leaderboard\": {\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"acc,none\": 0.14926861702127658,\n \"acc_stderr,none\"\
: 0.003248849684137683,\n \"acc_norm,none\": 0.3383058762485407,\n \
\ \"acc_norm_stderr,none\": 0.005079271239648858,\n \"prompt_level_loose_acc,none\"\
: 0.09611829944547134,\n \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n\
\ \"prompt_level_strict_acc,none\": 0.09242144177449169,\n \
\ \"prompt_level_strict_acc_stderr,none\": 0.012463258511767319,\n \"\
inst_level_strict_acc,none\": 0.18105515587529977,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_loose_acc,none\": 0.18944844124700239,\n \
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\"\
: \"leaderboard\"\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\"\
: 0.35098073251171674,\n \"acc_norm_stderr,none\": 0.005847360520810733,\n\
\ \"alias\": \" - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\"\
: {\n \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \
\ \"acc_norm,none\": 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n\
\ },\n \"leaderboard_bbh_causal_judgement\": {\n \"alias\"\
: \" - leaderboard_bbh_causal_judgement\",\n \"acc_norm,none\": 0.5401069518716578,\n\
\ \"acc_norm_stderr,none\": 0.036543642520475775\n },\n \
\ \"leaderboard_bbh_date_understanding\": {\n \"alias\": \" - leaderboard_bbh_date_understanding\"\
,\n \"acc_norm,none\": 0.372,\n \"acc_norm_stderr,none\":\
\ 0.03063032594455827\n },\n \"leaderboard_bbh_disambiguation_qa\"\
: {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\",\n \
\ \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\"\
: \" - leaderboard_bbh_formal_fallacies\",\n \"acc_norm,none\": 0.468,\n\
\ \"acc_norm_stderr,none\": 0.03162125257572558\n },\n \
\ \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.092,\n \"acc_norm_stderr,none\":\
\ 0.01831627537942964\n },\n \"leaderboard_bbh_hyperbaton\": {\n \
\ \"alias\": \" - leaderboard_bbh_hyperbaton\",\n \"acc_norm,none\"\
: 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n },\n\
\ \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.228,\n \"acc_norm_stderr,none\": 0.026587432487268498\n },\n\
\ \"leaderboard_bbh_logical_deduction_seven_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\",\n \"\
acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n\
\ \"acc_norm,none\": 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n\
\ },\n \"leaderboard_bbh_movie_recommendation\": {\n \"\
alias\": \" - leaderboard_bbh_movie_recommendation\",\n \"acc_norm,none\"\
: 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n },\n\
\ \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\"\
: \" - leaderboard_bbh_object_counting\",\n \"acc_norm,none\": 0.332,\n\
\ \"acc_norm_stderr,none\": 0.029844039047465857\n },\n \
\ \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" - leaderboard_bbh_penguins_in_a_table\"\
,\n \"acc_norm,none\": 0.2191780821917808,\n \"acc_norm_stderr,none\"\
: 0.03435504786264928\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.196,\n \"acc_norm_stderr,none\":\
\ 0.025156857313255926\n },\n \"leaderboard_bbh_ruin_names\": {\n\
\ \"alias\": \" - leaderboard_bbh_ruin_names\",\n \"acc_norm,none\"\
: 0.168,\n \"acc_norm_stderr,none\": 0.023692813205492536\n },\n\
\ \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" -\
\ leaderboard_bbh_snarks\",\n \"acc_norm,none\": 0.43258426966292135,\n\
\ \"acc_norm_stderr,none\": 0.03723912037707517\n },\n \
\ \"leaderboard_bbh_sports_understanding\": {\n \"alias\": \" - leaderboard_bbh_sports_understanding\"\
,\n \"acc_norm,none\": 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n\
\ },\n \"leaderboard_bbh_temporal_sequences\": {\n \"alias\"\
: \" - leaderboard_bbh_temporal_sequences\",\n \"acc_norm,none\": 0.172,\n\
\ \"acc_norm_stderr,none\": 0.02391551394448624\n },\n \
\ \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \"\
alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\",\n \
\ \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\":\
\ 0.028928939388379694\n },\n \"leaderboard_bbh_web_of_lies\": {\n\
\ \"alias\": \" - leaderboard_bbh_web_of_lies\",\n \"acc_norm,none\"\
: 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n },\n\
\ \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012874693203698098,\n \"alias\"\
: \" - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n\
\ \"alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\"\
: 0.2777777777777778,\n \"acc_norm_stderr,none\": 0.03191178226713548\n\
\ },\n \"leaderboard_gpqa_extended\": {\n \"alias\": \"\
\ - leaderboard_gpqa_extended\",\n \"acc_norm,none\": 0.2765567765567766,\n\
\ \"acc_norm_stderr,none\": 0.019160027479692504\n },\n \
\ \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.25892857142857145,\n \"acc_norm_stderr,none\"\
: 0.020718879324472094\n },\n \"leaderboard_ifeval\": {\n \
\ \"alias\": \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\"\
: 0.09242144177449169,\n \"prompt_level_strict_acc_stderr,none\": 0.012463258511767319,\n\
\ \"inst_level_strict_acc,none\": 0.18105515587529977,\n \"\
inst_level_strict_acc_stderr,none\": \"N/A\",\n \"prompt_level_loose_acc,none\"\
: 0.09611829944547134,\n \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n\
\ \"inst_level_loose_acc,none\": 0.18944844124700239,\n \"\
inst_level_loose_acc_stderr,none\": \"N/A\"\n },\n \"leaderboard_math_hard\"\
: {\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0,\n \"alias\": \" - leaderboard_math_hard\"\n },\n \
\ \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_counting_and_prob_hard\": {\n \
\ \"alias\": \" - leaderboard_math_counting_and_prob_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_geometry_hard\": {\n \"alias\"\
: \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n\
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\"\
: 0.0\n },\n \"leaderboard_math_num_theory_hard\": {\n \
\ \"alias\": \" - leaderboard_math_num_theory_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_prealgebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_prealgebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\"\
: \" - leaderboard_math_precalculus_hard\",\n \"exact_match,none\":\
\ 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\"\
: {\n \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\"\
: 0.14926861702127658,\n \"acc_stderr,none\": 0.0032488496841376834\n\
\ },\n \"leaderboard_musr\": {\n \"acc_norm,none\": 0.3492063492063492,\n\
\ \"acc_norm_stderr,none\": 0.016882123224531108,\n \"alias\"\
: \" - leaderboard_musr\"\n },\n \"leaderboard_musr_murder_mysteries\"\
: {\n \"alias\": \" - leaderboard_musr_murder_mysteries\",\n \
\ \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\"\
: \" - leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23828125,\n\
\ \"acc_norm_stderr,none\": 0.026679160987075002\n },\n \
\ \"leaderboard_musr_team_allocation\": {\n \"alias\": \" - leaderboard_musr_team_allocation\"\
,\n \"acc_norm,none\": 0.308,\n \"acc_norm_stderr,none\":\
\ 0.02925692860650181\n }\n },\n \"leaderboard\": {\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0,\n \"acc,none\": 0.14926861702127658,\n\
\ \"acc_stderr,none\": 0.003248849684137683,\n \"acc_norm,none\":\
\ 0.3383058762485407,\n \"acc_norm_stderr,none\": 0.005079271239648858,\n\
\ \"prompt_level_loose_acc,none\": 0.09611829944547134,\n \"prompt_level_loose_acc_stderr,none\"\
: 0.01268416714715648,\n \"prompt_level_strict_acc,none\": 0.09242144177449169,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.012463258511767319,\n \
\ \"inst_level_strict_acc,none\": 0.18105515587529977,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"inst_level_loose_acc,none\": 0.18944844124700239,\n \
\ \"inst_level_loose_acc_stderr,none\": \"N/A\",\n \"alias\": \"leaderboard\"\
\n },\n \"leaderboard_bbh\": {\n \"acc_norm,none\": 0.35098073251171674,\n\
\ \"acc_norm_stderr,none\": 0.005847360520810733,\n \"alias\": \"\
\ - leaderboard_bbh\"\n },\n \"leaderboard_bbh_boolean_expressions\": {\n\
\ \"alias\": \" - leaderboard_bbh_boolean_expressions\",\n \"acc_norm,none\"\
: 0.74,\n \"acc_norm_stderr,none\": 0.027797315752644335\n },\n \"\
leaderboard_bbh_causal_judgement\": {\n \"alias\": \" - leaderboard_bbh_causal_judgement\"\
,\n \"acc_norm,none\": 0.5401069518716578,\n \"acc_norm_stderr,none\"\
: 0.036543642520475775\n },\n \"leaderboard_bbh_date_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_date_understanding\",\n \"acc_norm,none\"\
: 0.372,\n \"acc_norm_stderr,none\": 0.03063032594455827\n },\n \"\
leaderboard_bbh_disambiguation_qa\": {\n \"alias\": \" - leaderboard_bbh_disambiguation_qa\"\
,\n \"acc_norm,none\": 0.564,\n \"acc_norm_stderr,none\": 0.03142556706028136\n\
\ },\n \"leaderboard_bbh_formal_fallacies\": {\n \"alias\": \" - leaderboard_bbh_formal_fallacies\"\
,\n \"acc_norm,none\": 0.468,\n \"acc_norm_stderr,none\": 0.03162125257572558\n\
\ },\n \"leaderboard_bbh_geometric_shapes\": {\n \"alias\": \" - leaderboard_bbh_geometric_shapes\"\
,\n \"acc_norm,none\": 0.092,\n \"acc_norm_stderr,none\": 0.01831627537942964\n\
\ },\n \"leaderboard_bbh_hyperbaton\": {\n \"alias\": \" - leaderboard_bbh_hyperbaton\"\
,\n \"acc_norm,none\": 0.576,\n \"acc_norm_stderr,none\": 0.03131803437491622\n\
\ },\n \"leaderboard_bbh_logical_deduction_five_objects\": {\n \"alias\"\
: \" - leaderboard_bbh_logical_deduction_five_objects\",\n \"acc_norm,none\"\
: 0.228,\n \"acc_norm_stderr,none\": 0.026587432487268498\n },\n \"\
leaderboard_bbh_logical_deduction_seven_objects\": {\n \"alias\": \" - leaderboard_bbh_logical_deduction_seven_objects\"\
,\n \"acc_norm,none\": 0.14,\n \"acc_norm_stderr,none\": 0.021989409645240245\n\
\ },\n \"leaderboard_bbh_logical_deduction_three_objects\": {\n \"\
alias\": \" - leaderboard_bbh_logical_deduction_three_objects\",\n \"acc_norm,none\"\
: 0.388,\n \"acc_norm_stderr,none\": 0.030881038748993974\n },\n \"\
leaderboard_bbh_movie_recommendation\": {\n \"alias\": \" - leaderboard_bbh_movie_recommendation\"\
,\n \"acc_norm,none\": 0.596,\n \"acc_norm_stderr,none\": 0.03109668818482536\n\
\ },\n \"leaderboard_bbh_navigate\": {\n \"alias\": \" - leaderboard_bbh_navigate\"\
,\n \"acc_norm,none\": 0.44,\n \"acc_norm_stderr,none\": 0.03145724452223569\n\
\ },\n \"leaderboard_bbh_object_counting\": {\n \"alias\": \" - leaderboard_bbh_object_counting\"\
,\n \"acc_norm,none\": 0.332,\n \"acc_norm_stderr,none\": 0.029844039047465857\n\
\ },\n \"leaderboard_bbh_penguins_in_a_table\": {\n \"alias\": \" \
\ - leaderboard_bbh_penguins_in_a_table\",\n \"acc_norm,none\": 0.2191780821917808,\n\
\ \"acc_norm_stderr,none\": 0.03435504786264928\n },\n \"leaderboard_bbh_reasoning_about_colored_objects\"\
: {\n \"alias\": \" - leaderboard_bbh_reasoning_about_colored_objects\"\
,\n \"acc_norm,none\": 0.196,\n \"acc_norm_stderr,none\": 0.025156857313255926\n\
\ },\n \"leaderboard_bbh_ruin_names\": {\n \"alias\": \" - leaderboard_bbh_ruin_names\"\
,\n \"acc_norm,none\": 0.168,\n \"acc_norm_stderr,none\": 0.023692813205492536\n\
\ },\n \"leaderboard_bbh_salient_translation_error_detection\": {\n \
\ \"alias\": \" - leaderboard_bbh_salient_translation_error_detection\",\n \
\ \"acc_norm,none\": 0.188,\n \"acc_norm_stderr,none\": 0.024760377727750513\n\
\ },\n \"leaderboard_bbh_snarks\": {\n \"alias\": \" - leaderboard_bbh_snarks\"\
,\n \"acc_norm,none\": 0.43258426966292135,\n \"acc_norm_stderr,none\"\
: 0.03723912037707517\n },\n \"leaderboard_bbh_sports_understanding\": {\n\
\ \"alias\": \" - leaderboard_bbh_sports_understanding\",\n \"acc_norm,none\"\
: 0.48,\n \"acc_norm_stderr,none\": 0.03166085340849512\n },\n \"leaderboard_bbh_temporal_sequences\"\
: {\n \"alias\": \" - leaderboard_bbh_temporal_sequences\",\n \"\
acc_norm,none\": 0.172,\n \"acc_norm_stderr,none\": 0.02391551394448624\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_five_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_five_objects\"\
,\n \"acc_norm,none\": 0.204,\n \"acc_norm_stderr,none\": 0.025537121574548162\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_seven_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_seven_objects\"\
,\n \"acc_norm,none\": 0.12,\n \"acc_norm_stderr,none\": 0.020593600596839998\n\
\ },\n \"leaderboard_bbh_tracking_shuffled_objects_three_objects\": {\n \
\ \"alias\": \" - leaderboard_bbh_tracking_shuffled_objects_three_objects\"\
,\n \"acc_norm,none\": 0.296,\n \"acc_norm_stderr,none\": 0.028928939388379694\n\
\ },\n \"leaderboard_bbh_web_of_lies\": {\n \"alias\": \" - leaderboard_bbh_web_of_lies\"\
,\n \"acc_norm,none\": 0.488,\n \"acc_norm_stderr,none\": 0.03167708558254714\n\
\ },\n \"leaderboard_gpqa\": {\n \"acc_norm,none\": 0.2701342281879195,\n\
\ \"acc_norm_stderr,none\": 0.012874693203698098,\n \"alias\": \"\
\ - leaderboard_gpqa\"\n },\n \"leaderboard_gpqa_diamond\": {\n \"\
alias\": \" - leaderboard_gpqa_diamond\",\n \"acc_norm,none\": 0.2777777777777778,\n\
\ \"acc_norm_stderr,none\": 0.03191178226713548\n },\n \"leaderboard_gpqa_extended\"\
: {\n \"alias\": \" - leaderboard_gpqa_extended\",\n \"acc_norm,none\"\
: 0.2765567765567766,\n \"acc_norm_stderr,none\": 0.019160027479692504\n\
\ },\n \"leaderboard_gpqa_main\": {\n \"alias\": \" - leaderboard_gpqa_main\"\
,\n \"acc_norm,none\": 0.25892857142857145,\n \"acc_norm_stderr,none\"\
: 0.020718879324472094\n },\n \"leaderboard_ifeval\": {\n \"alias\"\
: \" - leaderboard_ifeval\",\n \"prompt_level_strict_acc,none\": 0.09242144177449169,\n\
\ \"prompt_level_strict_acc_stderr,none\": 0.012463258511767319,\n \
\ \"inst_level_strict_acc,none\": 0.18105515587529977,\n \"inst_level_strict_acc_stderr,none\"\
: \"N/A\",\n \"prompt_level_loose_acc,none\": 0.09611829944547134,\n \
\ \"prompt_level_loose_acc_stderr,none\": 0.01268416714715648,\n \"inst_level_loose_acc,none\"\
: 0.18944844124700239,\n \"inst_level_loose_acc_stderr,none\": \"N/A\"\n\
\ },\n \"leaderboard_math_hard\": {\n \"exact_match,none\": 0.0,\n\
\ \"exact_match_stderr,none\": 0.0,\n \"alias\": \" - leaderboard_math_hard\"\
\n },\n \"leaderboard_math_algebra_hard\": {\n \"alias\": \" - leaderboard_math_algebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_counting_and_prob_hard\": {\n \"alias\":\
\ \" - leaderboard_math_counting_and_prob_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_geometry_hard\"\
: {\n \"alias\": \" - leaderboard_math_geometry_hard\",\n \"exact_match,none\"\
: 0.0,\n \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_math_intermediate_algebra_hard\"\
: {\n \"alias\": \" - leaderboard_math_intermediate_algebra_hard\",\n \
\ \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n \
\ },\n \"leaderboard_math_num_theory_hard\": {\n \"alias\": \" - leaderboard_math_num_theory_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_prealgebra_hard\": {\n \"alias\": \" - leaderboard_math_prealgebra_hard\"\
,\n \"exact_match,none\": 0.0,\n \"exact_match_stderr,none\": 0.0\n\
\ },\n \"leaderboard_math_precalculus_hard\": {\n \"alias\": \" -\
\ leaderboard_math_precalculus_hard\",\n \"exact_match,none\": 0.0,\n \
\ \"exact_match_stderr,none\": 0.0\n },\n \"leaderboard_mmlu_pro\": {\n\
\ \"alias\": \" - leaderboard_mmlu_pro\",\n \"acc,none\": 0.14926861702127658,\n\
\ \"acc_stderr,none\": 0.0032488496841376834\n },\n \"leaderboard_musr\"\
: {\n \"acc_norm,none\": 0.3492063492063492,\n \"acc_norm_stderr,none\"\
: 0.016882123224531108,\n \"alias\": \" - leaderboard_musr\"\n },\n \
\ \"leaderboard_musr_murder_mysteries\": {\n \"alias\": \" - leaderboard_musr_murder_mysteries\"\
,\n \"acc_norm,none\": 0.504,\n \"acc_norm_stderr,none\": 0.0316851985511992\n\
\ },\n \"leaderboard_musr_object_placements\": {\n \"alias\": \" -\
\ leaderboard_musr_object_placements\",\n \"acc_norm,none\": 0.23828125,\n\
\ \"acc_norm_stderr,none\": 0.026679160987075002\n },\n \"leaderboard_musr_team_allocation\"\
: {\n \"alias\": \" - leaderboard_musr_team_allocation\",\n \"acc_norm,none\"\
: 0.308,\n \"acc_norm_stderr,none\": 0.02925692860650181\n }\n}\n```"
repo_url: https://huggingface.co/BSC-LT/salamandra-7b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_boolean_expressions
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_boolean_expressions_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_causal_judgement
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_causal_judgement_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_date_understanding
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_date_understanding_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_disambiguation_qa
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_disambiguation_qa_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_formal_fallacies
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_formal_fallacies_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_geometric_shapes
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_geometric_shapes_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_hyperbaton
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_hyperbaton_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_logical_deduction_five_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_five_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_logical_deduction_seven_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_seven_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_logical_deduction_three_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_logical_deduction_three_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_movie_recommendation
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_movie_recommendation_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_navigate
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_navigate_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_object_counting
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_object_counting_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_penguins_in_a_table
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_penguins_in_a_table_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_reasoning_about_colored_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_reasoning_about_colored_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_ruin_names
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_ruin_names_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_salient_translation_error_detection
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_salient_translation_error_detection_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_snarks
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_snarks_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_sports_understanding
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_sports_understanding_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_temporal_sequences
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_temporal_sequences_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_tracking_shuffled_objects_five_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_five_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_tracking_shuffled_objects_seven_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_seven_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_tracking_shuffled_objects_three_objects
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_tracking_shuffled_objects_three_objects_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_bbh_web_of_lies
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_bbh_web_of_lies_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_gpqa_diamond
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_diamond_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_gpqa_extended
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_extended_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_gpqa_main
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_gpqa_main_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_ifeval
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_ifeval_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_ifeval_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_algebra_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_algebra_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_counting_and_prob_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_counting_and_prob_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_geometry_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_geometry_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_intermediate_algebra_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_intermediate_algebra_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_num_theory_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_num_theory_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_prealgebra_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_prealgebra_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_math_precalculus_hard
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_math_precalculus_hard_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_mmlu_pro
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_mmlu_pro_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_musr_murder_mysteries
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_murder_mysteries_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_musr_object_placements
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_object_placements_2024-11-22T08-16-09.604171.jsonl'
- config_name: BSC-LT__salamandra-7b__leaderboard_musr_team_allocation
data_files:
- split: 2024_11_22T08_16_09.604171
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T08-16-09.604171.jsonl'
- split: latest
path:
- '**/samples_leaderboard_musr_team_allocation_2024-11-22T08-16-09.604171.jsonl'
---
# Dataset Card for Evaluation run of BSC-LT/salamandra-7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [BSC-LT/salamandra-7b](https://huggingface.co/BSC-LT/salamandra-7b).
The dataset is composed of 38 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"open-llm-leaderboard/BSC-LT__salamandra-7b-details",
name="BSC-LT__salamandra-7b__leaderboard_bbh_boolean_expressions",
split="latest"
)
```
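To enumerate the per-task configurations instead of hard-coding one name, a minimal sketch with `get_dataset_config_names` (38 configurations according to the summary above):
```python
from datasets import get_dataset_config_names

# Lists every configuration of this details repo, one per evaluated task.
configs = get_dataset_config_names("open-llm-leaderboard/BSC-LT__salamandra-7b-details")
print(len(configs), configs[:3])
```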
## Latest results
These are the [latest results from run 2024-11-22T08-16-09.604171](https://huggingface.co/datasets/open-llm-leaderboard/BSC-LT__salamandra-7b-details/blob/main/BSC-LT__salamandra-7b/results_2024-11-22T08-16-09.604171.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"leaderboard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"acc,none": 0.14926861702127658,
"acc_stderr,none": 0.003248849684137683,
"acc_norm,none": 0.3383058762485407,
"acc_norm_stderr,none": 0.005079271239648858,
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"prompt_level_strict_acc,none": 0.09242144177449169,
"prompt_level_strict_acc_stderr,none": 0.012463258511767319,
"inst_level_strict_acc,none": 0.18105515587529977,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.18944844124700239,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.35098073251171674,
"acc_norm_stderr,none": 0.005847360520810733,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5401069518716578,
"acc_norm_stderr,none": 0.036543642520475775
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.092,
"acc_norm_stderr,none": 0.01831627537942964
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2191780821917808,
"acc_norm_stderr,none": 0.03435504786264928
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.196,
"acc_norm_stderr,none": 0.025156857313255926
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.43258426966292135,
"acc_norm_stderr,none": 0.03723912037707517
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012874693203698098,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2765567765567766,
"acc_norm_stderr,none": 0.019160027479692504
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.25892857142857145,
"acc_norm_stderr,none": 0.020718879324472094
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.09242144177449169,
"prompt_level_strict_acc_stderr,none": 0.012463258511767319,
"inst_level_strict_acc,none": 0.18105515587529977,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"inst_level_loose_acc,none": 0.18944844124700239,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.14926861702127658,
"acc_stderr,none": 0.0032488496841376834
},
"leaderboard_musr": {
"acc_norm,none": 0.3492063492063492,
"acc_norm_stderr,none": 0.016882123224531108,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
}
},
"leaderboard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"acc,none": 0.14926861702127658,
"acc_stderr,none": 0.003248849684137683,
"acc_norm,none": 0.3383058762485407,
"acc_norm_stderr,none": 0.005079271239648858,
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"prompt_level_strict_acc,none": 0.09242144177449169,
"prompt_level_strict_acc_stderr,none": 0.012463258511767319,
"inst_level_strict_acc,none": 0.18105515587529977,
"inst_level_strict_acc_stderr,none": "N/A",
"inst_level_loose_acc,none": 0.18944844124700239,
"inst_level_loose_acc_stderr,none": "N/A",
"alias": "leaderboard"
},
"leaderboard_bbh": {
"acc_norm,none": 0.35098073251171674,
"acc_norm_stderr,none": 0.005847360520810733,
"alias": " - leaderboard_bbh"
},
"leaderboard_bbh_boolean_expressions": {
"alias": " - leaderboard_bbh_boolean_expressions",
"acc_norm,none": 0.74,
"acc_norm_stderr,none": 0.027797315752644335
},
"leaderboard_bbh_causal_judgement": {
"alias": " - leaderboard_bbh_causal_judgement",
"acc_norm,none": 0.5401069518716578,
"acc_norm_stderr,none": 0.036543642520475775
},
"leaderboard_bbh_date_understanding": {
"alias": " - leaderboard_bbh_date_understanding",
"acc_norm,none": 0.372,
"acc_norm_stderr,none": 0.03063032594455827
},
"leaderboard_bbh_disambiguation_qa": {
"alias": " - leaderboard_bbh_disambiguation_qa",
"acc_norm,none": 0.564,
"acc_norm_stderr,none": 0.03142556706028136
},
"leaderboard_bbh_formal_fallacies": {
"alias": " - leaderboard_bbh_formal_fallacies",
"acc_norm,none": 0.468,
"acc_norm_stderr,none": 0.03162125257572558
},
"leaderboard_bbh_geometric_shapes": {
"alias": " - leaderboard_bbh_geometric_shapes",
"acc_norm,none": 0.092,
"acc_norm_stderr,none": 0.01831627537942964
},
"leaderboard_bbh_hyperbaton": {
"alias": " - leaderboard_bbh_hyperbaton",
"acc_norm,none": 0.576,
"acc_norm_stderr,none": 0.03131803437491622
},
"leaderboard_bbh_logical_deduction_five_objects": {
"alias": " - leaderboard_bbh_logical_deduction_five_objects",
"acc_norm,none": 0.228,
"acc_norm_stderr,none": 0.026587432487268498
},
"leaderboard_bbh_logical_deduction_seven_objects": {
"alias": " - leaderboard_bbh_logical_deduction_seven_objects",
"acc_norm,none": 0.14,
"acc_norm_stderr,none": 0.021989409645240245
},
"leaderboard_bbh_logical_deduction_three_objects": {
"alias": " - leaderboard_bbh_logical_deduction_three_objects",
"acc_norm,none": 0.388,
"acc_norm_stderr,none": 0.030881038748993974
},
"leaderboard_bbh_movie_recommendation": {
"alias": " - leaderboard_bbh_movie_recommendation",
"acc_norm,none": 0.596,
"acc_norm_stderr,none": 0.03109668818482536
},
"leaderboard_bbh_navigate": {
"alias": " - leaderboard_bbh_navigate",
"acc_norm,none": 0.44,
"acc_norm_stderr,none": 0.03145724452223569
},
"leaderboard_bbh_object_counting": {
"alias": " - leaderboard_bbh_object_counting",
"acc_norm,none": 0.332,
"acc_norm_stderr,none": 0.029844039047465857
},
"leaderboard_bbh_penguins_in_a_table": {
"alias": " - leaderboard_bbh_penguins_in_a_table",
"acc_norm,none": 0.2191780821917808,
"acc_norm_stderr,none": 0.03435504786264928
},
"leaderboard_bbh_reasoning_about_colored_objects": {
"alias": " - leaderboard_bbh_reasoning_about_colored_objects",
"acc_norm,none": 0.196,
"acc_norm_stderr,none": 0.025156857313255926
},
"leaderboard_bbh_ruin_names": {
"alias": " - leaderboard_bbh_ruin_names",
"acc_norm,none": 0.168,
"acc_norm_stderr,none": 0.023692813205492536
},
"leaderboard_bbh_salient_translation_error_detection": {
"alias": " - leaderboard_bbh_salient_translation_error_detection",
"acc_norm,none": 0.188,
"acc_norm_stderr,none": 0.024760377727750513
},
"leaderboard_bbh_snarks": {
"alias": " - leaderboard_bbh_snarks",
"acc_norm,none": 0.43258426966292135,
"acc_norm_stderr,none": 0.03723912037707517
},
"leaderboard_bbh_sports_understanding": {
"alias": " - leaderboard_bbh_sports_understanding",
"acc_norm,none": 0.48,
"acc_norm_stderr,none": 0.03166085340849512
},
"leaderboard_bbh_temporal_sequences": {
"alias": " - leaderboard_bbh_temporal_sequences",
"acc_norm,none": 0.172,
"acc_norm_stderr,none": 0.02391551394448624
},
"leaderboard_bbh_tracking_shuffled_objects_five_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_five_objects",
"acc_norm,none": 0.204,
"acc_norm_stderr,none": 0.025537121574548162
},
"leaderboard_bbh_tracking_shuffled_objects_seven_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_seven_objects",
"acc_norm,none": 0.12,
"acc_norm_stderr,none": 0.020593600596839998
},
"leaderboard_bbh_tracking_shuffled_objects_three_objects": {
"alias": " - leaderboard_bbh_tracking_shuffled_objects_three_objects",
"acc_norm,none": 0.296,
"acc_norm_stderr,none": 0.028928939388379694
},
"leaderboard_bbh_web_of_lies": {
"alias": " - leaderboard_bbh_web_of_lies",
"acc_norm,none": 0.488,
"acc_norm_stderr,none": 0.03167708558254714
},
"leaderboard_gpqa": {
"acc_norm,none": 0.2701342281879195,
"acc_norm_stderr,none": 0.012874693203698098,
"alias": " - leaderboard_gpqa"
},
"leaderboard_gpqa_diamond": {
"alias": " - leaderboard_gpqa_diamond",
"acc_norm,none": 0.2777777777777778,
"acc_norm_stderr,none": 0.03191178226713548
},
"leaderboard_gpqa_extended": {
"alias": " - leaderboard_gpqa_extended",
"acc_norm,none": 0.2765567765567766,
"acc_norm_stderr,none": 0.019160027479692504
},
"leaderboard_gpqa_main": {
"alias": " - leaderboard_gpqa_main",
"acc_norm,none": 0.25892857142857145,
"acc_norm_stderr,none": 0.020718879324472094
},
"leaderboard_ifeval": {
"alias": " - leaderboard_ifeval",
"prompt_level_strict_acc,none": 0.09242144177449169,
"prompt_level_strict_acc_stderr,none": 0.012463258511767319,
"inst_level_strict_acc,none": 0.18105515587529977,
"inst_level_strict_acc_stderr,none": "N/A",
"prompt_level_loose_acc,none": 0.09611829944547134,
"prompt_level_loose_acc_stderr,none": 0.01268416714715648,
"inst_level_loose_acc,none": 0.18944844124700239,
"inst_level_loose_acc_stderr,none": "N/A"
},
"leaderboard_math_hard": {
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0,
"alias": " - leaderboard_math_hard"
},
"leaderboard_math_algebra_hard": {
"alias": " - leaderboard_math_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_counting_and_prob_hard": {
"alias": " - leaderboard_math_counting_and_prob_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_geometry_hard": {
"alias": " - leaderboard_math_geometry_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_intermediate_algebra_hard": {
"alias": " - leaderboard_math_intermediate_algebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_num_theory_hard": {
"alias": " - leaderboard_math_num_theory_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_prealgebra_hard": {
"alias": " - leaderboard_math_prealgebra_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_math_precalculus_hard": {
"alias": " - leaderboard_math_precalculus_hard",
"exact_match,none": 0.0,
"exact_match_stderr,none": 0.0
},
"leaderboard_mmlu_pro": {
"alias": " - leaderboard_mmlu_pro",
"acc,none": 0.14926861702127658,
"acc_stderr,none": 0.0032488496841376834
},
"leaderboard_musr": {
"acc_norm,none": 0.3492063492063492,
"acc_norm_stderr,none": 0.016882123224531108,
"alias": " - leaderboard_musr"
},
"leaderboard_musr_murder_mysteries": {
"alias": " - leaderboard_musr_murder_mysteries",
"acc_norm,none": 0.504,
"acc_norm_stderr,none": 0.0316851985511992
},
"leaderboard_musr_object_placements": {
"alias": " - leaderboard_musr_object_placements",
"acc_norm,none": 0.23828125,
"acc_norm_stderr,none": 0.026679160987075002
},
"leaderboard_musr_team_allocation": {
"alias": " - leaderboard_musr_team_allocation",
"acc_norm,none": 0.308,
"acc_norm_stderr,none": 0.02925692860650181
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
neoneye/simon-arc-solve-skew-v7 | neoneye | "2024-11-22T08:28:57Z" | 5 | 0 | [
"task_categories:image-to-text",
"task_categories:text-to-image",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"image-to-text",
"text-to-image"
] | "2024-11-22T08:27:33Z" | ---
license: mit
task_categories:
- image-to-text
- text-to-image
language:
- en
pretty_name: simons ARC (abstraction & reasoning corpus) solve skew version 7
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: train
path: data.jsonl
---
# Version 1
ARC-AGI tasks where the job is to apply skew/unskew in the directions up/down/left/right.
example count: 2-4.
test count: 1-2.
image size: 1-4.
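As a rough illustration of the transformation (my own sketch; the dataset's exact padding color and direction conventions are assumptions, not documented here), a rightward skew shifts each pixel row by its row index:
```python
# Hypothetical sketch of a rightward skew on a small pixel grid.
# The padding value (0) and the shift-by-row-index rule are assumptions.
def skew_right(grid, pad=0):
    """Shift row i right by i cells, padding the gaps with `pad`."""
    height = len(grid)
    return [[pad] * i + row + [pad] * (height - 1 - i)
            for i, row in enumerate(grid)]

print(skew_right([[1, 2], [3, 4]]))  # [[1, 2, 0], [0, 3, 4]]
```
Unskewing would simply discard those padded cells again.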
# Version 2
image size: 1-7.
# Version 3
Earlier predictions added to some of the rows.
# Version 4
Added fields: `arc_task`, `test_index`, `earlier_output`.
# Version 5
Replaced RLE compressed response with raw pixel response.
# Version 6
image size: 1-9.
# Version 7
Smaller images again.
image size: 1-5.
|
geyongtao/layer_human | geyongtao | "2024-11-22T08:34:20Z" | 5 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-11-22T08:34:20Z" | ---
license: apache-2.0
---
|
ShenTan/video-matting | ShenTan | "2024-11-22T09:28:21Z" | 5 | 0 | [
"license:mit",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-22T08:39:15Z" | ---
license: mit
---
|
musaday/llama3data_ybmusa_instruction | musaday | "2024-11-22T09:02:00Z" | 5 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:00:39Z" | ---
license: mit
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 1218721
num_examples: 3586
download_size: 499147
dataset_size: 1218721
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ameeeee/my-distiset-46cdcb49 | Ameeeee | "2024-11-22T09:07:18Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-11-22T09:07:17Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': fruit-bearing
'1': evergreen
'2': ornamental
'3': flowering
'4': deciduous
'5': coniferous
splits:
- name: train
num_bytes: 3849
num_examples: 10
download_size: 4855
dataset_size: 3849
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-46cdcb49
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Ameeeee/my-distiset-46cdcb49/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Ameeeee/my-distiset-46cdcb49/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"label": 4,
"text": "The European beech tree (Fagus sylvatica) is a deciduous tree species native to Europe, with its broad, rounded crown, spreading branches, and smooth, grey bark. It is a well-known and highly valued species in European forestry and horticulture, primarily prized for its timber and the creation of beech leaf litter which is a vital component of nutrient cycling."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Ameeeee/my-distiset-46cdcb49", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Ameeeee/my-distiset-46cdcb49")
```
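Because `label` is stored as a `ClassLabel` integer (see the names in the YAML header above), it can be mapped back to its string name after loading; a minimal sketch, assuming the single `train` split from the configuration:
```python
from datasets import load_dataset

ds = load_dataset("Ameeeee/my-distiset-46cdcb49", split="train")
label_names = ds.features["label"].names  # ['fruit-bearing', 'evergreen', ...]
record = ds[0]
# e.g. label 4 maps to "deciduous", matching the example record above
print(record["text"][:60], "->", label_names[record["label"]])
```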
</details>
|
pkuAI4M/lean_github | pkuAI4M | "2024-11-22T09:18:02Z" | 5 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:17:41Z" | ---
dataset_info:
features:
- name: url
dtype: string
- name: commit
dtype: string
- name: file_path
dtype: string
- name: full_name
dtype: string
- name: start
dtype: string
- name: end
dtype: string
- name: tactic
dtype: string
- name: state_before
dtype: string
- name: state_after
dtype: string
- name: input
dtype: string
splits:
- name: train
num_bytes: 653756100
num_examples: 218866
download_size: 65989335
dataset_size: 653756100
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wyzard-ai/Anuj | wyzard-ai | "2024-11-22T09:19:40Z" | 5 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:argilla",
"region:us",
"rlfh",
"argilla",
"human-feedback"
] | null | "2024-11-22T09:18:18Z" | ---
size_categories: n<1K
tags:
- rlfh
- argilla
- human-feedback
---
# Dataset Card for Anuj
This dataset has been created with [Argilla](https://github.com/argilla-io/argilla). As shown in the sections below, this dataset can be loaded into your Argilla server as explained in [Load with Argilla](#load-with-argilla), or used directly with the `datasets` library in [Load with `datasets`](#load-with-datasets).
## Using this dataset with Argilla
To load with Argilla, you'll just need to install Argilla as `pip install argilla --upgrade` and then use the following code:
```python
import argilla as rg
ds = rg.Dataset.from_hub("wyzard-ai/Anuj", settings="auto")
```
This will load the settings and records from the dataset repository and push them to your Argilla server for exploration and annotation.
## Using this dataset with `datasets`
To load the records of this dataset with `datasets`, you'll just need to install `datasets` as `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset("wyzard-ai/Anuj")
```
This will only load the records of the dataset, but not the Argilla settings.
## Dataset Structure
This dataset repo contains:
* Dataset records in a format compatible with HuggingFace `datasets`. These records will be loaded automatically when using `rg.Dataset.from_hub` and can be loaded independently using the `datasets` library via `load_dataset`.
* The [annotation guidelines](#annotation-guidelines) that have been used for building and curating the dataset, if they've been defined in Argilla.
* A dataset configuration folder conforming to the Argilla dataset format in `.argilla`.
The dataset is created in Argilla with: **fields**, **questions**, **suggestions**, **metadata**, **vectors**, and **guidelines**.
### Fields
The **fields** are the features or text of a dataset's records. For example, the 'text' column of a text classification dataset or the 'prompt' column of an instruction-following dataset.
| Field Name | Title | Type | Required | Markdown |
| ---------- | ----- | ---- | -------- | -------- |
| instruction | User instruction | text | True | True |
### Questions
The **questions** are the questions that will be asked to the annotators. They can be of different types, such as rating, text, label_selection, multi_label_selection, or ranking.
| Question Name | Title | Type | Required | Description | Values/Labels |
| ------------- | ----- | ---- | -------- | ----------- | ------------- |
| relevance_score | How Relevant is the conversation based upon expert. Is the conversation highly curated for you or not. Please don't judge accuracy. | rating | True | N/A | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| accuracy_score | How accurate is the conversation based upon persona | rating | True | if | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| clarity_score | How clear is the conversation based upon persona | rating | True | Is the LLM getting confused | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| actionable_score | How actionable is the conversation based upon persona | rating | True | Is the LLM response to actionable for example, it shows comparison card on the right question. | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| engagement_score | How engaging is the conversation based upon persona | rating | True | Are there a lot of question that are being shown if yes, high score else low score | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| completeness_score | is the conversation complete based upon persona | rating | True | is the conversation complete based upon persona, not leaving any key aspect out | [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] |
| feedback | feedback | text | True | What do you think can be improved in the given conversation. How good was the conversation as per you? | N/A |
### Metadata
The **metadata** is a dictionary that can be used to provide additional information about the dataset record.
| Metadata Name | Title | Type | Values | Visible for Annotators |
| ------------- | ----- | ---- | ------ | ---------------------- |
| conv_id | Conversation ID | | - | True |
| turn | Conversation Turn | | 0 - 100 | True |
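As a quick illustration (the column names come from the metadata table above; the filter condition itself is arbitrary), these metadata properties surface as plain columns when the records are loaded with `datasets`, so they can be used for filtering:
```python
from datasets import load_dataset

ds = load_dataset("wyzard-ai/Anuj", split="train")
# `turn` and `conv_id` are the metadata properties listed above.
first_turns = ds.filter(lambda record: record["turn"] == 0)
print(len(first_turns), "records from the first conversation turn")
```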
### Data Instances
An example of a dataset instance in Argilla looks as follows:
```json
{
"_server_id": "56540022-7892-43f0-8d3f-214638ef15b1",
"fields": {
"instruction": "**user**: HI, Let me know good in memory cache libraries\n**assistant**: Several factors influence the best in-memory caching library for your needs. To help me narrow down the most suitable options, could you please tell me more about:\n\n* **What programming language are you using?** (e.g., Java, Python, .NET)\n* **What is the scale of your application?** (e.g., small project, large enterprise system)\n* **What are your specific performance requirements?** (e.g., throughput, latency)\n* **What kind of data will you be caching?** (e.g., simple objects, complex data structures)\n\n\nHere are a few popular in-memory caching options to consider:\n\n| Library/Tool | Language | Description |\n|---|---|---|\n| Guava Cache | Java | Powerful and flexible caching offered by Google\u0027s Guava library. |\n| Redis | Cross-Language | In-memory data structure store, used as a database, cache, and message broker. |\n| Memcached | Cross-Language | General-purpose distributed memory caching system. |\n| Ehcache | Java | Widely used open-source caching library. |\n\n\nProviding more context about your needs will allow me to provide more targeted recommendations.\n\n**user**: Language - java, medium size project, requirement is throughput, caching object can be complex\n**assistant**: Given your requirements for a Java-based, medium-sized project with a focus on throughput and complex object caching, Guava Cache and Ehcache are strong contenders.\n\n| Library | Description | Considerations |\n|---|---|---|\n| Guava Cache | Part of Google\u0027s Guava library, offering a simple, thread-safe, and high-performance solution. Excellent for local, in-memory caching. | Suitable for single-node applications. Consider Caffeine, a Java 8 rewrite of Guava Cache, for even better performance. |\n| Ehcache | Open-source, standards-based cache known for boosting performance and simplifying scalability. Supports memory and disk stores. | Mature and widely used, offering various features like listeners and cache loaders. May be slightly more complex to set up than Guava Cache. |\n\n\n* **Are there any specific functionalities you require, such as eviction policies or data persistence?**\n* **Do you anticipate scaling to a distributed environment in the future?** (This would influence the choice between local caches like Guava/Ehcache and distributed ones like Redis/Memcached.)\n\nKnowing these details will help me pinpoint the ideal solution for your project.\n\n**feedback**: I beleive this answer can be given by development specialist as well, as its related to development only. Although answer seems to be correct.\n**user**: I need to have application running on multiple kubernetes pods.\n**assistant**: Given your application runs on multiple Kubernetes pods, a distributed cache is necessary. Local in-memory caches like Guava or Ehcache won\u0027t be effective across different pods.\n\nHere are a few distributed caching options suitable for Kubernetes:\n\n| Tool | Description | Considerations |\n|---|---|---|\n| Redis | Highly popular in-memory data structure store. Used as a database, cache, and message broker. Excellent performance and scalability. | Requires setting up a Redis cluster within your Kubernetes environment. |\n| Memcached | General-purpose distributed memory caching system. Known for its simplicity and speed. | Simpler to set up than Redis but offers fewer features. 
|\n\n\n* **What type of data are you caching?** (Simple strings, complex objects, etc.)\n* **What are your performance requirements?** (Throughput, latency, etc.)\n* **Do you require data persistence?** (Redis offers persistence options, while Memcached is purely in-memory.)\n\nAnswering these questions will help me provide the most tailored solution. If you\u0027re ready to discuss implementation, you can contact our sales representative, Eshu, at eshu@wyzard.in or +91-9284730045.\n\n**feedback**: In comparison table , both redis and memcached should be shown, here only redis is shown \n**user**: Can you please give me difference between redis and memcached in more detail\n**assistant**: When comparing Redis and Memcached, both are powerful in-memory caching solutions, but they serve different use cases and have distinct features. Here\u2019s a detailed comparison based on their capabilities:\n\nnull{\"graph_type\": \"comparison table\", \"x_label\": \"Features\", \"y_label\": \"Redis vs Memcached\", \"json_payload\": [{\"name\": \"Redis\", \"info\": [{\"feature\": \"Data Structure Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Persistence Options\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Replication Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Data Expiration\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Atomic Operations\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Pub/Sub Messaging\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Complex Data Types\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Multi-threaded\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Memory Management\", \"value\": true, \"subFeatures\": []}]}]}\n\n### Insights\n\n1. **Data Structure Support**: Redis supports various data structures (strings, hashes, lists, sets, etc.), making it versatile for different use cases. In contrast, Memcached primarily supports simple key-value pairs.\n\n2. **Persistence Options**: Redis offers options for data persistence, allowing you to save data to disk, which is not available in Memcached. This makes Redis suitable for applications that require data durability.\n\n3. **Replication and Clustering**: Redis supports replication and clustering, enhancing its scalability and availability. Memcached lacks these features, making it less suitable for distributed architectures.\n\n4. **Atomic Operations**: Redis supports atomic operations on its data structures, while Memcached does not, which can be critical for certain applications requiring consistent state.\n\n5. **Memory Management**: Redis has advanced memory management features, including the ability to handle large datasets efficiently, whereas Memcached does not provide such capabilities.\n\n### Business Implications\n\nChoosing between Redis and Memcached depends on your application\u0027s specific requirements. If you need advanced data structures, persistence, and replication, Redis is the better choice. 
For simpler caching needs with high-speed access and less complexity, Memcached may suffice.\n\nIf you\u0027re interested in exploring how these caching solutions can be integrated into your tech stack or require a demo, please feel free to reach out for further assistance.\n**feedback**: Needed more elaboration on how to pass configs in spring boot application\n**user**: how to configure redis-cluster in java application using spring boot\n**assistant**: For configuring a Redis cluster in your Java Spring Boot application, you\u0027ll need to use the `Lettuce` or `Jedis` client library along with Spring Data Redis.\n\nHere\u0027s a breakdown using `Lettuce`, which is generally recommended for its thread-safety and performance:\n\n| Dependency | Purpose |\n|---|---|\n| `spring-boot-starter-data-redis` | Provides Spring Data Redis integration. |\n| `io.lettuce:lettuce-core` | The Lettuce Redis client library. |\n\n**Configuration:**\n\n1. **Cluster Configuration:** Configure the `LettuceConnectionFactory` to connect to your Redis cluster nodes. You\u0027ll need to provide the host and port of each node.\n\n ```java\n @Configuration\n public class RedisConfig {\n\n @Bean\n public LettuceConnectionFactory redisConnectionFactory() {\n RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();\n clusterConfig.clusterNodeProvider(new StaticClusterNodeProvider(\n Arrays.asList(new RedisNode(\"host1\", 7000), \n new RedisNode(\"host2\", 7001),\n new RedisNode(\"host3\", 7002)))); // Add all your cluster nodes\n return new LettuceConnectionFactory(clusterConfig);\n }\n\n @Bean\n public RedisTemplate\u003cString, Object\u003e redisTemplate(RedisConnectionFactory connectionFactory) {\n RedisTemplate\u003cString, Object\u003e template = new RedisTemplate\u003c\u003e();\n template.setConnectionFactory(connectionFactory);\n // Configure any serialization/deserialization strategies as needed\n return template;\n }\n }\n ```\n\n2. **Usage:** Use the `RedisTemplate` to interact with the Redis cluster.\n\n ```java\n @Service\n public class RedisService {\n\n @Autowired\n private RedisTemplate\u003cString, Object\u003e redisTemplate;\n\n public void setValue(String key, Object value) {\n redisTemplate.opsForValue().set(key, value);\n }\n\n public Object getValue(String key) {\n return redisTemplate.opsForValue().get(key);\n }\n }\n ```\n\n**Key Considerations:**\n\n* **Connection Pooling:** Configure connection pooling for better resource management.\n* **Serialization:** Choose an appropriate serialization strategy (e.g., Jackson2JsonRedisSerializer) for complex objects.\n* **Error Handling:** Implement proper error handling for connection failures or other Redis-related issues.\n\nIf you are ready to proceed with implementation and need further assistance, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: which is preferable lettuce or spring-boot-starter-data-redis or any other?\n**assistant**: When comparing `Lettuce` and `spring-boot-starter-data-redis`, both serve the purpose of integrating Redis with Java applications, but they have distinct roles and features. 
Here\u2019s a detailed comparison based on their capabilities:\n\nnull{\"graph_type\": \"comparison table\", \"x_label\": \"Features\", \"y_label\": \"Lettuce vs Spring Boot Starter Data Redis\", \"json_payload\": [{\"name\": \"Lettuce\", \"info\": [{\"feature\": \"Asynchronous API Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Thread Safety\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Reactive Programming Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Connection Pooling\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Low-Level API Access\", \"value\": true, \"subFeatures\": []}]}, {\"name\": \"Spring Boot Starter Data Redis\", \"info\": [{\"feature\": \"Asynchronous API Support\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Thread Safety\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Reactive Programming Support\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Connection Pooling\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Low-Level API Access\", \"value\": false, \"subFeatures\": []}]}]}\n\n### Insights\n\n1. **Asynchronous API Support**: Lettuce provides asynchronous capabilities, making it suitable for applications that require non-blocking operations. In contrast, the Spring Boot Starter does not offer this feature, which may limit its use in highly concurrent applications.\n\n2. **Reactive Programming Support**: Lettuce supports reactive programming, which is beneficial for modern applications that leverage reactive streams. The Spring Boot Starter does not support this paradigm, making it less flexible for reactive applications.\n\n3. **Connection Pooling**: Lettuce supports connection pooling, which is essential for managing multiple connections efficiently. The Spring Boot Starter does not provide this feature, which may lead to performance bottlenecks in high-load scenarios.\n\n4. **Low-Level API Access**: Lettuce offers low-level API access, allowing developers to interact directly with Redis commands. The Spring Boot Starter abstracts these details, which can simplify development but may limit advanced use cases.\n\n### Business Implications\n\nChoosing between `Lettuce` and `spring-boot-starter-data-redis` depends on your application\u0027s specific requirements. If you need advanced features like asynchronous support and reactive programming, `Lettuce` is the preferable choice. However, if you are looking for a simplified integration with Spring and do not require those advanced features, `spring-boot-starter-data-redis` may suffice.\n\nIf you\u0027re interested in exploring how these libraries can be integrated into your tech stack or require a demo, please feel free to reach out for further assistance.\n**user**: which configuration need to pass in application to connect to redis shard having two nodes\n**assistant**: To connect to a Redis shard with two nodes, you\u0027ll need to configure your Redis client library to connect to both nodes. Here\u0027s how you can do it using `Lettuce`, the recommended client for Spring Boot applications, along with the necessary configuration details for your application:\n\n| Parameter | Description | Example |\n|---|---|---|\n| `spring.redis.cluster.nodes` | Comma-separated list of Redis node addresses. 
| `host1:port1,host2:port2` |\n\n\n**Example Configuration in `application.properties`:**\n\n```\nspring.redis.cluster.nodes=redis-node-1:6379,redis-node-2:6380 \n```\n\n**Explanation:**\n\n* This configuration tells Spring Data Redis to connect to a Redis cluster.\n* `redis-node-1:6379` and `redis-node-2:6380` are the addresses of your two Redis shard nodes. Replace these with your actual hostnames/IP addresses and ports.\n\n\n**Additional Considerations:**\n\n* **Password:** If your Redis instances require authentication, use the `spring.redis.password` property.\n* **SSL:** For secure connections, set `spring.redis.ssl.enabled` to `true`. You may also need to configure truststore and keystore settings.\n* **Connection Pool:** Configure a connection pool for improved performance and resource management. Lettuce handles this automatically to some extent, but you can fine-tune it if needed.\n\n\n\nIf you\u0027re ready to purchase and implement a Redis solution, please contact our sales representative at sales@wyzard.in or +91-8826746733 for personalized assistance.\n\n**user**: what is Redis Sentinel\n**assistant**: Redis Sentinel is a robust system designed for managing and monitoring Redis instances. It primarily focuses on high availability and automatic failover. Here\u0027s a breakdown of its core functionalities:\n\n| Feature | Description |\n|---|---|\n| **Monitoring** | Continuously checks the health and responsiveness of your master and slave Redis instances. |\n| **Automatic Failover** | If a master instance fails, Sentinel automatically promotes a suitable slave to become the new master, ensuring minimal downtime. |\n| **Notification** | Alerts system administrators or other applications about changes in the master instance, such as failovers or configuration changes. |\n| **Configuration Provider** | Acts as a source of information for clients connecting to the Redis cluster. Clients can query Sentinel to discover the current master instance address. |\n\n\nEssentially, Redis Sentinel adds an extra layer of resilience and management to your Redis deployment, ensuring your application remains available even in the face of failures. It simplifies the process of setting up and maintaining a highly available Redis system.\n\n**user**: how to setup redis sentinel?/\n**assistant**: To set up Redis Sentinel for high availability, you\u0027ll need at least three Sentinel instances and a Redis master-slave setup. Here\u0027s a simplified guide:\n\n1. **Configure Redis Instances:**\n\n * Configure your master and slave Redis instances. Ensure they are running and replicating correctly.\n\n2. **Configure Sentinel Instances:**\n\n * Create a `sentinel.conf` file for each Sentinel instance.\n * Key configuration lines in `sentinel.conf`:\n\n | Parameter | Description | Example |\n |---|---|---|\n | `sentinel monitor \u003cmaster-name\u003e \u003cmaster-ip\u003e \u003cmaster-port\u003e \u003cquorum\u003e` | Tells Sentinel to monitor a master. | `sentinel monitor mymaster 192.168.1.100 6379 2` |\n | `sentinel down-after-milliseconds \u003cmaster-name\u003e \u003cmilliseconds\u003e` | Time before a master is considered down. | `sentinel down-after-milliseconds mymaster 5000` |\n | `sentinel failover-timeout \u003cmaster-name\u003e \u003cmilliseconds\u003e` | Maximum time to wait for a failover. | `sentinel failover-timeout mymaster 10000` |\n | `sentinel parallel-syncs \u003cmaster-name\u003e \u003cnumber\u003e` | Number of slaves to reconfigure at once after a failover. 
| `sentinel parallel-syncs mymaster 1` |\n\n3. **Start Sentinel Instances:**\n\n * Run `redis-sentinel /path/to/sentinel.conf` for each instance.\n\n4. **Connect Clients via Sentinel:**\n\n * Clients should connect to Sentinel to discover the current master address. Most Redis client libraries support Sentinel connections.\n\n\n**Key Considerations:**\n\n* **Quorum:** The quorum parameter in `sentinel monitor` determines how many Sentinels must agree that a master is down before a failover is initiated.\n* **Configuration:** Adjust the `down-after-milliseconds` and `failover-timeout` parameters based on your environment and application requirements.\n* **Network Connectivity:** Ensure all Sentinels can communicate with each other and with the Redis instances.\n\n\n\nIf you\u0027d like to discuss implementing Redis Sentinel or explore other high-availability solutions, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: why you need at least three Sentinel instances ?\n**assistant**: For robust Redis Sentinel deployments, three or more instances are crucial for several reasons:\n\n| Reason | Description |\n|---|---|\n| **Quorum-Based Decisions** | Sentinel operates using a quorum mechanism. Having multiple instances ensures that decisions about failovers and other critical operations are made reliably, even if one Sentinel instance goes down. |\n| **Avoiding Single Points of Failure** | Distributing Sentinel across multiple instances eliminates single points of failure. If only one Sentinel were running and it failed, you would lose the monitoring and failover capabilities. |\n| **Accurate Failure Detection** | Multiple Sentinels independently monitor the Redis master. This helps avoid false positives where a temporary network issue might be misinterpreted as a master failure. The quorum ensures that only genuine failures trigger a failover. |\n\n\nIf you have further questions or would like to discuss implementing a highly available Redis setup with Sentinel, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: explain quorum mechanism\n**assistant**: The quorum mechanism is a fault-tolerance technique used in distributed systems to ensure consistency and availability. It works by requiring a majority of nodes to agree on a decision before it\u0027s considered valid.\n\nHere\u0027s how it typically works:\n\n| Step | Description |\n|---|---|\n| **Proposal** | A node proposes a value or action. |\n| **Voting** | The proposal is sent to all participating nodes for a vote. |\n| **Quorum Reached** | If a majority of nodes vote in favor of the proposal, it\u0027s accepted. |\n| **Action Taken** | The agreed-upon action is executed. |\n\n\n**Benefits of Quorum:**\n\n* **Fault Tolerance:** Even if some nodes fail, the system can continue to operate as long as a quorum of nodes is still functioning.\n* **Consistency:** Prevents conflicting decisions by ensuring that only one proposal can be accepted at a time.\n* **Availability:** Increases the overall availability of the system by allowing it to tolerate node failures.\n\n\n**Example in Redis Sentinel:**\n\nRedis Sentinel uses a quorum mechanism to determine when a Redis master has failed. If a majority of Sentinel instances agree that the master is unavailable, they elect a new master from the available slaves. 
This ensures that the failover process is reliable and consistent, even if some Sentinel instances are down.\n\nIf you\u0027d like to discuss implementing a system with quorum or explore related technologies, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n"
},
"id": "34d5d544-9d66-4937-a157-493f919259ce",
"metadata": {
"conv_id": "8f8984fa-8c35-4093-83df-0366a3540a49",
"turn": 0
},
"responses": {
"accuracy_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 9
}
],
"actionable_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 9
}
],
"clarity_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 9
}
],
"completeness_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 9
}
],
"engagement_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 9
}
],
"feedback": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": "In comparison table , Only one column is present. for eg while asking about difference between redis and memcache, only redis column is displayed with feature applicable or not"
}
],
"relevance_score": [
{
"user_id": "dad9bc3b-952a-4c41-b3ec-f2a990320d08",
"value": 8
}
]
},
"status": "completed",
"suggestions": {},
"vectors": {}
}
```
While the same record in HuggingFace `datasets` looks as follows:
```json
{
"_server_id": "56540022-7892-43f0-8d3f-214638ef15b1",
"accuracy_score.responses": [
9
],
"accuracy_score.responses.status": [
"submitted"
],
"accuracy_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"actionable_score.responses": [
9
],
"actionable_score.responses.status": [
"submitted"
],
"actionable_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"clarity_score.responses": [
9
],
"clarity_score.responses.status": [
"submitted"
],
"clarity_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"completeness_score.responses": [
9
],
"completeness_score.responses.status": [
"submitted"
],
"completeness_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"conv_id": "8f8984fa-8c35-4093-83df-0366a3540a49",
"engagement_score.responses": [
9
],
"engagement_score.responses.status": [
"submitted"
],
"engagement_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"feedback.responses": [
"In comparison table , Only one column is present. for eg while asking about difference between redis and memcache, only redis column is displayed with feature applicable or not"
],
"feedback.responses.status": [
"submitted"
],
"feedback.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"id": "34d5d544-9d66-4937-a157-493f919259ce",
"instruction": "**user**: HI, Let me know good in memory cache libraries\n**assistant**: Several factors influence the best in-memory caching library for your needs. To help me narrow down the most suitable options, could you please tell me more about:\n\n* **What programming language are you using?** (e.g., Java, Python, .NET)\n* **What is the scale of your application?** (e.g., small project, large enterprise system)\n* **What are your specific performance requirements?** (e.g., throughput, latency)\n* **What kind of data will you be caching?** (e.g., simple objects, complex data structures)\n\n\nHere are a few popular in-memory caching options to consider:\n\n| Library/Tool | Language | Description |\n|---|---|---|\n| Guava Cache | Java | Powerful and flexible caching offered by Google\u0027s Guava library. |\n| Redis | Cross-Language | In-memory data structure store, used as a database, cache, and message broker. |\n| Memcached | Cross-Language | General-purpose distributed memory caching system. |\n| Ehcache | Java | Widely used open-source caching library. |\n\n\nProviding more context about your needs will allow me to provide more targeted recommendations.\n\n**user**: Language - java, medium size project, requirement is throughput, caching object can be complex\n**assistant**: Given your requirements for a Java-based, medium-sized project with a focus on throughput and complex object caching, Guava Cache and Ehcache are strong contenders.\n\n| Library | Description | Considerations |\n|---|---|---|\n| Guava Cache | Part of Google\u0027s Guava library, offering a simple, thread-safe, and high-performance solution. Excellent for local, in-memory caching. | Suitable for single-node applications. Consider Caffeine, a Java 8 rewrite of Guava Cache, for even better performance. |\n| Ehcache | Open-source, standards-based cache known for boosting performance and simplifying scalability. Supports memory and disk stores. | Mature and widely used, offering various features like listeners and cache loaders. May be slightly more complex to set up than Guava Cache. |\n\n\n* **Are there any specific functionalities you require, such as eviction policies or data persistence?**\n* **Do you anticipate scaling to a distributed environment in the future?** (This would influence the choice between local caches like Guava/Ehcache and distributed ones like Redis/Memcached.)\n\nKnowing these details will help me pinpoint the ideal solution for your project.\n\n**feedback**: I beleive this answer can be given by development specialist as well, as its related to development only. Although answer seems to be correct.\n**user**: I need to have application running on multiple kubernetes pods.\n**assistant**: Given your application runs on multiple Kubernetes pods, a distributed cache is necessary. Local in-memory caches like Guava or Ehcache won\u0027t be effective across different pods.\n\nHere are a few distributed caching options suitable for Kubernetes:\n\n| Tool | Description | Considerations |\n|---|---|---|\n| Redis | Highly popular in-memory data structure store. Used as a database, cache, and message broker. Excellent performance and scalability. | Requires setting up a Redis cluster within your Kubernetes environment. |\n| Memcached | General-purpose distributed memory caching system. Known for its simplicity and speed. | Simpler to set up than Redis but offers fewer features. 
|\n\n\n* **What type of data are you caching?** (Simple strings, complex objects, etc.)\n* **What are your performance requirements?** (Throughput, latency, etc.)\n* **Do you require data persistence?** (Redis offers persistence options, while Memcached is purely in-memory.)\n\nAnswering these questions will help me provide the most tailored solution. If you\u0027re ready to discuss implementation, you can contact our sales representative, Eshu, at eshu@wyzard.in or +91-9284730045.\n\n**feedback**: In comparison table , both redis and memcached should be shown, here only redis is shown \n**user**: Can you please give me difference between redis and memcached in more detail\n**assistant**: When comparing Redis and Memcached, both are powerful in-memory caching solutions, but they serve different use cases and have distinct features. Here\u2019s a detailed comparison based on their capabilities:\n\nnull{\"graph_type\": \"comparison table\", \"x_label\": \"Features\", \"y_label\": \"Redis vs Memcached\", \"json_payload\": [{\"name\": \"Redis\", \"info\": [{\"feature\": \"Data Structure Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Persistence Options\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Replication Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Data Expiration\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Atomic Operations\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Pub/Sub Messaging\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Complex Data Types\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Multi-threaded\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Memory Management\", \"value\": true, \"subFeatures\": []}]}]}\n\n### Insights\n\n1. **Data Structure Support**: Redis supports various data structures (strings, hashes, lists, sets, etc.), making it versatile for different use cases. In contrast, Memcached primarily supports simple key-value pairs.\n\n2. **Persistence Options**: Redis offers options for data persistence, allowing you to save data to disk, which is not available in Memcached. This makes Redis suitable for applications that require data durability.\n\n3. **Replication and Clustering**: Redis supports replication and clustering, enhancing its scalability and availability. Memcached lacks these features, making it less suitable for distributed architectures.\n\n4. **Atomic Operations**: Redis supports atomic operations on its data structures, while Memcached does not, which can be critical for certain applications requiring consistent state.\n\n5. **Memory Management**: Redis has advanced memory management features, including the ability to handle large datasets efficiently, whereas Memcached does not provide such capabilities.\n\n### Business Implications\n\nChoosing between Redis and Memcached depends on your application\u0027s specific requirements. If you need advanced data structures, persistence, and replication, Redis is the better choice. 
For simpler caching needs with high-speed access and less complexity, Memcached may suffice.\n\nIf you\u0027re interested in exploring how these caching solutions can be integrated into your tech stack or require a demo, please feel free to reach out for further assistance.\n**feedback**: Needed more elaboration on how to pass configs in spring boot application\n**user**: how to configure redis-cluster in java application using spring boot\n**assistant**: For configuring a Redis cluster in your Java Spring Boot application, you\u0027ll need to use the `Lettuce` or `Jedis` client library along with Spring Data Redis.\n\nHere\u0027s a breakdown using `Lettuce`, which is generally recommended for its thread-safety and performance:\n\n| Dependency | Purpose |\n|---|---|\n| `spring-boot-starter-data-redis` | Provides Spring Data Redis integration. |\n| `io.lettuce:lettuce-core` | The Lettuce Redis client library. |\n\n**Configuration:**\n\n1. **Cluster Configuration:** Configure the `LettuceConnectionFactory` to connect to your Redis cluster nodes. You\u0027ll need to provide the host and port of each node.\n\n ```java\n @Configuration\n public class RedisConfig {\n\n @Bean\n public LettuceConnectionFactory redisConnectionFactory() {\n RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();\n clusterConfig.clusterNodeProvider(new StaticClusterNodeProvider(\n Arrays.asList(new RedisNode(\"host1\", 7000), \n new RedisNode(\"host2\", 7001),\n new RedisNode(\"host3\", 7002)))); // Add all your cluster nodes\n return new LettuceConnectionFactory(clusterConfig);\n }\n\n @Bean\n public RedisTemplate\u003cString, Object\u003e redisTemplate(RedisConnectionFactory connectionFactory) {\n RedisTemplate\u003cString, Object\u003e template = new RedisTemplate\u003c\u003e();\n template.setConnectionFactory(connectionFactory);\n // Configure any serialization/deserialization strategies as needed\n return template;\n }\n }\n ```\n\n2. **Usage:** Use the `RedisTemplate` to interact with the Redis cluster.\n\n ```java\n @Service\n public class RedisService {\n\n @Autowired\n private RedisTemplate\u003cString, Object\u003e redisTemplate;\n\n public void setValue(String key, Object value) {\n redisTemplate.opsForValue().set(key, value);\n }\n\n public Object getValue(String key) {\n return redisTemplate.opsForValue().get(key);\n }\n }\n ```\n\n**Key Considerations:**\n\n* **Connection Pooling:** Configure connection pooling for better resource management.\n* **Serialization:** Choose an appropriate serialization strategy (e.g., Jackson2JsonRedisSerializer) for complex objects.\n* **Error Handling:** Implement proper error handling for connection failures or other Redis-related issues.\n\nIf you are ready to proceed with implementation and need further assistance, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: which is preferable lettuce or spring-boot-starter-data-redis or any other?\n**assistant**: When comparing `Lettuce` and `spring-boot-starter-data-redis`, both serve the purpose of integrating Redis with Java applications, but they have distinct roles and features. 
Here\u2019s a detailed comparison based on their capabilities:\n\nnull{\"graph_type\": \"comparison table\", \"x_label\": \"Features\", \"y_label\": \"Lettuce vs Spring Boot Starter Data Redis\", \"json_payload\": [{\"name\": \"Lettuce\", \"info\": [{\"feature\": \"Asynchronous API Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Thread Safety\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Reactive Programming Support\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Connection Pooling\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Low-Level API Access\", \"value\": true, \"subFeatures\": []}]}, {\"name\": \"Spring Boot Starter Data Redis\", \"info\": [{\"feature\": \"Asynchronous API Support\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Thread Safety\", \"value\": true, \"subFeatures\": []}, {\"feature\": \"Reactive Programming Support\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Connection Pooling\", \"value\": false, \"subFeatures\": []}, {\"feature\": \"Low-Level API Access\", \"value\": false, \"subFeatures\": []}]}]}\n\n### Insights\n\n1. **Asynchronous API Support**: Lettuce provides asynchronous capabilities, making it suitable for applications that require non-blocking operations. In contrast, the Spring Boot Starter does not offer this feature, which may limit its use in highly concurrent applications.\n\n2. **Reactive Programming Support**: Lettuce supports reactive programming, which is beneficial for modern applications that leverage reactive streams. The Spring Boot Starter does not support this paradigm, making it less flexible for reactive applications.\n\n3. **Connection Pooling**: Lettuce supports connection pooling, which is essential for managing multiple connections efficiently. The Spring Boot Starter does not provide this feature, which may lead to performance bottlenecks in high-load scenarios.\n\n4. **Low-Level API Access**: Lettuce offers low-level API access, allowing developers to interact directly with Redis commands. The Spring Boot Starter abstracts these details, which can simplify development but may limit advanced use cases.\n\n### Business Implications\n\nChoosing between `Lettuce` and `spring-boot-starter-data-redis` depends on your application\u0027s specific requirements. If you need advanced features like asynchronous support and reactive programming, `Lettuce` is the preferable choice. However, if you are looking for a simplified integration with Spring and do not require those advanced features, `spring-boot-starter-data-redis` may suffice.\n\nIf you\u0027re interested in exploring how these libraries can be integrated into your tech stack or require a demo, please feel free to reach out for further assistance.\n**user**: which configuration need to pass in application to connect to redis shard having two nodes\n**assistant**: To connect to a Redis shard with two nodes, you\u0027ll need to configure your Redis client library to connect to both nodes. Here\u0027s how you can do it using `Lettuce`, the recommended client for Spring Boot applications, along with the necessary configuration details for your application:\n\n| Parameter | Description | Example |\n|---|---|---|\n| `spring.redis.cluster.nodes` | Comma-separated list of Redis node addresses. 
| `host1:port1,host2:port2` |\n\n\n**Example Configuration in `application.properties`:**\n\n```\nspring.redis.cluster.nodes=redis-node-1:6379,redis-node-2:6380 \n```\n\n**Explanation:**\n\n* This configuration tells Spring Data Redis to connect to a Redis cluster.\n* `redis-node-1:6379` and `redis-node-2:6380` are the addresses of your two Redis shard nodes. Replace these with your actual hostnames/IP addresses and ports.\n\n\n**Additional Considerations:**\n\n* **Password:** If your Redis instances require authentication, use the `spring.redis.password` property.\n* **SSL:** For secure connections, set `spring.redis.ssl.enabled` to `true`. You may also need to configure truststore and keystore settings.\n* **Connection Pool:** Configure a connection pool for improved performance and resource management. Lettuce handles this automatically to some extent, but you can fine-tune it if needed.\n\n\n\nIf you\u0027re ready to purchase and implement a Redis solution, please contact our sales representative at sales@wyzard.in or +91-8826746733 for personalized assistance.\n\n**user**: what is Redis Sentinel\n**assistant**: Redis Sentinel is a robust system designed for managing and monitoring Redis instances. It primarily focuses on high availability and automatic failover. Here\u0027s a breakdown of its core functionalities:\n\n| Feature | Description |\n|---|---|\n| **Monitoring** | Continuously checks the health and responsiveness of your master and slave Redis instances. |\n| **Automatic Failover** | If a master instance fails, Sentinel automatically promotes a suitable slave to become the new master, ensuring minimal downtime. |\n| **Notification** | Alerts system administrators or other applications about changes in the master instance, such as failovers or configuration changes. |\n| **Configuration Provider** | Acts as a source of information for clients connecting to the Redis cluster. Clients can query Sentinel to discover the current master instance address. |\n\n\nEssentially, Redis Sentinel adds an extra layer of resilience and management to your Redis deployment, ensuring your application remains available even in the face of failures. It simplifies the process of setting up and maintaining a highly available Redis system.\n\n**user**: how to setup redis sentinel?/\n**assistant**: To set up Redis Sentinel for high availability, you\u0027ll need at least three Sentinel instances and a Redis master-slave setup. Here\u0027s a simplified guide:\n\n1. **Configure Redis Instances:**\n\n * Configure your master and slave Redis instances. Ensure they are running and replicating correctly.\n\n2. **Configure Sentinel Instances:**\n\n * Create a `sentinel.conf` file for each Sentinel instance.\n * Key configuration lines in `sentinel.conf`:\n\n | Parameter | Description | Example |\n |---|---|---|\n | `sentinel monitor \u003cmaster-name\u003e \u003cmaster-ip\u003e \u003cmaster-port\u003e \u003cquorum\u003e` | Tells Sentinel to monitor a master. | `sentinel monitor mymaster 192.168.1.100 6379 2` |\n | `sentinel down-after-milliseconds \u003cmaster-name\u003e \u003cmilliseconds\u003e` | Time before a master is considered down. | `sentinel down-after-milliseconds mymaster 5000` |\n | `sentinel failover-timeout \u003cmaster-name\u003e \u003cmilliseconds\u003e` | Maximum time to wait for a failover. | `sentinel failover-timeout mymaster 10000` |\n | `sentinel parallel-syncs \u003cmaster-name\u003e \u003cnumber\u003e` | Number of slaves to reconfigure at once after a failover. 
| `sentinel parallel-syncs mymaster 1` |\n\n3. **Start Sentinel Instances:**\n\n * Run `redis-sentinel /path/to/sentinel.conf` for each instance.\n\n4. **Connect Clients via Sentinel:**\n\n * Clients should connect to Sentinel to discover the current master address. Most Redis client libraries support Sentinel connections.\n\n\n**Key Considerations:**\n\n* **Quorum:** The quorum parameter in `sentinel monitor` determines how many Sentinels must agree that a master is down before a failover is initiated.\n* **Configuration:** Adjust the `down-after-milliseconds` and `failover-timeout` parameters based on your environment and application requirements.\n* **Network Connectivity:** Ensure all Sentinels can communicate with each other and with the Redis instances.\n\n\n\nIf you\u0027d like to discuss implementing Redis Sentinel or explore other high-availability solutions, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: why you need at least three Sentinel instances ?\n**assistant**: For robust Redis Sentinel deployments, three or more instances are crucial for several reasons:\n\n| Reason | Description |\n|---|---|\n| **Quorum-Based Decisions** | Sentinel operates using a quorum mechanism. Having multiple instances ensures that decisions about failovers and other critical operations are made reliably, even if one Sentinel instance goes down. |\n| **Avoiding Single Points of Failure** | Distributing Sentinel across multiple instances eliminates single points of failure. If only one Sentinel were running and it failed, you would lose the monitoring and failover capabilities. |\n| **Accurate Failure Detection** | Multiple Sentinels independently monitor the Redis master. This helps avoid false positives where a temporary network issue might be misinterpreted as a master failure. The quorum ensures that only genuine failures trigger a failover. |\n\n\nIf you have further questions or would like to discuss implementing a highly available Redis setup with Sentinel, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n\n**user**: explain quorum mechanism\n**assistant**: The quorum mechanism is a fault-tolerance technique used in distributed systems to ensure consistency and availability. It works by requiring a majority of nodes to agree on a decision before it\u0027s considered valid.\n\nHere\u0027s how it typically works:\n\n| Step | Description |\n|---|---|\n| **Proposal** | A node proposes a value or action. |\n| **Voting** | The proposal is sent to all participating nodes for a vote. |\n| **Quorum Reached** | If a majority of nodes vote in favor of the proposal, it\u0027s accepted. |\n| **Action Taken** | The agreed-upon action is executed. |\n\n\n**Benefits of Quorum:**\n\n* **Fault Tolerance:** Even if some nodes fail, the system can continue to operate as long as a quorum of nodes is still functioning.\n* **Consistency:** Prevents conflicting decisions by ensuring that only one proposal can be accepted at a time.\n* **Availability:** Increases the overall availability of the system by allowing it to tolerate node failures.\n\n\n**Example in Redis Sentinel:**\n\nRedis Sentinel uses a quorum mechanism to determine when a Redis master has failed. If a majority of Sentinel instances agree that the master is unavailable, they elect a new master from the available slaves. 
This ensures that the failover process is reliable and consistent, even if some Sentinel instances are down.\n\nIf you\u0027d like to discuss implementing a system with quorum or explore related technologies, please contact our sales representative at sales@wyzard.in or +91-8826746733.\n",
"relevance_score.responses": [
8
],
"relevance_score.responses.status": [
"submitted"
],
"relevance_score.responses.users": [
"dad9bc3b-952a-4c41-b3ec-f2a990320d08"
],
"status": "completed",
"turn": 0
}
```
### Data Splits
The dataset contains a single split, which is `train`.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation guidelines
Review the user interactions with the chatbot.
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |
Rudra-ai/ai-responses-gsm8k-405b | Rudra-ai | "2024-11-22T09:33:23Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:33:20Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: response
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 7624146
num_examples: 5000
download_size: 3582824
dataset_size: 7624146
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ruchirsahni/Vaani_Dharwad_tran_kan_audio | ruchirsahni | "2024-11-22T09:34:05Z" | 5 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-22T09:33:33Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: file_name
dtype: string
- name: file_url
dtype: string
- name: metadata
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 493765304.625
num_examples: 2059
download_size: 483854538
dataset_size: 493765304.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|