---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
language:
- en
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- eo
- es
- et
- eu
- fa
- ff
- fi
- fr
- fy
- ga
- gd
- gl
- gn
- gu
- ha
- he
- hi
- hr
- ht
- hu
- hy
- id
- ig
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lg
- li
- ln
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- ns
- om
- or
- pa
- pl
- ps
- pt
- qu
- rm
- ro
- ru
- sa
- si
- sc
- sd
- sk
- sl
- so
- sq
- sr
- ss
- su
- sv
- sw
- ta
- te
- th
- tl
- tn
- tr
- ug
- uk
- ur
- uz
- vi
- wo
- xh
- yi
- yo
- zu
datasets:
- bigcode/programming-languages-keywords
- bigcode/the-stack-smol-xs
- nampdn-ai/tiny-textbooks
- xu-song/cc100-samples
- m-a-p/CodeFeedback-Filtered-Instruction
- nampdn-ai/tiny-codes
- ajibawa-2023/Maths-College
- microsoft/orca-math-word-problems-200k
- mlabonne/FineTome-100k
- arcee-ai/agent-data
- cognitivecomputations/SystemChat-2.0
- badrex/llm-emoji-dataset
tags:
- litgpt
- litdata
---
# tangled-llama-109m-32k-base-v0.1
A pretrained language model based on the Llama architecture, with about 109M parameters. It was trained on 9.7B (9,782,206,713) tokens drawn from more than 5.2M (5,285,575) dataset rows.
This model isn't designed for immediate use, but rather for continued pretraining and finetuning on a downstream task. While it can handle a context length of up to 32K (32,768) tokens, it was pretrained with sequences of 2K (2,048) tokens.
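If the checkpoint has been exported for `transformers` (as the metadata above suggests), it can be loaded like any other causal LM. A minimal sketch, assuming a hypothetical repo id and arbitrary sampling settings:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "tangled-llama-109m-32k-base-v0.1"  # placeholder: substitute the actual hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Pretraining used 2K-token sequences, so short prompts are the safest bet
# even though the config permits up to 32K tokens of context.
inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```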
The objective is to streamline the model down to a cognitive and reasoning core, eliminating redundant memorized knowledge.
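Since the intended workflow is finetuning rather than direct use, a typical next step is a LoRA finetune with litgpt on the pretraining output directory. A sketch, assuming a hypothetical JSON instruction dataset; the paths and hyperparameters are illustrative, not tested settings:

```bash
litgpt finetune_lora out/pretrain/final/ \
  --data JSON \
  --data.json_path my_instruction_data.json \
  --train.max_seq_length 2048 \
  --out_dir out/finetune/lora/
```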
## lm-evaluation-harness
```bash
litgpt evaluate --tasks 'leaderboard' --out_dir 'evaluate-0/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---:|---|---:|---|---|---:|---|---:|
| leaderboard | N/A | | | | | | | |
| - leaderboard_bbh | N/A | | | | | | | |
| - leaderboard_bbh_boolean_expressions | 1 | none | 3 | acc_norm | ↑ | 0.5680 | ± | 0.0314 |
| - leaderboard_bbh_causal_judgement | 1 | none | 3 | acc_norm | ↑ | 0.5294 | ± | 0.0366 |
| - leaderboard_bbh_date_understanding | 1 | none | 3 | acc_norm | ↑ | 0.1880 | ± | 0.0248 |
| - leaderboard_bbh_disambiguation_qa | 1 | none | 3 | acc_norm | ↑ | 0.3240 | ± | 0.0297 |
| - leaderboard_bbh_formal_fallacies | 1 | none | 3 | acc_norm | ↑ | 0.4720 | ± | 0.0316 |
| - leaderboard_bbh_geometric_shapes | 1 | none | 3 | acc_norm | ↑ | 0.0280 | ± | 0.0105 |
| - leaderboard_bbh_hyperbaton | 1 | none | 3 | acc_norm | ↑ | 0.5160 | ± | 0.0317 |
| - leaderboard_bbh_logical_deduction_five_objects | 1 | none | 3 | acc_norm | ↑ | 0.1760 | ± | 0.0241 |
| - leaderboard_bbh_logical_deduction_seven_objects | 1 | none | 3 | acc_norm | ↑ | 0.1360 | ± | 0.0217 |
| - leaderboard_bbh_logical_deduction_three_objects | 1 | none | 3 | acc_norm | ↑ | 0.3480 | ± | 0.0302 |
| - leaderboard_bbh_movie_recommendation | 1 | none | 3 | acc_norm | ↑ | 0.2280 | ± | 0.0266 |
| - leaderboard_bbh_navigate | 1 | none | 3 | acc_norm | ↑ | 0.4800 | ± | 0.0317 |
| - leaderboard_bbh_object_counting | 1 | none | 3 | acc_norm | ↑ | 0.0640 | ± | 0.0155 |
| - leaderboard_bbh_penguins_in_a_table | 1 | none | 3 | acc_norm | ↑ | 0.2329 | ± | 0.0351 |
| - leaderboard_bbh_reasoning_about_colored_objects | 1 | none | 3 | acc_norm | ↑ | 0.1240 | ± | 0.0209 |
| - leaderboard_bbh_ruin_names | 1 | none | 3 | acc_norm | ↑ | 0.2400 | ± | 0.0271 |
| - leaderboard_bbh_salient_translation_error_detection | 1 | none | 3 | acc_norm | ↑ | 0.1560 | ± | 0.0230 |
| - leaderboard_bbh_snarks | 1 | none | 3 | acc_norm | ↑ | 0.4607 | ± | 0.0375 |
| - leaderboard_bbh_sports_understanding | 1 | none | 3 | acc_norm | ↑ | 0.4560 | ± | 0.0316 |
| - leaderboard_bbh_temporal_sequences | 1 | none | 3 | acc_norm | ↑ | 0.2320 | ± | 0.0268 |
| - leaderboard_bbh_tracking_shuffled_objects_five_objects | 1 | none | 3 | acc_norm | ↑ | 0.2000 | ± | 0.0253 |
| - leaderboard_bbh_tracking_shuffled_objects_seven_objects | 1 | none | 3 | acc_norm | ↑ | 0.1520 | ± | 0.0228 |
| - leaderboard_bbh_tracking_shuffled_objects_three_objects | 1 | none | 3 | acc_norm | ↑ | 0.3160 | ± | 0.0295 |
| - leaderboard_bbh_web_of_lies | 1 | none | 3 | acc_norm | ↑ | 0.5040 | ± | 0.0317 |
| - leaderboard_gpqa | N/A | | | | | | | |
| - leaderboard_gpqa_diamond | 1 | none | 0 | acc_norm | ↑ | 0.1919 | ± | 0.0281 |
| - leaderboard_gpqa_extended | 1 | none | 0 | acc_norm | ↑ | 0.2747 | ± | 0.0191 |
| - leaderboard_gpqa_main | 1 | none | 0 | acc_norm | ↑ | 0.2589 | ± | 0.0207 |
| - leaderboard_ifeval | 3 | none | 0 | inst_level_loose_acc | ↑ | 0.2002 | ± | N/A |
| | | none | 0 | inst_level_strict_acc | ↑ | 0.1871 | ± | N/A |
| | | none | 0 | prompt_level_loose_acc | ↑ | 0.1072 | ± | 0.0133 |
| | | none | 0 | prompt_level_strict_acc | ↑ | 0.0998 | ± | 0.0129 |
| - leaderboard_math_hard | N/A | | | | | | | |
| - leaderboard_math_algebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_counting_and_prob_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_geometry_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_intermediate_algebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_num_theory_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_prealgebra_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_math_precalculus_hard | 1 | none | 4 | exact_match | ↑ | 0.0000 | ± | 0 |
| - leaderboard_mmlu_pro | 0.1 | none | 5 | acc | ↑ | 0.1096 | ± | 0.0028 |
| - leaderboard_musr | N/A | | | | | | | |
| - leaderboard_musr_murder_mysteries | 1 | none | 0 | acc_norm | ↑ | 0.4800 | ± | 0.0317 |
| - leaderboard_musr_object_placements | 1 | none | 0 | acc_norm | ↑ | 0.2930 | ± | 0.0285 |
| - leaderboard_musr_team_allocation | 1 | none | 0 | acc_norm | ↑ | 0.3360 | ± | 0.0299 |
```bash
litgpt evaluate --tasks 'hellaswag,gsm8k,truthfulqa_mc2,mmlu,winogrande,arc_challenge' --out_dir 'evaluate-1/' --batch_size 4 --dtype 'bfloat16' out/pretrain/final/
```
| Tasks | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---:|---|---:|---|---|---:|---|---:|
| arc_challenge | 1 | none | 0 | acc | ↑ | 0.2082 | ± | 0.0119 |
| | | none | 0 | acc_norm | ↑ | 0.2474 | ± | 0.0126 |
| gsm8k | 3 | flexible-extract | 5 | exact_match | ↑ | 0.0106 | ± | 0.0028 |
| | | strict-match | 5 | exact_match | ↑ | 0.0008 | ± | 0.0008 |
| hellaswag | 1 | none | 0 | acc | ↑ | 0.2766 | ± | 0.0045 |
| | | none | 0 | acc_norm | ↑ | 0.2926 | ± | 0.0045 |
| mmlu | 2 | none | | acc | ↑ | 0.2349 | ± | 0.0036 |
| - humanities | 2 | none | | acc | ↑ | 0.2461 | ± | 0.0063 |
| - formal_logic | 1 | none | 0 | acc | ↑ | 0.2698 | ± | 0.0397 |
| - high_school_european_history | 1 | none | 0 | acc | ↑ | 0.2000 | ± | 0.0312 |
| - high_school_us_history | 1 | none | 0 | acc | ↑ | 0.2549 | ± | 0.0306 |
| - high_school_world_history | 1 | none | 0 | acc | ↑ | 0.2616 | ± | 0.0286 |
| - international_law | 1 | none | 0 | acc | ↑ | 0.2479 | ± | 0.0394 |
| - jurisprudence | 1 | none | 0 | acc | ↑ | 0.2593 | ± | 0.0424 |
| - logical_fallacies | 1 | none | 0 | acc | ↑ | 0.2638 | ± | 0.0346 |
| - moral_disputes | 1 | none | 0 | acc | ↑ | 0.2457 | ± | 0.0232 |
| - moral_scenarios | 1 | none | 0 | acc | ↑ | 0.2458 | ± | 0.0144 |
| - philosophy | 1 | none | 0 | acc | ↑ | 0.1833 | ± | 0.0220 |
| - prehistory | 1 | none | 0 | acc | ↑ | 0.2315 | ± | 0.0235 |
| - professional_law | 1 | none | 0 | acc | ↑ | 0.2503 | ± | 0.0111 |
| - world_religions | 1 | none | 0 | acc | ↑ | 0.3216 | ± | 0.0358 |
| - other | 2 | none | | acc | ↑ | 0.2391 | ± | 0.0076 |
| - business_ethics | 1 | none | 0 | acc | ↑ | 0.2900 | ± | 0.0456 |
| - clinical_knowledge | 1 | none | 0 | acc | ↑ | 0.2377 | ± | 0.0262 |
| - college_medicine | 1 | none | 0 | acc | ↑ | 0.2197 | ± | 0.0316 |
| - global_facts | 1 | none | 0 | acc | ↑ | 0.2100 | ± | 0.0409 |
| - human_aging | 1 | none | 0 | acc | ↑ | 0.2960 | ± | 0.0306 |
| - management | 1 | none | 0 | acc | ↑ | 0.1748 | ± | 0.0376 |
| - marketing | 1 | none | 0 | acc | ↑ | 0.2949 | ± | 0.0299 |
| - medical_genetics | 1 | none | 0 | acc | ↑ | 0.2700 | ± | 0.0446 |
| - miscellaneous | 1 | none | 0 | acc | ↑ | 0.2222 | ± | 0.0149 |
| - nutrition | 1 | none | 0 | acc | ↑ | 0.2092 | ± | 0.0233 |
| - professional_accounting | 1 | none | 0 | acc | ↑ | 0.2518 | ± | 0.0259 |
| - professional_medicine | 1 | none | 0 | acc | ↑ | 0.1949 | ± | 0.0241 |
| - virology | 1 | none | 0 | acc | ↑ | 0.3012 | ± | 0.0357 |
| - social sciences | 2 | none | | acc | ↑ | 0.2246 | ± | 0.0075 |
| - econometrics | 1 | none | 0 | acc | ↑ | 0.2807 | ± | 0.0423 |
| - high_school_geography | 1 | none | 0 | acc | ↑ | 0.1818 | ± | 0.0275 |
| - high_school_government_and_politics | 1 | none | 0 | acc | ↑ | 0.2176 | ± | 0.0298 |
| - high_school_macroeconomics | 1 | none | 0 | acc | ↑ | 0.2179 | ± | 0.0209 |
| - high_school_microeconomics | 1 | none | 0 | acc | ↑ | 0.2101 | ± | 0.0265 |
| - high_school_psychology | 1 | none | 0 | acc | ↑ | 0.2000 | ± | 0.0171 |
| - human_sexuality | 1 | none | 0 | acc | ↑ | 0.2519 | ± | 0.0381 |
| - professional_psychology | 1 | none | 0 | acc | ↑ | 0.2516 | ± | 0.0176 |
| - public_relations | 1 | none | 0 | acc | ↑ | 0.2182 | ± | 0.0396 |
| - security_studies | 1 | none | 0 | acc | ↑ | 0.1959 | ± | 0.0254 |
| - sociology | 1 | none | 0 | acc | ↑ | 0.2488 | ± | 0.0306 |
| - us_foreign_policy | 1 | none | 0 | acc | ↑ | 0.2800 | ± | 0.0451 |
| - stem | 2 | none | | acc | ↑ | 0.2239 | ± | 0.0074 |
| - abstract_algebra | 1 | none | 0 | acc | ↑ | 0.1800 | ± | 0.0386 |
| - anatomy | 1 | none | 0 | acc | ↑ | 0.1778 | ± | 0.0330 |
| - astronomy | 1 | none | 0 | acc | ↑ | 0.1974 | ± | 0.0324 |
| - college_biology | 1 | none | 0 | acc | ↑ | 0.2569 | ± | 0.0365 |
| - college_chemistry | 1 | none | 0 | acc | ↑ | 0.2400 | ± | 0.0429 |
| - college_computer_science | 1 | none | 0 | acc | ↑ | 0.2400 | ± | 0.0429 |
| - college_mathematics | 1 | none | 0 | acc | ↑ | 0.2400 | ± | 0.0429 |
| - college_physics | 1 | none | 0 | acc | ↑ | 0.2255 | ± | 0.0416 |
| - computer_security | 1 | none | 0 | acc | ↑ | 0.2700 | ± | 0.0446 |
| - conceptual_physics | 1 | none | 0 | acc | ↑ | 0.2468 | ± | 0.0282 |
| - electrical_engineering | 1 | none | 0 | acc | ↑ | 0.2552 | ± | 0.0363 |
| - elementary_mathematics | 1 | none | 0 | acc | ↑ | 0.2407 | ± | 0.0220 |
| - high_school_biology | 1 | none | 0 | acc | ↑ | 0.1710 | ± | 0.0214 |
| - high_school_chemistry | 1 | none | 0 | acc | ↑ | 0.1724 | ± | 0.0266 |
| - high_school_computer_science | 1 | none | 0 | acc | ↑ | 0.2600 | ± | 0.0441 |
| - high_school_mathematics | 1 | none | 0 | acc | ↑ | 0.2519 | ± | 0.0265 |
| - high_school_physics | 1 | none | 0 | acc | ↑ | 0.1457 | ± | 0.0288 |
| - high_school_statistics | 1 | none | 0 | acc | ↑ | 0.2083 | ± | 0.0277 |
| - machine_learning | 1 | none | 0 | acc | ↑ | 0.3571 | ± | 0.0455 |
| truthfulqa_mc2 | 2 | none | 0 | acc | ↑ | 0.4506 | ± | 0.0161 |
| winogrande | 1 | none | 0 | acc | ↑ | 0.5288 | ± | 0.0140 |
| Groups | Version | Filter | n-shot | Metric | | Value | | Stderr |
|---|---:|---|---:|---|---|---:|---|---:|
| mmlu | 2 | none | | acc | ↑ | 0.2349 | ± | 0.0036 |
| - humanities | 2 | none | | acc | ↑ | 0.2461 | ± | 0.0063 |
| - other | 2 | none | | acc | ↑ | 0.2391 | ± | 0.0076 |
| - social sciences | 2 | none | | acc | ↑ | 0.2246 | ± | 0.0075 |
| - stem | 2 | none | | acc | ↑ | 0.2239 | ± | 0.0074 |