Were the 0.14T and 0.28T models trained with a cooldown stage?

#9
by zhuzilin - opened

Thank you for this great dataset!

In the benchmark table, the models trained on 0.14T and 0.28T tokens of DCLM-baseline reach 38.3 and 50.8 on MMLU, respectively. Could you share whether those models were trained on a fraction of the 4.1T dataset, or whether they also used two-stage training with filtered cooldown data? Thank you!
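For context, the "cooldown stage" referred to here usually means a final phase of pretraining in which the learning rate is decayed while training switches to a filtered, higher-quality data subset. Below is a minimal illustrative sketch of such a schedule, assuming a constant-then-linear-decay shape with hypothetical values; this is not the DCLM authors' actual recipe:

```python
# Hypothetical two-stage LR schedule: constant LR for the main stage,
# then linear decay to 0 over the final cooldown fraction of training.
# All values (peak LR, cooldown fraction) are placeholders.

def lr_at(step: int, total_steps: int, peak_lr: float = 3e-4,
          cooldown_frac: float = 0.1) -> float:
    """Return the learning rate at `step` under a constant + cooldown schedule."""
    cooldown_start = int(total_steps * (1.0 - cooldown_frac))
    if step < cooldown_start:
        return peak_lr  # main stage: constant LR on the full pretraining mix
    # Cooldown stage: linear decay to 0 (data would switch to a filtered subset here).
    remaining = (total_steps - step) / max(1, total_steps - cooldown_start)
    return peak_lr * remaining

if __name__ == "__main__":
    total = 100_000
    for s in (0, 89_999, 90_000, 95_000, 99_999):
        print(s, f"{lr_at(s, total):.2e}")
```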

zhuzilin changed discussion status to closed
