Cascade-2's cheating attempts on Math are... cute

#25
by chankhavu - opened

I was running inference with this model on a subset of harder problems from Nemotron-Math-V2, hoping to build a decent dataset for fun. I noticed that the model created a bunch of unrelated files in the sandbox folder (which, foolishly, is not network-isolated in my setup):
[Screenshot: sandbox folder listing with the downloaded files]

It downloaded IMO2021SL.pdf, imo2022sl.pdf, and so on. I opened those files, and they were legitimate PDFs, which means the model tried to cheat by downloading them 😄 Here are the statistics of network-access attempts (out of 10K reasoning traces):

| Domain | Traces |
|---|---:|
| artofproblemsolving.com | 64 |
| en.wikipedia.org | 58 |
| www.google.com | 40 |
| duckduckgo.com | 39 |
| oeis.org | 28 |
| math.stackexchange.com | 13 |
| api.stackexchange.com | 12 |
| www.imo-official.org | 11 |
| html.duckduckgo.com | 11 |
| raw.githubusercontent.com | 8 |
| api.duckduckgo.com | 6 |
| mathworld.wolfram.com | 6 |
| purplecomet.org | 5 |
| api.github.com | 3 |
| arxiv.org | 3 |
| www.bing.com | 3 |
| stackoverflow.com | 3 |
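For anyone who wants to reproduce a tally like the one above, here is a minimal sketch of how per-domain counts could be extracted from raw traces. It assumes each trace is available as plain text containing literal URLs; the function name and regex are my own, not part of any released tooling:

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Rough URL matcher; good enough for counting, not for full RFC 3986 parsing.
URL_RE = re.compile(r"https?://[^\s\"'<>)\]]+")

def count_domains(traces):
    """Tally network-access attempts per domain across reasoning traces."""
    counts = Counter()
    for trace in traces:
        for url in URL_RE.findall(trace):
            host = urlparse(url).netloc
            if host:
                counts[host] += 1
    return counts
```

Feeding the 10K traces through `count_domains` and printing `counts.most_common()` would produce a table in the same shape as the one above.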

The cutest part is that in most of these cheating attempts, it still fails to produce the correct answer. An in-depth analysis, together with the exported reasoning traces including tool use, is here: https://huggingface.co/datasets/chankhavu/nemotron-cascade2-cheating-attempts (the analysis was done by Claude Code)
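If you want a quick guard while running such models against a code sandbox, here is a minimal in-process sketch that blocks outbound connections. The `NoNetwork` class is hypothetical (my own naming), and this only patches Python's socket layer inside one process; a real sandbox should isolate at the OS level, e.g. a separate network namespace (`unshare -rn`) or a container with networking disabled:

```python
import socket

class NoNetwork:
    """Context manager that blocks outbound socket connections.

    Illustrative only: this is an in-process monkeypatch, trivially
    bypassable by untrusted code. Use OS-level isolation for real sandboxes.
    """
    def __enter__(self):
        self._orig = socket.socket.connect
        def deny(sock, addr):
            raise OSError(f"network access blocked: {addr}")
        socket.socket.connect = deny
        return self

    def __exit__(self, *exc):
        socket.socket.connect = self._orig
        return False
```

With this in place, any `socket.create_connection(...)` attempted inside the `with NoNetwork():` block raises `OSError` instead of reaching the network.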

Thanks for the analysis. We did not use tool calls for IMO/IOI evaluation.

NVIDIA org

@chankhavu
Thanks for providing the examples. We have observed similar behaviors before. We believe this stems from our SFT training data, which was generated by DeepSeek-V3.2 and GPT-OSS with tool calls and includes trajectories with network-access attempts. Most of these are hallucinations and fail to produce correct answers.

Just to emphasize: for benchmark evaluations, including IMO 2025 and IOI 2025, we disable tool calls and do not allow the model to access the network, ruling out any possibility of cheating.
