
Long Horizon Execution

This repository contains the dataset accompanying the paper "The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs".

Abstract

Does continued scaling of large language models (LLMs) yield diminishing returns? Real-world value often stems from the length of task an agent can complete. We start this work by observing the simple but counterintuitive fact that marginal gains in single-step accuracy can compound into exponential improvements in the length of a task a model can successfully complete. Then, we argue that failures of LLMs when simple tasks are made longer arise from mistakes in execution, rather than an inability to reason. We propose isolating execution capability, by explicitly providing the knowledge and plan needed to solve a long-horizon task. We find that larger models can correctly execute significantly more turns even when small models have 100% single-turn accuracy. We observe that the per-step accuracy of models degrades as the number of steps increases. This is not just due to long-context limitations -- curiously, we observe a self-conditioning effect -- models become more likely to make mistakes when the context contains their errors from prior turns. Self-conditioning does not reduce by just scaling the model size. In contrast, recent thinking models do not self-condition, and can also execute much longer tasks in a single turn. We conclude by benchmarking frontier thinking models on the length of task they can execute in a single turn. Overall, by focusing on the ability to execute, we hope to reconcile debates on how LLMs can solve complex reasoning problems yet fail at simple tasks when made longer, and highlight the massive benefits of scaling model size and sequential test-time compute for long-horizon tasks.

GitHub: https://github.com/long-horizon-execution/measuring-execution/

Description

Figure: Our task.

This dataset is a synthetic benchmark designed to measure the pure execution capability of LLMs over long horizons. The core task is key-value dictionary addition. A fixed, in-context dictionary mapping five-letter English words (keys) to integer values is provided in dictionary.json. The model's goal is to maintain a running sum. In each turn, it receives one or more keys (defined by the turn complexity, K), retrieves their corresponding values from the dictionary, adds them to the running sum, and outputs the new sum. The primary metric for evaluation is the task length: the number of steps a model can execute before its accuracy drops below a certain threshold.
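
To make the mechanics concrete, the following is a minimal sketch of an ideal executor at turn complexity K=1. The dictionary entries shown are illustrative; the full mapping lives in dictionary.json.

# Illustrative entries only; the real mapping is provided in dictionary.json.
dictionary = {"alarm": 88, "coach": -31, "doubt": -64}

keys = ["alarm", "coach", "doubt"]  # one key per turn (K=1)

running_sum = 0
expected_outputs = []
for key in keys:
    running_sum += dictionary[key]        # retrieve the value and add it to the running sum
    expected_outputs.append(running_sum)  # the model must output the new sum each turn

print(expected_outputs)  # [88, 57, -7]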

The dataset is designed to be programmatically generated and thus contamination-free. We only provide 100 samples here for ease of access, but more can be generated using the generation script in the GitHub repository linked above.

Using the dataset

test.jsonl contains the individual samples that can be used to prompt the LLM.

  • "input": contains the keys to be processed.
  • "values": contains the values mapped to the corresponding keys as described in dictionary.json.
  • "output": contains the expected running sum answers.

The provided dataset is configured with a turn complexity of K=1 (one key per turn). To evaluate models at a higher turn complexity K=N, you can post-process the data by grouping every N consecutive turns (see the sketch after this list):

  • "input": Concatenate every N items into a single comma-separated string.
  • "output": The new running sum for the grouped turn is simply the last running sum from the original group of N turns.

Sample Usage

To load and inspect the test.jsonl dataset:

import json

file_path = "test.jsonl"
samples = []
with open(file_path, 'r', encoding='utf-8') as f:
    for line in f:
        samples.append(json.loads(line))

print(f"Loaded {len(samples)} samples.")
print("First sample:")
print(samples[0])
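
Alternatively, a minimal sketch using the Hugging Face datasets library to load the same local file (this assumes the datasets package is installed):

from datasets import load_dataset

# The generic "json" loader treats each line of the JSONL file as one example.
ds = load_dataset("json", data_files="test.jsonl", split="train")
print(ds)
print(ds[0]["input"][:5], ds[0]["output"][:5])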

Benchmark

Figure: Benchmark of frontier models.

Citation

Find out more about our work at https://arxiv.org/abs/2509.09677. If you use our dataset, please consider citing us:

@misc{sinha2025illusiondiminishingreturnsmeasuring,
      title={The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs},
      author={Akshit Sinha and Arvindh Arun and Shashwat Goel and Steffen Staab and Jonas Geiping},
      year={2025},
      eprint={2509.09677},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2509.09677},
}