This is the general version of TaH-plus-1.7B, trained on a mixture of math, code, and science data, as presented in the paper Think-at-Hard: Selective Latent Iterations to Improve Reasoning Language Models.

Think-at-Hard (TaH) uses a neural decider to dynamically initiate latent iterations only where needed. Compared with baselines that iterate twice for all output tokens, TaH delivers 8.1-11.3% accuracy gains while exempting 94% of tokens from the second iteration. Against strong single-iteration Qwen3 models finetuned on the same data, it also delivers 4.0-5.0% accuracy gains. When allowing less than 3% additional parameters from LoRA and the iteration decider, the gains increase to 8.5-12.6% and 5.3-5.4%, respectively.
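To illustrate the idea, here is a minimal sketch of selective latent iteration. It is not the paper's implementation: the names `decider`, `latent_iteration`, and the 0.5 threshold are hypothetical; see the official repo for the real code.

```python
import torch

def selective_latent_iteration(model, decider, hidden_states, threshold=0.5):
    """Sketch of TaH-style selective iteration (hypothetical API).

    `decider` is a small head scoring which tokens are "hard" enough to
    deserve a second latent pass; easy tokens keep their first-pass states.
    """
    # Score per-token hardness: (batch, seq, 1)
    hardness = torch.sigmoid(decider(hidden_states))
    hard_mask = hardness > threshold  # in the paper, only ~6% of tokens qualify

    # Run a second (LoRA-adapted) latent pass; hypothetical method name.
    refined = model.latent_iteration(hidden_states)

    # Keep refined states only at hard positions (mask broadcasts over hidden dim).
    return torch.where(hard_mask, refined, hidden_states)
```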

Please visit our GitHub repo for more information.

Sample Usage

Please see the GitHub example for sample usage.
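As a starting point, the checkpoint can be loaded with the standard Transformers API, sketched below. Note this shows only the plain loading path; the TaH iteration decider and LoRA components may require the custom inference code from the GitHub repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nics-efc/TaH-plus-1.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "What is the sum of the first 100 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```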
