Mistral-7B with continued pretraining using Quiet-STaR (https://arxiv.org/abs/2403.09629) to generate 8 thought tokens before each output token.

Forked from Crystalcareai/Quiet-Star-Custom
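
Because the repo ships custom Quiet-STaR modeling code, loading it requires `trust_remote_code=True`. Below is a minimal loading-and-generation sketch, assuming the custom code exposes the standard `transformers` `generate()` interface and handles the thought tokens internally; the prompt and generation parameters are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pharaouk/Quiet-Star-Custom"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # weights are stored in BF16
    device_map="auto",
    trust_remote_code=True,       # required: the repo contains custom modeling code
)

# Thought-token generation (8 thought tokens per output token) is handled
# internally by the repo's Quiet-STaR code during generate().
prompt = "Q: What is 12 * 17?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```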

Model size: 7.29B params · Tensor type: BF16 · Format: Safetensors
Note: the serverless Inference API does not yet support model repos that contain custom code.
