The First Open-Source Reasoning LLM

December 28, 2023 - This model was created 11 months before OpenAI's o1 release.

Historical Context

In late 2023, I was experimenting with fine-tuning open-source models. Working with limited computational resources (primarily free Colab notebooks with T4 GPUs), I focused on approaches that could enhance LLM capabilities without simply scaling the parameter count, since scaling was beyond the compute available to me.

Proof of timeline: Check the initial commit - December 28, 2023.

Technical Approach

The model uses a custom chat template that inserts a "reasoning" step before the final response to the user:

<|system|>sys_message
<|prompt|>prompt
<|reasoning|>reasoning
<|response|>response<|endoftext|>
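As a minimal sketch of how this template might be used at inference time, a prompt can be assembled up to the reasoning marker and the model's continuation split back into its parts. The helper names and the newline separators are my assumptions, not part of the released template:

```python
def build_prompt(sys_message: str, prompt: str) -> str:
    # Assemble the template up to the point where the model is expected
    # to start generating its reasoning (newline separators assumed).
    return f"<|system|>{sys_message}\n<|prompt|>{prompt}\n<|reasoning|>"

def split_output(generated: str) -> tuple[str, str]:
    # Separate the reasoning step from the final answer, assuming the
    # model emits "<|response|>" between them and "<|endoftext|>" at the end.
    reasoning, _, response = generated.partition("<|response|>")
    return reasoning.strip(), response.removesuffix("<|endoftext|>").strip()
```

With these helpers, only the text after `<|response|>` would be shown to the user, while the reasoning span stays internal.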

To test this approach, I created the ArtificialThinkerSet dataset to fine-tune Phi-2.
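For fine-tuning, each dataset record can be flattened into the same template as a single training string. This is a sketch under assumptions: the field names (`system`, `prompt`, `reasoning`, `response`) are illustrative, not the documented ArtificialThinkerSet schema.

```python
def to_training_text(example: dict) -> str:
    # Flatten one record into the chat-template format shown above
    # (field names are assumptions about the dataset schema).
    return (
        f"<|system|>{example['system']}\n"
        f"<|prompt|>{example['prompt']}\n"
        f"<|reasoning|>{example['reasoning']}\n"
        f"<|response|>{example['response']}<|endoftext|>"
    )
```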

I also wrote a blog post, "Reasoning Is All You Need", explaining this approach.

You can follow me on X/Twitter.

If you want to contact me about this, you can do so at main@freecs.org.

Model size: 2.78B params (Safetensors, F16)

Model tree for freecs/ArtificialThinker-Phi2

Base model: microsoft/phi-2 (fine-tuned)

Dataset used to train freecs/ArtificialThinker-Phi2