Jaward 
posted an update Sep 24
Some interesting findings in this paper:
- They consider o1 a Large Reasoning Model (LRM) with a different architecture from SOTA LLMs.
- Creative justifications: “It is almost as if o1 has gone from hallucinating to gaslighting!”. This is so true; I also noticed it can “hallucinate” its chain of thought lol.
- Accuracy/Cost Tradeoffs: o1 provides high accuracy but at significant computational and monetary costs due to hidden "reasoning tokens."
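The cost point is easy to see with some quick arithmetic. A minimal sketch below, with purely illustrative prices and token counts (not official figures, the function name and numbers are my own assumptions): the hidden reasoning tokens are billed as output even though you never see them, so a short visible answer can still be expensive.

```python
# Hedged sketch: why hidden "reasoning tokens" inflate the bill.
# All prices and token counts here are illustrative assumptions,
# not official figures.

def completion_cost(prompt_tokens, visible_output_tokens,
                    reasoning_tokens, in_price, out_price):
    """Dollar cost of one call; reasoning tokens are billed at the
    output rate even though they are never shown to the user."""
    billed_output = visible_output_tokens + reasoning_tokens
    return (prompt_tokens / 1e6) * in_price \
         + (billed_output / 1e6) * out_price

# Illustrative case: a short visible answer backed by a long hidden trace.
cost = completion_cost(prompt_tokens=1_000,
                       visible_output_tokens=500,
                       reasoning_tokens=10_000,
                       in_price=15.0,    # $/1M input tokens (assumed)
                       out_price=60.0)   # $/1M output tokens (assumed)
print(round(cost, 3))  # → 0.645, mostly from tokens you never see
```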
Paper: https://www.arxiv.org/abs/2409.13373