Hugging Face
Mitchell Furler (reonyy)
3 followers · 14 following
AI & ML interests: multimodal LLM
Recent Activity
liked a model 6 days ago: jinaai/ReaderLM-v2
reacted to MoritzLaurer's post with 🚀 6 days ago
Microsoft's rStar-Math paper claims that 🤏 ~7B models can match the math skills of o1 using clever train- and test-time techniques. You can now download their prompt templates from Hugging Face!

📏 The paper introduces rStar-Math, which claims to rival OpenAI o1's math reasoning capabilities by integrating Monte Carlo Tree Search (MCTS) with step-by-step verified reasoning trajectories.
🤖 A Process Preference Model (PPM) enables fine-grained evaluation of intermediate steps, improving training data quality.
🧪 The system underwent four rounds of self-evolution, progressively refining both the policy and reward models to tackle Olympiad-level math problems, without GPT-4-based data distillation.
💾 While we wait for the release of code and datasets, you can already download the prompts they used from the HF Hub!

Details and links here 👇
- Prompt-templates docs: https://moritzlaurer.github.io/prompt_templates/
- Templates on the hub: https://huggingface.co/datasets/MoritzLaurer/rstar-math-prompts
- Prompt-templates collection: https://huggingface.co/collections/MoritzLaurer/prompt-templates-6776aa0b0b8a923957920bb4
- Paper: https://arxiv.org/pdf/2501.04519
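Since the prompt templates live in an ordinary Hub dataset repo, one way to fetch them locally is a plain `huggingface_hub` snapshot download. This is a minimal sketch, not the prompt-templates library's own API; it only assumes the `huggingface_hub` package is installed and that the repo id from the post above is a dataset repo.

```python
# Sketch: download the rStar-Math prompt templates from the HF Hub.
# Assumes `pip install huggingface_hub`; repo id taken from the post.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="MoritzLaurer/rstar-math-prompts",
    repo_type="dataset",  # the templates are stored in a dataset repo
)
print(local_dir)  # local folder containing the downloaded template files
```

From there you can open the files with any YAML/JSON reader, or point the prompt-templates library (docs linked above) at the same repo.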
reacted to AdinaY's post with 🔥 6 days ago
MiniMax, the company behind Hailuo_AI, has joined the open source community by releasing both models and demos of MiniMax-Text-01 & MiniMax-VL-01 🔥

Models:
- https://huggingface.co/MiniMaxAI/MiniMax-VL-01
- https://huggingface.co/MiniMaxAI/MiniMax-Text-01

Demos:
- https://huggingface.co/spaces/MiniMaxAI/MiniMax-VL-01
- https://huggingface.co/spaces/MiniMaxAI/MiniMax-Text-01

✨ MiniMax-Text-01:
- 456B parameters with 45.9B activated per token
- Combines Lightning Attention, Softmax Attention, and MoE for optimal performance
- Training context up to 1M tokens; inference handles 4M tokens

✨ MiniMax-VL-01:
- ViT-MLP-LLM framework (non-transformer 👀)
- Handles image inputs from 336×336 to 2016×2016
- 694M image-caption pairs + 512B tokens processed across 4 stages
Organizations: none yet
reonyy's activity
upvoted a collection 14 days ago: Cosmos
The collection of Cosmos models · 31 items · Updated 5 days ago · 241