llrehf

Activity Feed

AI & ML interests

None defined yet.

Recent Activity

sergiopaniego posted an update about 8 hours ago
NEW: @mistralai released a fantastic family of multimodal models, Ministral 3.

You can fine-tune them for free on Colab using TRL ⚡️, with support for both SFT and GRPO.

Link to the notebooks:
- SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_ministral3_vl.ipynb
- GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_ministral3_vl.ipynb
- TRL and more examples: https://huggingface.co/docs/trl/index
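For a quick sense of what the notebooks do, TRL's SFTTrainer flow is only a few lines. The sketch below is just an outline of that API: the model and dataset ids are placeholders, not the actual Ministral 3 checkpoint or the image-text dataset used in the notebooks.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Sketch only: swap in the exact Ministral 3 checkpoint and the image-text
# dataset from the SFT notebook above; both ids here are placeholders.
model_id = "mistralai/Ministral-3-..."  # placeholder, see the notebook
dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

trainer = SFTTrainer(
    model=model_id,
    train_dataset=dataset,
    args=SFTConfig(output_dir="ministral3-sft", per_device_train_batch_size=1),
)
trainer.train()
```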
sergiopaniego posted an update 1 day ago
sergiopaniego posted an update 3 days ago
Want to use open models easily through an API?

Inference Providers might be exactly what you're looking for, so here's a complete beginner-friendly walkthrough 🧐

https://www.youtube.com/watch?v=oxwsizy1Spw
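If you want the gist before watching: calling an open model through Inference Providers takes a few lines with huggingface_hub. The provider and model below are only an example pair, and an HF token in your environment is assumed.

```python
import os
from huggingface_hub import InferenceClient

# Example only: any supported provider/model pair works; set HF_TOKEN in your env.
client = InferenceClient(
    provider="together",                       # example provider
    model="meta-llama/Llama-3.1-8B-Instruct",  # example open model
    token=os.environ.get("HF_TOKEN"),
)

response = client.chat_completion(
    messages=[{"role": "user", "content": "Explain Inference Providers in one sentence."}],
    max_tokens=100,
)
print(response.choices[0].message.content)
```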
sergiopaniego posted an update 6 days ago
nanochat is now in transformers!

The LLM by @karpathy is officially in the library, and we wrote a blog post covering how we ported the model, how it differs from the original, and how to run or train it.

go read it 🤓

nanochat-students/transformers
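Since nanochat now goes through the standard transformers API, running it locally should look roughly like the snippet below. The repo id is a guess on my part (the blog post lists the actual checkpoints), and a chat template may be needed for real conversations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; see the blog post for the real nanochat checkpoints.
model_id = "nanochat-students/nanochat-..."

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, nanochat!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```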
sergiopaniego posted an update 8 days ago
sergiopaniego posted an update 10 days ago
sergiopaniego posted an update 14 days ago
sergiopaniego posted an update 14 days ago
we've just added several example scripts to TRL showing how to train models with GRPO using some of the new OpenEnv environments

train a model to interact with a browser (🎮 BrowserGym Env), play Wordle (🎮 Wordle Env) and moooore!

TRL (GRPO + vLLM) + OpenEnv! ⚡️

📝 go play with them: https://github.com/huggingface/trl/tree/main/examples/scripts/openenv

📝 examples list: https://huggingface.co/docs/trl/main/en/example_overview#scripts
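For anyone new to GRPO in TRL, the core loop (without an environment) is tiny. A minimal sketch is below with a toy length-based reward; the OpenEnv scripts above swap that reward for rollouts against a real environment and add vLLM generation. Model and dataset ids are just small, convenient examples.

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward: prefer completions around 80 characters. The OpenEnv scripts
# replace this with rewards computed from environment rollouts.
def reward_len(completions, **kwargs):
    return [-abs(80 - len(completion)) for completion in completions]

dataset = load_dataset("trl-lib/tldr", split="train")  # example prompt dataset

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # small example model
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-openenv-sketch"),
    train_dataset=dataset,
)
trainer.train()
```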
sergiopaniego posted an update 16 days ago
sergiopaniego posted an update 29 days ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
sergiopaniego posted an update about 1 month ago
merve posted an update about 1 month ago
deepseek-ai/DeepSeek-OCR is out! 🔥 my take ⤵️
> pretty insane it can parse and re-render charts in HTML
> it uses CLIP and SAM features concatenated, so better grounding
> very efficient vision-token-to-performance ratio
> covers 100 languages
sergiopaniego posted an update about 2 months ago
New drop! 💥 The VLM Object Understanding Comparison Space now runs with Qwen3-VL-4B and moondream3.

You can compare how models reason about images 🧠

Bonus: thanks to @ariG23498, you now get auto-suggested prompts to explore faster.

Let’s gooo

sergiopaniego/vlm_object_understanding
sergiopaniego posted an update about 2 months ago
sergiopaniego posted an update about 2 months ago
@Qwen released their new small and dense VLMs (Qwen3-VL).

They're incredibly capable and among my all-time favourite VLMs.

🤗 We’ve prepared some resources to help you get started.

> Fine-tune Qwen3-VL-4B with SFT or GRPO (free Colab notebooks):
> SFT: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_qwen_vl.ipynb
> GRPO: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/grpo_qwen3_vl.ipynb

> Compare object detection vs. Moondream3:
sergiopaniego/vlm_object_understanding

> Fine-tune from the CLI using TRL:
https://github.com/kashif/Qwen3-VL/blob/trl-sft/qwen-vl-finetune/README.md#trl-based-training-single-gpu
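If you just want to poke at the model before fine-tuning, a plain transformers inference sketch looks roughly like this. The repo id and image URL are my assumptions; double-check the exact 4B checkpoint name on the Hub.

```python
from transformers import AutoModelForImageTextToText, AutoProcessor

# Assumed repo id; verify the exact Qwen3-VL 4B checkpoint name on the Hub.
model_id = "Qwen/Qwen3-VL-4B-Instruct"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(model_id, device_map="auto")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg"},
        {"type": "text", "text": "What objects do you see, and where are they?"},
    ],
}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```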
sergiopaniego posted an update about 2 months ago
Super nice intro to fine-tuning with TRL, just dropped by @google (runs free on Colab)!

They use SFT + QLoRA to fine-tune the tiny Gemma 3 270M model for emoji generation

Here’s what the fine-tuned model generates for the prompt: “I'm learning to tweet” → 🐦🗣💻

Colab: https://colab.research.google.com/github/google-gemini/gemma-cookbook/blob/main/Demos/Emoji-Gemma-on-Web/resources/Fine_tune_Gemma_3_270M_for_emoji_generation.ipynb
Try it out: google/emoji-gemma
Learn more: https://developers.googleblog.com/en/own-your-ai-fine-tune-gemma-3-270m-for-on-device/
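The recipe in that Colab is essentially SFT on a 4-bit quantized base model with LoRA adapters on top. Here is a stripped-down sketch of that pattern; the dataset and hyperparameters are placeholders, not the Colab's actual ones.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import BitsAndBytesConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the Colab above uses an emoji-generation dataset.
dataset = load_dataset("trl-lib/Capybara", split="train")

# QLoRA = 4-bit quantized base model + LoRA adapters trained on top.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear", task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model="google/gemma-3-270m-it",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(
        output_dir="gemma-emoji-qlora",
        model_init_kwargs={"quantization_config": bnb_config},
    ),
)
trainer.train()
```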