Thomas Wolf PRO

thomwolf

AI & ML interests

NLP and open-source :-)

Recent Activity

liked a Space about 4 hours ago
featherless-ai/try-this-model
liked a model about 4 hours ago
Qwen/QwQ-32B-Preview
liked a Space about 6 hours ago
camenduru/hunyuan-video

Organizations

Hugging Face, Natural Language Processing with Transformers, BigScience Workshop, Flax Community, datablations, Training Transformers Together, BigScience Data, Evaluation datasets, HuggingFaceBR4, Godot Engine Demos, OpenAssistant, Evaluation on the Hub, HuggingFaceM4, Simulation Environments Tests and Builds, (De)fusing, HuggingFaceGECLM, CodeParrot, BigCode, Hugging Face H4, CV as NLP, Explorer of Simulate alpha, BigCode Data, Hugging Face Extreme-Scale, Hugging Face H4 Community, GAIA, Hugging Face TB Research, Hugging Face Smol Cluster, Open LLM Leaderboard, TTS Eval (OLD), the circle of truth - war scene, Nanotron Research, LeRobot, Journalists on Hugging Face, NewTechKids, MLX Community, Hugging Face Assignments, HuggingFaceFW, TTS AGI, Social Post Explorers, dora-rs, HuggingFaceEval, HuggingFaceFW-Dev, Hugging Face Discord Community, DataComp, Data Agents, Hugging Face FineVideo, HuggingFace Science Team, Art, smol-explorers, Hugging Face Science, LeMaterial, open/ acc

thomwolf's activity

reacted to m-ric's post with 🚀🔥 about 6 hours ago
๐Ÿค– ๐—”๐—ฑ๐—ผ๐—ฏ๐—ฒ'๐˜€ ๐—ฐ๐—ผ๐—ฑ๐—ฒ-๐—ด๐—ฒ๐—ป๐—ฒ๐—ฟ๐—ฎ๐˜๐—ถ๐—ป๐—ด ๐—ฎ๐—ด๐—ฒ๐—ป๐˜ ๐—ฟ๐—ฒ๐—ฎ๐—ฐ๐—ต๐—ฒ๐˜€ ๐˜๐—ต๐—ฒ ๐˜๐—ผ๐—ฝ ๐—ผ๐—ณ ๐—š๐—”๐—œ๐—” ๐—น๐—ฒ๐—ฎ๐—ฑ๐—ฒ๐—ฟ๐—ฏ๐—ผ๐—ฎ๐—ฟ๐—ฑ - and their paper cites my work!

💡 Reminder: in short, agentic systems are a vehicle in which you put your LLM to give it access to the outside world.

โžก๏ธ The team of researchers at Adobe started from the idea that current agentic systems lack the ability to define their own tools. So they decided to make an agent that writes actions as code, thus allowing it to write python functions that can be re-used later as tools!

Here's what the LLM generations can look like with the proper prompt:

Thought: I need to access the excel file using a different method.
Action:
def access_excel_file(file_path):
    ...  # rest of the code (the agent does write it, but I don't have room in this post)
    return rows


Then your system executes this and appends the observation to the agent's memory.
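
That execute-and-observe step can be sketched as follows. This is a minimal illustration, not DynaSaur's actual implementation; the `run_code_action`, `env`, and `memory` names are my own assumptions.

```python
import io
import contextlib

def run_code_action(action_code: str, env: dict) -> str:
    """Execute LLM-generated code in a shared namespace and capture stdout.

    `env` persists across steps, so any function the agent defines in one
    action becomes a reusable tool in later actions.
    """
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(action_code, env)  # NOTE: sandbox this in any real system!
    return buffer.getvalue()

env: dict = {}
memory: list[str] = []

# One step: execute an action, append the observation to the agent's memory.
action = """
def add(a, b):
    return a + b

print(add(2, 3))
"""
observation = run_code_action(action, env)
memory.append(f"Observation: {observation.strip()}")

# The function defined by the previous action is now a reusable tool:
observation2 = run_code_action("print(add(10, 20))", env)
```

Because every action shares the same namespace, tool creation falls out of ordinary function definition, which is exactly the composability argument the paper makes.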

Why is this code formulation better than classical tool use formulation as JSON? The paper explains:

"Most existing work uses text or JSON as the representation of actions, which significantly lacks the two criteria mentioned earlier: generality and composability. In contrast, DynaSaur can utilize available actions or create new ones if necessary, using code as a unified representation. In principle, acting with code enables agents to solve any Turing-complete problem."

The idea of using code is not new: in fact, we do it in transformers.agents (hence the citation I got). Their implementation adds further refinements, like using RAG to retrieve relevant functions before generating an action, which increases performance further.
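
The retrieval step can be illustrated with a toy example. The real systems use learned embeddings; this bag-of-words cosine similarity is a stand-in, and the tool names and docstrings below are made up.

```python
from collections import Counter
import math

# Hypothetical tool registry: name -> description used for retrieval.
TOOLS = {
    "read_excel_rows": "open an excel spreadsheet file and return its rows",
    "download_page": "fetch the html content of a web page by url",
    "plot_series": "draw a line chart from a numeric series",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_tools(query: str, k: int = 1) -> list[str]:
    """Rank tools by similarity between the query and their descriptions."""
    q = Counter(query.lower().split())
    scored = sorted(
        TOOLS,
        key=lambda name: cosine(q, Counter(TOOLS[name].lower().split())),
        reverse=True,
    )
    return scored[:k]

# The retrieved descriptions would then be inserted into the agent's prompt.
best = retrieve_tools("I need to access the excel file")
```

Retrieving only the relevant functions keeps the prompt short while still letting the tool library grow without bound.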

And they observe that code agents perform much better, reaching the top of the GAIA leaderboard! 🥇

Go take a look, it's really clear and informative!

Paper added to my agents collection 👉 m-ric/agents-65ba776fbd9e29f771c07d4e
reacted to merve's post with 🔥👍 5 days ago
The authors of ColPali trained a retrieval model based on SmolVLM 🤠 vidore/colsmolvlm-alpha
TL;DR:

- ColSmolVLM performs better than ColPali and DSE-Qwen2 on all English tasks

- ColSmolVLM is more memory efficient than ColQwen2 💗
reacted to davanstrien's post with ❤️ 7 days ago
First dataset for the new Hugging Face Bluesky community organisation: bluesky-community/one-million-bluesky-posts 🦋

📊 1M public posts from Bluesky's firehose API
🔍 Includes text, metadata, and language predictions
🔬 Perfect for experimenting with ML on Bluesky data 🤗

Excited to see people build more open tools for a more open social media platform!
reacted to ZennyKenny's post with 👍 8 days ago
I've joined the Bluesky community. Interested to see what decentralized social media looks like in action: https://bsky.app/profile/kghamilton.bsky.social

Looking forward to following other AI builders, tech enthusiasts, goth doomscrollers, and ironic meme creators.
reacted to as-cle-bert's post with 🔥 8 days ago
Hi HuggingFacers! 🤗
I'm thrilled to introduce my latest project: SenTrEv (Sentence Transformers Evaluator), a Python package that offers simple, customizable evaluation of text-retrieval accuracy and time performance for Sentence Transformers-compatible text embedders on PDF data! 📊

Learn more in my LinkedIn post: https://www.linkedin.com/posts/astra-clelia-bertelli-583904297_python-embedders-semanticsearch-activity-7266754133557190656-j1e3

And on the GitHub repo: https://github.com/AstraBert/SenTrEv

Have fun! 🍕
posted an update 9 days ago
replied to nyuuzyou's post 14 days ago
reacted to nyuuzyou's post with 🔥 14 days ago
๐Ÿ–ผ๏ธ Introducing Public Domain Pictures Dataset - nyuuzyou/publicdomainpictures

Dataset highlights:
- 644,412 public domain images with comprehensive metadata from publicdomainpictures.net
- English language metadata including titles, descriptions, and keywords
- Each entry contains rich metadata, including:
  - Unique image ID and full-size image URLs
  - Detailed titles and descriptions
  - Keyword/tag collections
  - Creator attribution
- Released to the public domain under Creative Commons Zero (CC0) license
posted an update 14 days ago
replied to sequelbox's post 14 days ago
reacted to sequelbox's post with ๐Ÿ‘ 14 days ago
reacted to LukeNeumann's post with 👍🔥 14 days ago
Hello Hugging Face community!

I wanted to introduce myself and my company @Overlaiapp. We are a collective of filmmakers, photographers, and AI engineers working on high-resolution (8K+) training data.

We plan to share a lot of our datasets with the community and are kicking things off with two curated datasets:

- Overlaiai/OregonCoastin4K

- Overlaiai/SubArcticPolarBear


Overlai.ai Dataset Features

🎥 Oversampled: Every clip is captured in stunning 8K resolution, delivering rich detail ideal for fine-tuning on scenic landscapes and ocean dynamics.

📸 Variance: Includes close-up details, slow-motion footage of crashing waves, sweeping landscapes, and wildlife shots.

📋 Detailed Metadata: Every clip is paired with structured metadata, including creative descriptions, precise camera movements, lens information, field-of-view calculations, and shot settings, so AI models can fully understand and replicate real-world cinematography with accuracy.

โš™๏ธ Consistency: Re-thinking training data at the point of capture by "overshooting" a subject, enabling models to learn more nuanced relationships and views across scenes.

🌅 Light: Shot during early morning and sunset light for optimal color contrast and dynamic range, maximizing visual quality for color and lighting-sensitive tasks.

๐Ÿ” Curation: Curated specifically for machine learning, providing clean, high-quality data for next generation model training.
reacted to merve's post with ❤️ about 1 month ago
Lotus 🪷 is a new foundation model for monocular depth estimation ✨
Compared to previous diffusion-based MDE models, Lotus is adapted for dense prediction tasks
The authors also released a model for normal prediction 🤗
Find everything in this collection merve/lotus-6718fb957dc1c85a47ca1210
reacted to singhsidhukuldeep's post with ❤️ about 1 month ago
If you have 300+ GB of VRAM, you can run Mochi from @genmo

A SOTA model that dramatically closes the gap between closed and open video generation models.

Mochi 1 introduces revolutionary architecture featuring joint reasoning over 44,520 video tokens with full 3D attention. The model implements extended learnable rotary positional embeddings (RoPE) in three dimensions, with network-learned mixing frequencies for space and time axes.
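
The 3D RoPE idea can be sketched in a few lines. This is only an illustration of rotary embeddings split across the time, height, and width axes: Mochi learns its mixing frequencies, whereas the log-spaced frequencies and dimensions below are fixed, made-up stand-ins.

```python
import numpy as np

def rope_angles(pos: int, n_freqs: int, base: float = 10000.0) -> np.ndarray:
    """Standard RoPE angle schedule for one axis at integer position `pos`."""
    freqs = base ** (-np.arange(n_freqs) / n_freqs)
    return pos * freqs

def rope_3d(x: np.ndarray, t: int, h: int, w: int) -> np.ndarray:
    """Rotate feature pairs of x by angles derived from the (t, h, w) position.

    x has shape (dim,) with dim divisible by 6: a third of the feature
    pairs is allocated to each of the three axes.
    """
    pairs = x.shape[0] // 2
    per_axis = pairs // 3
    angles = np.concatenate([
        rope_angles(t, per_axis),   # time axis
        rope_angles(h, per_axis),   # height axis
        rope_angles(w, per_axis),   # width axis
    ])
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin  # 2D rotation applied pair-wise
    out[1::2] = x1 * sin + x2 * cos
    return out

q = np.ones(12)
rotated = rope_3d(q, t=2, h=1, w=3)
```

Because each pair is only rotated, vector norms are preserved, which is what makes RoPE a relative-position scheme rather than an additive one.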

The model incorporates cutting-edge improvements, including:
- SwiGLU feedforward layers
- Query-key normalization for enhanced stability
- Sandwich normalization for controlled internal activations
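
Two of these components are easy to show in miniature. The sketch below is generic, with made-up dimensions, and is not Mochi's actual code.

```python
import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w_gate, w_up, w_down):
    # SwiGLU: a SiLU-activated "gate" branch multiplies a linear "up" branch
    # before the down-projection.
    return (silu(x @ w_gate) * (x @ w_up)) @ w_down

def qk_norm(q, k, eps=1e-6):
    # Query-key normalization: unit-normalizing q and k bounds the attention
    # logits, which stabilizes training.
    q = q / (np.linalg.norm(q, axis=-1, keepdims=True) + eps)
    k = k / (np.linalg.norm(k, axis=-1, keepdims=True) + eps)
    return q, k

rng = np.random.default_rng(0)
d, hidden = 8, 16  # illustrative sizes only
x = rng.normal(size=(4, d))
y = swiglu_ffn(x, rng.normal(size=(d, hidden)), rng.normal(size=(d, hidden)),
               rng.normal(size=(hidden, d)))
q, k = qk_norm(rng.normal(size=(4, d)), rng.normal(size=(4, d)))
```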

What is currently available?
The base model delivers impressive 480p video generation with exceptional motion quality and prompt adherence. Released under the Apache 2.0 license, it's freely available for both personal and commercial applications.

What's Coming?
Genmo has announced Mochi 1 HD, scheduled for release later this year, which will feature:
- Enhanced 720p resolution
- Improved motion fidelity
- Better handling of complex scene warping
reacted to fdaudens's post with ❤️ about 1 month ago
🤯 Plot twist: Size isn't everything in AI! A lean 32B parameter model just showed up to the party and outperformed a 70B one. Efficiency > Scale? The AI world just got more interesting...

Cohere For AI released Aya Expanse, a new family of multilingual models (8B and 32B) spanning 23 popular languages.

Models: CohereForAI/c4ai-aya-expanse-671a83d6b2c07c692beab3c3
Blog post: https://huggingface.co/blog/aya-expanse
Demo: CohereForAI/aya_expanse
posted an update about 1 month ago
Parents in the 1990s: Teach the kids to code
Parents now: Teach the kids to fix the code when it starts walking around 🤖✨
reacted to singhsidhukuldeep's post with 🔥 3 months ago
Remember when @Google launched MediaPipe in an effort to create efficient on-device pipelines?

They've just unlocked the ability to run 7B+ parameter language models directly in your browser. This is a game-changer for on-device AI!

Yes, they are streaming 8.6 GB model files!

Currently, they have Gemma 2B/7B running, but imagine Dynamic LoRA, multimodal support, quantization, and you never leaving Chrome!

This is a significant technical advancement, especially in Memory Optimization:

- Redesigned the model-loading code to work around WebAssembly's 4 GB memory limit.
- Implemented asynchronous loading of transformer stack layers (28 for Gemma 1.1 7B).
- Reduced peak WebAssembly memory usage to less than 1% of previous requirements.
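
The layer-streaming idea behind that last point can be simulated in a few lines. This is an illustrative Python sketch, not MediaPipe's C++/WebAssembly code; the layer count matches the post, but the per-layer size and upload function are invented.

```python
N_LAYERS = 28        # e.g. Gemma 1.1 7B's transformer stack, per the post
LAYER_SIZE_MB = 300  # hypothetical per-layer weight size

def upload_to_gpu(layer_id: int) -> str:
    # Stand-in for a WebGPU buffer upload.
    return f"gpu-buffer-{layer_id}"

peak = 0
current = 0
device_layers = []

for layer_id in range(N_LAYERS):
    current += LAYER_SIZE_MB               # "read" one layer into host memory
    peak = max(peak, current)
    device_layers.append(upload_to_gpu(layer_id))
    current -= LAYER_SIZE_MB               # free the host copy immediately

# Peak host usage stays at one layer's size rather than the full 8.6 GB model,
# which is how the 4 GB WebAssembly address space stops being a blocker.
```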

Cross-Platform Compatibility
- Compiled the C++ codebase to WebAssembly for broad browser support.
- Utilized the WebGPU API for native GPU acceleration in browsers.

Here's why this matters:

1. Privacy: No need to send data to remote servers.
2. Cost-Efficiency: Eliminates server expenses.
3. Offline Capabilities: Use powerful AI without an internet connection.

Blog: https://research.google/blog/unlocking-7b-language-models-in-your-browser-a-deep-dive-with-google-ai-edges-mediapipe/