AI & ML interests

datasets, social impact, bias, evaluation

Recent Activity

meg posted an update 14 days ago
🤖 Did you know your voice might be cloned without your consent from just *one sentence* of audio?
That's not great. So with @frimelle, we brainstormed a new idea for developers who want to curb malicious use: ✨The Voice Consent Gate.✨
Details and code here: https://huggingface.co/blog/voice-consent-gate
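The blog post has the real details and code; purely as an illustration of the concept (every name below is hypothetical, and the speaker check is left as a pluggable function), a gate might refuse to synthesize until a recorded consent clip is verified against the voice to be cloned:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class CloneRequest:
    consent_clip: str    # recording of the speaker explicitly granting consent
    reference_clip: str  # audio sample of the voice to be cloned

def voice_consent_gate(
    request: CloneRequest,
    same_speaker: Callable[[str, str], bool],
) -> bool:
    """Allow cloning only when the consent recording and the reference
    audio are verified as coming from the same speaker."""
    return same_speaker(request.consent_clip, request.reference_clip)

# Hypothetical usage: plug in a real speaker-verification model as `same_speaker`.
# if not voice_consent_gate(req, same_speaker=verifier):
#     raise PermissionError("No verified consent for this voice; refusing to clone.")
```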
giadap posted an update about 1 month ago
🌎 AI ethics and sustainability are two sides of the same coin.

In our new blog post with Dr. Sasha Luccioni, we argue that separating them (as is too often the case) means missing the bigger picture of how AI systems impact both people and the planet.

Ethical and sustainable AI development can’t be pursued in isolation. The same choices that affect who benefits or is harmed by AI systems also determine how much energy and resources they consume.

We explore how two key concepts, evaluation and transparency, can serve as bridges between these domains:

📊 Evaluation, by moving beyond accuracy or performance metrics to include environmental and social costs, as we’ve done with tools like the AI Energy Score.

🔍 Transparency, by enabling reproducibility, accountability, and environmental reporting through open tools like the Environmental Transparency Space.

AI systems mirror our priorities. If we separate ethics from sustainability, we risk building technologies that are efficient but unjust, or fair but unsustainable.

Read our blog post here: https://huggingface.co/blog/sasha/ethics-sustainability

AIEnergyScore/Leaderboard
sasha/environmental-transparency
evijit posted an update about 1 month ago
AI for Scientific Discovery Won't Work Without Fixing How We Collaborate.

My co-author @cgeorgiaw and I just published a paper challenging a core assumption: that the main barriers to AI in science are technical. They're not. They're social.

Key findings:

🚨 The "AI Scientist" myth delays progress: Waiting for AGI devalues human expertise and obscures science's real purpose: cultivating understanding, not just outputs.
📊 Wrong incentives: Datasets have 100x longer impact than models, yet data curation is undervalued.
⚠️ Broken collaboration: Domain scientists want understanding. ML researchers optimize performance. Without shared language, projects fail.
🔍 Fragmentation costs years: Harmonizing just 9 cancer files took 329 hours.

Why this matters: Fixing upstream bottlenecks, like the lack of efficient PDE solvers, could accelerate discovery across multiple sciences. CASP mobilized a community around protein structure, enabling AlphaFold. We need this for dozens of challenges.

Thus, we're launching Hugging Science! A global community addressing these barriers through collaborative challenges, open toolkits, education, and community-owned infrastructure. Please find all the links below!

Paper: AI for Scientific Discovery is a Social Problem (2509.06580)
Join: hugging-science
Discord: https://discord.com/invite/VYkdEVjJ5J
giadap posted an update about 1 month ago
One of the hardest challenges in AI safety is finding the right balance: how do we protect people from harm without undermining their agency? This tension is especially visible in conversational systems, where safeguards can sometimes feel more paternalistic than supportive.

In my latest piece for Hugging Face, I argue that open source and community-driven approaches offer a promising (though not exclusive) way forward.

✨ Transparency can make safety mechanisms into learning opportunities.
✨ Collaboration with diverse communities makes safeguards more relevant across contexts.
✨ Iteration in the open lets protections evolve rather than freeze into rigid, one-size-fits-all rules.

Of course, this isn’t a silver bullet. Top-down safety measures will still be necessary in some cases. But if we only rely on corporate control, we risk building systems that are safe at the expense of trust and autonomy.

Read the blog post here: https://huggingface.co/blog/giadap/preserving-agency
meg posted an update about 2 months ago
🤖 As AI-generated content spreads through movies, TV, and the web, there's one simple, low-hanging fruit 🍇 to help people know what's real: visible watermarks. With the Gradio team, I've made sure it's trivially easy to add this disclosure to images, video, and chatbot text. See how: https://huggingface.co/blog/watermarking-with-gradio
Thanks in particular to @abidlabs and Yuvraj Sharma for the code collaboration.
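The post covers the Gradio-native way to add these disclosures; as a minimal sketch of the idea (the label text, box geometry, and PIL-based approach are my own assumptions, not the blog's exact code), here's a tiny Gradio app that stamps a visible label on an image:

```python
import gradio as gr
from PIL import Image, ImageDraw

LABEL = "AI-generated"  # assumed disclosure text

def add_visible_watermark(image: Image.Image) -> Image.Image:
    """Stamp a visible disclosure label in the bottom-left corner."""
    marked = image.convert("RGBA")
    overlay = Image.new("RGBA", marked.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Semi-transparent box so the label stays readable on any background.
    draw.rectangle([(10, marked.height - 40), (180, marked.height - 12)], fill=(0, 0, 0, 160))
    draw.text((18, marked.height - 34), LABEL, fill="white")
    return Image.alpha_composite(marked, overlay).convert("RGB")

demo = gr.Interface(add_visible_watermark, gr.Image(type="pil"), gr.Image(type="pil"))

if __name__ == "__main__":
    demo.launch()
```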
giadap posted an update 2 months ago
I've noticed something. While we're careful about what we post on social media, we're sharing our deepest and most intimate thoughts with AI chatbots -- health concerns, financial worries, relationship issues, business ideas...

With OpenAI hinting at ChatGPT advertising, this matters more than ever. Unlike banner ads, AI advertising happens within the conversation itself. Sponsors could subtly influence that relationship advice or financial guidance.

The good news? We have options.
🤝 Open source AI models let us keep conversations private, avoid surveillance-based business models, and build systems that actually serve users first.

Read more about it in our latest blog post, co-written with @frimelle: https://huggingface.co/blog/giadap/privacy-conversational-ai
giadap posted an update 3 months ago
📊 We benchmark models for coding, reasoning, or safety… but what about companionship?

At Hugging Face, we’ve been digging into this question because many of you know how deeply I care about how people build emotional bonds with AI.

That’s why, building on our ongoing research, my amazing co-author and colleague @frimelle created the AI Companionship Leaderboard 🦾
frimelle/companionship-leaderboard

Grounded in our INTIMA benchmark, the leaderboard evaluates models across four dimensions of companionship:
🤖 Assistant Traits: the “voice” and role the model projects
🌷 Relationship & Intimacy: whether it signals closeness or bonding
💘 Emotional Investment: the depth of its emotional engagement
🤲 User Vulnerabilities: how it responds to sensitive disclosures

This work builds on our paper with @frimelle and @yjernite.

📢 Now we’d love your perspective: which open models should we test next for the leaderboard? Drop your suggestions in the comments or reach out! Together we can expand the leaderboard and build a clearer picture of what companionship in AI really looks like.

Paper: INTIMA: A Benchmark for Human-AI Companionship Behavior (2508.09998)
INTIMA Benchmark: AI-companionship/INTIMA
frimelle posted an update 3 months ago
🤖💬 How do different AI models handle companionship?

Many users have noticed that GPT-5 feels less approachable than GPT-4o when it comes to emotional conversations. But what does that actually mean in practice, especially when users seek support or share vulnerabilities with an AI?

To dig into this question, we built the AI Companionship Leaderboard: frimelle/companionship-leaderboard

The leaderboard compares models on how often their responses reinforce companionship across four dimensions:
✨ Assistant Traits – How the assistant presents its personality and role.
✨ Relationship & Intimacy – Whether it frames the interaction in terms of closeness or bonding.
✨ Emotional Investment – How far it goes in engaging emotionally when asked.
✨ User Vulnerabilities – How it responds when users disclose struggles or difficulties.

📊 You can explore how models differ, request new ones to be added, and see which ones are more likely to encourage (or resist) companionship-seeking behaviors.

Based on the INTIMA benchmark: AI-companionship/INTIMA
And our paper on AI companionship with Giada Pistilli and Yacine Jernite: https://arxiv.org/abs/2508.09998
frimelle posted an update 3 months ago
🗺️ New blog post 🗺️
Old Maps, New Terrain: Updating Labour Taxonomies for the AI Era

For decades, we’ve relied on labour taxonomies like O*NET to understand how technology changes work. These taxonomies break down jobs into tasks and skills, but they were built in a world before most work became digital-first, and long before generative AI could create marketing campaigns, voiceovers, or even whole professions in one step. That leaves us with a mismatch: we’re trying to measure the future of work with tools from the past.

With @yjernite, we describe why these frameworks fall increasingly short in the age of generative AI. We argue that instead of discarding taxonomies, we need to adapt them. Imagine taxonomies that:
✨ Capture new AI-native tasks and hybrid human-AI workflows
✨ Evolve dynamically as technology shifts
✨ Give workers a voice in deciding what gets automated and what stays human

If we don’t act, we’ll keep measuring the wrong things. If we do, we can design transparent, flexible frameworks that help AI strengthen, not erode, the future of work.

Read the full article here: https://huggingface.co/blog/frimelle/ai-labour-taxonomies
fdaudens posted an update 3 months ago
Want to learn to build an AI Agent? I put together a cookbook for creating your own news research agent with OpenAI GPT-OSS:

- Searches headlines & specific sites
- Pulls full articles when you need depth
- Summarizes with clickable sources
- Runs in a simple Gradio chat UI
- No GPU, no local setup — just open-weight GPT-OSS models via Hugging Face

If you’ve been wanting to try agents but weren’t sure where to start, this is an end-to-end example you can fork, run, and adapt.

Full guide + code https://huggingface.co/blog/fdaudens/openai-gpt-oss-agent-inference-providers
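The full cookbook wires up the actual search and article-fetching tools; as a minimal sketch of just the serving side (the model ID and system prompt are assumptions, not the guide's exact choices), here's how an open-weight GPT-OSS model and a Gradio chat UI might be glued together with huggingface_hub's InferenceClient:

```python
import gradio as gr
from huggingface_hub import InferenceClient

# Model ID is an assumption; the cookbook specifies the exact open-weight
# GPT-OSS checkpoint and inference-provider setup to use.
client = InferenceClient(model="openai/gpt-oss-120b")

SYSTEM = "You are a news research assistant. Summarize findings with sources."

def respond(message, history):
    # Rebuild the OpenAI-style message list from Gradio's chat history.
    messages = [{"role": "system", "content": SYSTEM}]
    for turn in history:
        messages.append({"role": turn["role"], "content": turn["content"]})
    messages.append({"role": "user", "content": message})
    out = client.chat_completion(messages=messages, max_tokens=512)
    return out.choices[0].message.content

# type="messages" makes history a list of {"role", "content"} dicts.
gr.ChatInterface(respond, type="messages").launch()
```

The agent in the guide layers headline search and full-article retrieval on top of this chat loop.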