Daniel Vila

dvilasuero

AI & ML interests

RLHF, RLAIF, DPO, data, data, data

dvilasuero's activity

Reacted to elliesleightholm's post with πŸ”₯πŸ€— 4 days ago
Reacted to andito's post with ❀️ 6 days ago
Hugging Face presents FineVideo 🎥! Unlocking the next generation of video understanding 🚀

🤯 3,400 hours of annotated Creative Commons videos with rich character descriptions, scene splits, mood, and content descriptions per scene, as well as QA pairs.
🔥 @mfarre processed over 2M YouTube-CC videos to make this incredibly powerful selection.

Very psyched to fine-tune idefics on this dataset. ⚑️
Explore the videos: HuggingFaceFV/FineVideo-Explorer
Reacted to singhsidhukuldeep's post with πŸ‘πŸ‘€ 7 days ago
Sorry judge, my lawyer hallucinated? πŸ˜‚ If you get an AI lawyer, you would want it to be hallucination-free!

New @Stanford-@Yale research reveals surprising findings about leading AI legal research tools. Here's what you need to know:

>> Key Findings
The study tested LexisNexis (Lexis+ AI), Thomson Reuters (Westlaw AI & Ask Practical Law AI), and GPT-4, finding hallucination rates between 17% and 33% despite claims of being "hallucination-free".

>> Technical Deep Dive
The research evaluated these tools, which are built on a Retrieval-Augmented Generation (RAG) architecture that operates in two crucial steps:

1. Retrieval System:
- Uses neural text embeddings to capture semantic meaning
- Employs both lexical and semantic search mechanisms
- Implements document filtering and extraction
- Retrieves relevant legal documents from vast databases

2. Generation Pipeline:
- Processes retrieved documents alongside original queries
- Synthesizes information from multiple legal sources
- Generates responses based on retrieved context
- Includes citation verification mechanisms
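The two-step flow above can be sketched in a few lines. This is a toy illustration only, assuming a tiny hand-made corpus and keyword-overlap scoring; it is not how Lexis+ AI or Westlaw actually work:

```python
# Toy sketch of the two-step RAG flow: (1) retrieve relevant
# documents, (2) generate an answer grounded in what was retrieved.
# The corpus, scoring, and "generator" are illustrative stand-ins.

CORPUS = {
    "smith-v-jones": "Negligence requires duty, breach, causation, and damages.",
    "doe-v-roe": "A contract requires offer, acceptance, and consideration.",
    "state-v-kim": "Hearsay is an out-of-court statement offered for its truth.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Step 1: lexical retrieval by token overlap (real systems also
    use neural embeddings for semantic search)."""
    q_tokens = set(query.lower().split())
    scored = sorted(
        CORPUS,
        key=lambda doc_id: len(q_tokens & set(CORPUS[doc_id].lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(query: str, doc_ids: list[str]) -> str:
    """Step 2: synthesize an answer from the retrieved context,
    citing the sources (here plain string assembly; in practice an LLM)."""
    context = " ".join(CORPUS[d] for d in doc_ids)
    citations = ", ".join(doc_ids)
    return f"Q: {query}\nA (based on {citations}): {context}"

docs = retrieve("what are the elements of negligence")
print(generate("what are the elements of negligence", docs))
```

Real systems replace the token-overlap scorer with neural embeddings and the string assembly with an LLM call, which is exactly where hallucinations can creep in even when the retrieved context is correct.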

>> Performance Breakdown:
- Lexis+ AI: 65% accuracy rate
- Westlaw AI: 42% accuracy rate
- Ask Practical Law AI: Over 60% incomplete answers

>> Why This Matters
This research exposes critical vulnerabilities in AI legal tools that lawyers increasingly rely on. It's essential for legal professionals to understand these limitations when incorporating AI into their practice.
Reacted to prithivMLmods's post with β€οΈπŸ€— 7 days ago
πŸ… Glif App's Remixes feature allows you to slap a logo onto anything, seamlessly integrating the input image (logo) into various contexts. The result is stunning remixes that blend the input logo with generated images (img2img logo mapping) for incredible outcomes.

Check out Any Logo Anywhere remixes on Glif: [Glif Remixes](https://glif.app/glifs/cm3o7dfsd002610z48sz89yih/remixes)

🌐The browser extension enables thousands of Glif-based img2img workflows on any image you find online. Experience Glif Remix with WebAI: [Chrome Extension](https://chromewebstore.google.com/detail/glif-remix-the-web-with-a/abfbooehhdjcgmbmcpkcebcmpfnlingo)

πŸ€—Have fun with the cool stuff !!
@prithivMLmods
posted an update 22 days ago
Build datasets for AI on the Hugging Face Hubβ€”10x easier than ever!

Today, I'm excited to share our biggest feature since we joined Hugging Face.

Here’s how it works:

1. Pick a datasetβ€”upload your own or choose from 240K open datasets.
2. Paste the Hub dataset ID into Argilla and set up your labeling interface.
3. Share the URL with your team or the whole community!

And the best part? It’s:
- No code – no Python needed
- Integrated – all within the Hub
- Scalable – from solo labeling to 100s of contributors

I am incredibly proud of the team for shipping this after weeks of work and many quick iterations.

Let's make this sentence obsolete: "Everyone wants to do the model work, not the data work."


Read, share, and like the HF blog post:
https://huggingface.co/blog/argilla-ui-hub
Reacted to davidberenstein1957's post with πŸ‘€ 22 days ago
Import any dataset from the Hub and configure your labeling tasks without needing any code!

Really excited about extending the Hugging Face Hub integration with many more streamlined features and workflows, and we would love to hear your feedback and ideas, so don't feel shy and reach out 🫢🏽

https://huggingface.co/blog/argilla-ui-hub
Reacted to albertvillanova's post with πŸ‘ about 1 month ago
🚨 We’ve just released a new tool to compare the performance of models in the πŸ€— Open LLM Leaderboard: the Comparator πŸŽ‰
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let’s walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. πŸ¦™πŸ§΅πŸ‘‡

1/ Load the Models' Results
- Go to the πŸ€— Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab πŸ“Š
- Head over to the Results tab.
- Here, you’ll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab βš™οΈ
- To ensure you’re comparing apples to apples, head to the Configs tab.
- Review both models’ evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it’s good to know before drawing conclusions! βœ…

4/ Compare Predictions by Sample in the Details Tab πŸ”
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery) and then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model’s outputs.

5/ With this tool, it’s never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you’re a researcher or enthusiast, you can instantly visualize improvements and dive into detailed comparisons.
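For intuition, the Results-tab comparison boils down to a per-task score diff between the two loaded models. A minimal sketch, with invented metric scores (not real leaderboard numbers):

```python
# Toy sketch of a metric-by-metric model comparison like the
# Comparator's Results tab. Scores below are invented for illustration.

def compare(results_a: dict, results_b: dict) -> dict:
    """Return, per shared task, the score delta (b - a), so positive
    values mean model B improved on that task."""
    shared = results_a.keys() & results_b.keys()
    return {task: round(results_b[task] - results_a[task], 4) for task in shared}

llama_31 = {"BBH": 0.482, "MMLU-Pro": 0.366, "MuSR": 0.401}
llama_32 = {"BBH": 0.455, "MMLU-Pro": 0.392, "MuSR": 0.410}

deltas = compare(llama_31, llama_32)
for task, delta in sorted(deltas.items()):
    print(f"{task:10s} {'+' if delta >= 0 else ''}{delta}")
```

The UI's green-to-red gradient is essentially a visual rendering of these deltas, with the Configs tab guarding against comparing scores produced under different evaluation settings.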

πŸš€ Try the πŸ€— Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
Reacted to m-ric's post with πŸ‘€ about 1 month ago
By far the coolest release of the day!
> The Open LLM Leaderboard, most comprehensive suite for comparing Open LLMs on many benchmarks, just released a comparator tool that lets you dig into the detail of differences between any models.

Here's me checking how the new Llama-3.1-Nemotron-70B that we've heard so much about compares to the original Llama-3.1-70B. 🤔🔎

Try it out here πŸ‘‰ open-llm-leaderboard/comparator
Reacted to davidberenstein1957's post with ❀️ about 1 month ago
You can now build a custom text classifier without days of human labeling!

πŸ‘ LLMs work reasonably well as text classifiers.
πŸ‘Ž They are expensive to run at scale and their performance drops in specialized domains.

πŸ‘ Purpose-built classifiers have low latency and can potentially run on CPU.
πŸ‘Ž They require labeled training data.

Combine the best of both worlds: the automatic labeling capabilities of LLMs and the high-quality annotations from human experts to train and deploy a specialized model.
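That combination can be sketched as a simple pipeline: auto-label everything with an LLM, then let human corrections take precedence when building the training set. A toy illustration, with a keyword rule standing in for the LLM and invented examples:

```python
# Toy sketch of combining LLM auto-labels with human corrections to
# build training data for a small, specialized classifier. The "LLM"
# is a stand-in keyword rule; texts and labels are invented.

def llm_label(text: str) -> str:
    """Stand-in for an LLM zero-shot text classifier."""
    lowered = text.lower()
    return "positive" if "great" in lowered or "love" in lowered else "negative"

def build_training_set(texts: list[str], human_overrides: dict) -> list[tuple]:
    """Auto-label every example, then let human annotations win."""
    return [
        (text, human_overrides.get(i, llm_label(text)))
        for i, text in enumerate(texts)
    ]

texts = [
    "I love this product",
    "Terrible battery life",
    "Not great, honestly",   # sarcasm the keyword rule gets wrong
]
# A human reviewer corrects example 2 in the loop:
train = build_training_set(texts, human_overrides={2: "negative"})
print(train)
```

The resulting `(text, label)` pairs are what you would feed to a small purpose-built classifier, giving you LLM-scale labeling throughput with human-quality labels where it matters.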

Blog: https://huggingface.co/blog/sdiazlor/custom-text-classifier-ai-human-feedback
posted an update about 1 month ago
Big news! You can now build strong ML models without days of human labeling.

You simply:
- Define your dataset, including annotation guidelines, labels and fields
- Optionally label some records manually.
- Use an LLM to auto-label your data with a human (you? your team?) in the loop!

Get started with this blog post:
https://huggingface.co/blog/sdiazlor/custom-text-classifier-ai-human-feedback
Reacted to clem's post with πŸ‘πŸš€πŸ˜Žβ€οΈ about 2 months ago
Open-source AI creates healthy competition in a field where natural tendencies lead to extreme concentration of power. Imagine a world where only one or two companies could build software. This is the biggest risk and ethical challenge of them all IMO. Let's fight this!
posted an update 2 months ago
Explore FinePersonas visually with Argilla and black-forest-labs/FLUX.1-schnell


Excited to share this space where the community can explore a tiny subset of FinePersonas

argilla/finepersonas


Dataset built with distilabel and free Serverless endpoints

This is just a first step towards more interesting experiments with FinePersonas. For example, can we use it to assess biases in text2image models?

If you have ideas I'd love to hear them in the comments!

Reacted to davidberenstein1957's post with πŸš€ 3 months ago
🌟 Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, enhanced Hugging Face Hub imports, and more!

πŸ–Ό Image Field: Seamlessly work with multimodal datasets
πŸŒ“ Dark Mode: Reduce eye strain with our sleek new look
πŸ€— Enhanced Hugging Face Hub import with the SDK
πŸ‡ͺπŸ‡Έ Spanish UI: Breaking language barriers

Plus more improvements to supercharge your model curation workflow!

Check out the full announcement for details and code examples: https://github.com/argilla-io/argilla/compare/v2.0.1...v2.1.0