Gabriel Martín Blázquez

gabrielmbmb

AI & ML interests

ML Engineer

Recent Activity

upvoted a paper 4 days ago
liked a dataset 6 days ago
microsoft/orca-agentinstruct-1M-v1
liked a model 6 days ago
numind/NuExtract-1.5-smol


gabrielmbmb's activity

reacted to erinys's post with 🚀 about 1 month ago
reacted to albertvillanova's post with 👍 about 1 month ago
🚨 We've just released a new tool to compare the performance of models in the 🤗 Open LLM Leaderboard: the Comparator 🎉
open-llm-leaderboard/comparator

Want to see how two different versions of LLaMA stack up? Let's walk through a step-by-step comparison of LLaMA-3.1 and LLaMA-3.2. 🦙🧵👇

1/ Load the Models' Results
- Go to the 🤗 Open LLM Leaderboard Comparator: open-llm-leaderboard/comparator
- Search for "LLaMA-3.1" and "LLaMA-3.2" in the model dropdowns.
- Press the Load button. Ready to dive into the results!

2/ Compare Metric Results in the Results Tab 📊
- Head over to the Results tab.
- Here, you'll see the performance metrics for each model, beautifully color-coded using a gradient to highlight performance differences: greener is better! 🌟
- Want to focus on a specific task? Use the Task filter to home in on comparisons for tasks like BBH or MMLU-Pro.

3/ Check Config Alignment in the Configs Tab ⚙️
- To ensure you're comparing apples to apples, head to the Configs tab.
- Review both models' evaluation configurations, such as metrics, datasets, prompts, few-shot configs...
- If something looks off, it's good to know before drawing conclusions! ✅

4/ Compare Predictions by Sample in the Details Tab 🔍
- Curious about how each model responds to specific inputs? The Details tab is your go-to!
- Select a Task (e.g., MuSR) and then a Subtask (e.g., Murder Mystery), then press the Load Details button.
- Check out the side-by-side predictions and dive into the nuances of each model's outputs.

5/ With this tool, it's never been easier to explore how small changes between model versions affect performance on a wide range of tasks. Whether you're a researcher or an enthusiast, you can instantly visualize improvements and dive into detailed comparisons.

🚀 Try the 🤗 Open LLM Leaderboard Comparator now and take your model evaluations to the next level!
reacted to tomaarsen's post with 🔥 about 1 month ago
📣 Sentence Transformers v3.2.0 is out, marking the biggest release for inference in 2 years! 2 new backends for embedding models: ONNX (+ optimization & quantization) and OpenVINO, allowing for speedups up to 2x-3x, AND Static Embeddings for 500x speedups at a 10-20% accuracy cost.

1๏ธโƒฃ ONNX Backend: This backend uses the ONNX Runtime to accelerate model inference on both CPU and GPU, reaching up to 1.4x-3x speedup depending on the precision. We also introduce 2 helper methods for optimizing and quantizing models for (much) faster inference.
2๏ธโƒฃ OpenVINO Backend: This backend uses Intel their OpenVINO instead, outperforming ONNX in some situations on CPU.

Usage is as simple as SentenceTransformer("all-MiniLM-L6-v2", backend="onnx"). Does your model not have an ONNX or OpenVINO file yet? No worries - it'll be auto-exported for you. Thank me later 😉
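A minimal sketch of that backend switch, assuming the ONNX extra is installed (e.g. pip install sentence-transformers[onnx]):

from sentence_transformers import SentenceTransformer

# Load with the ONNX backend; if the repo has no ONNX file yet,
# it is exported automatically on first load.
model = SentenceTransformer("all-MiniLM-L6-v2", backend="onnx")

embeddings = model.encode(["Sentence Transformers with ONNX is fast."])
print(embeddings.shape)  # (1, 384)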

🔒 Another major new feature is Static Embeddings: think word embeddings like GloVe and word2vec, but modernized. Static Embeddings are bags of token embeddings that are summed together to create text embeddings, allowing for lightning-fast embeddings that don't require any neural networks. They're initialized in one of 2 ways:

1๏ธโƒฃ via Model2Vec, a new technique for distilling any Sentence Transformer models into static embeddings. Either via a pre-distilled model with from_model2vec or with from_distillation where you do the distillation yourself. It'll only take 5 seconds on GPU & 2 minutes on CPU, no dataset needed.
2๏ธโƒฃ Random initialization. This requires finetuning, but finetuning is extremely quick (e.g. I trained with 3 million pairs in 7 minutes). My final model was 6.6% worse than bge-base-en-v1.5, but 500x faster on CPU.

Full release notes: https://github.com/UKPLab/sentence-transformers/releases/tag/v3.2.0
Documentation on Speeding up Inference: https://sbert.net/docs/sentence_transformer/usage/efficiency.html
reacted to erinys's post with 🔥 about 2 months ago
We did a thing! Eight weeks into our Hugging Face tenure, we can demo a round-trip of Xet-backed files from our local machine to a prod Hugging Face S3 bucket and back. 🚀

It's been exciting to dive into how the Hub is built and design our steel thread through the infrastructure. Now that the thread is up, we can kick off project Capacious Extremis 🪄 to add all the other goodies: authentication, authorization, deduplication, privacy, and more.

What does this mean for you? You're one step closer to ⚡ faster downloads, uploads, and iterative development on the Hugging Face Hub! This is our first step toward replacing Git LFS as the Hub's storage backend: https://huggingface.co/blog/xethub-joins-hf

Check out the demo on LinkedIn to see the transfer in action: https://www.linkedin.com/posts/annux_youve-heard-of-blue-steel-but-have-activity-7245062126535405568-3cvJ
reacted to davidberenstein1957's post with 🚀 about 2 months ago
🎉 Exciting News: Argilla 2.2.0 is Here! 🚀

We're thrilled to announce the release of Argilla 2.2.0, packed with powerful new features to enhance your data annotation and LLM workflow:

๐Ÿ—จ๏ธ ChatField: Work with text conversations natively in Argilla. Perfect for building datasets for conversational LLMs!
โš™๏ธ Adjustable Task Distribution: Modify settings on the fly and automatically recalculate completed and pending records.
๐Ÿ“Š Progress Tracking: Monitor annotation progress directly from the SDK, including user-specific metrics.
๐Ÿง  Automatic Settings Inference: Importing datasets from Hugging Face Hub just got easier with automatic settings detection.
๐Ÿ“‹ Task Templates: Jump-start your projects with pre-built templates for common dataset types.
๐Ÿ”ง Background Jobs Support: Improved performance for long-running tasks (requires Redis).
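As a rough sketch of ChatField usage with the Argilla 2.x Python SDK (the dataset name, question, and the local api_url/api_key are placeholders):

import argilla as rg

# Point the client at your own Argilla instance.
client = rg.Argilla(api_url="http://localhost:6900", api_key="argilla.apikey")

# A dataset whose field holds a conversation natively.
settings = rg.Settings(
    fields=[rg.ChatField(name="conversation")],
    questions=[rg.TextQuestion(name="improved_response")],
)
dataset = rg.Dataset(name="chat-annotation-demo", settings=settings, client=client)
dataset.create()

# Each ChatField value is a list of role/content messages.
dataset.records.log([
    rg.Record(
        fields={
            "conversation": [
                {"role": "user", "content": "What does ChatField do?"},
                {"role": "assistant", "content": "It stores conversations natively."},
            ]
        }
    )
])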

Upgrade now and supercharge your data workflows!

Check out our full changelog for more details: https://github.com/argilla-io/argilla/compare/v2.1.0...v2.2.0
reacted to Wauplin's post with 🔥 2 months ago
🚀 Exciting News! 🚀

We've just released huggingface_hub v0.25.0 and it's packed with powerful new features and improvements!

✨ Top Highlights:

• 📁 Upload large folders with ease using huggingface-cli upload-large-folder. Designed for your massive models and datasets - much recommended if you struggle to upload your Llama 70B fine-tuned model 🤡 (see the sketch below)
• 🔎 Search API: new search filters (gated status, inference status) and trending score fetching.
• ⚡ InferenceClient: major improvements simplifying chat completions and handling async tasks better.
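A small sketch of the Python-side counterparts, assuming v0.25.0's HfApi.upload_large_folder method and the new list_models filters (the repo id and folder path are placeholders):

from huggingface_hub import HfApi

api = HfApi()

# Resumable, chunked upload of a huge local folder.
api.upload_large_folder(
    repo_id="username/my-large-model",
    folder_path="path/to/local/checkpoint",
    repo_type="model",
)

# New search filter: list only non-gated models.
for model in api.list_models(gated=False, limit=5):
    print(model.id)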

We've also introduced tons of bug fixes and quality-of-life improvements - thanks to the awesome contributions from our community! 💪

💡 Check out the release notes: Wauplin/huggingface_hub#8

Want to try it out? Install the release with:

pip install huggingface_hub==0.25.0

reacted to jeffboudier's post with 🔥 2 months ago
Pro Tip - if you're a Firefox user, you can set up Hugging Chat as an integrated AI Assistant, with contextual links to summarize or simplify any text - handy!

In this short video I show how to set it up
reacted to louisbrulenaudet's post with 🔥 2 months ago
The Romulus model series has been released on Hugging Face, continually pre-trained on 34,864,949 tokens of French laws and intended to serve as a foundation for fine-tuning on labeled data 🤗

The training code, dataset and model weights are open and freely available on HF, and the training was based on H100 GPUs provided by Microsoft for Startups, using Unsloth AI by @danielhanchen and @shimmyshimmer 🦥

Link to the base model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1

Link to the instruct model: louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct

Link to the dataset: louisbrulenaudet/Romulus-cpt-fr

Please note that these models have not been aligned to produce usable text as they stand, and will certainly need to be fine-tuned for the desired tasks in order to produce satisfactory results.
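A minimal sketch of loading the instruct model with transformers (assuming the checkpoint ships a Llama 3.1 chat template and that accelerate is installed for device_map):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "louisbrulenaudet/Romulus-cpt-Llama-3.1-8B-v0.1-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize article 1240 of the French Code civil."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))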
reacted to davidberenstein1957's post with 🤗 2 months ago
Distilabel and synthetic data community interviews - the outcomes

We've been doing some interviews with community members to understand the needs surrounding synthetic data. Many thanks to the participants. Note that the interviewees were sourced from our community, so the results will likely reflect that.

Things distilabel does well
- security and reliability, by caching generations and having serializable pipelines
- scaling up generation, by parallelising inference and using Anyscale Ray
- solid implementations of state-of-the-art research papers

Things to improve
- communication about the fact that we support structured generation
- customization of existing prompt implementations is difficult
- creation of new tasks proves difficult
- arguments and parameters for tasks aren't visible at first glance
- the learning curve can be steep
- more tutorials that represent real-life usage

Things to note
- people create datasets from small scale up to millions of records
- people use synthetic data to move away from frontier model providers
- people mostly use 7B or 70B models for generating

Participate here: https://github.com/argilla-io/distilabel/issues
reacted to their post with 🔥 2 months ago
Yesterday @mattshumer released mattshumer/Reflection-Llama-3.1-70B, an impressive model that achieved incredible results on benchmarks like MMLU. The model was fine-tuned using Reflection-Tuning and the dataset used wasn't released, but I created a small recipe with distilabel that allows generating a dataset with a similar output format:

1. We use MagPie 🐦 in combination with https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct to generate reasoning instructions.
2. We generate a response again using https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct, but we steer the LLM to generate a specific output format using a custom system prompt. In the system prompt, we instruct the LLM that it will first have to think 💭 and produce reflections that help resolve ambiguities. After that, we instruct the LLM to generate an output based on the previous thinking.

In the dataset gabrielmbmb/distilabel-reflection-tuning you can find 5 rows that I generated with this recipe. You can also find the code of the pipeline in the file called reflection.py.
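A condensed, hedged sketch of how such a pipeline can look in distilabel (the system prompt is an illustrative stand-in for the longer one in reflection.py, and passing system_prompt directly to TextGeneration assumes a recent distilabel release; argument names may differ from the actual file):

from distilabel.llms import InferenceEndpointsLLM
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import MagpieGenerator, TextGeneration

MODEL_ID = "meta-llama/Meta-Llama-3.1-70B-Instruct"

# Illustrative stand-in for the full system prompt in reflection.py.
REFLECTION_SYSTEM_PROMPT = (
    "First reason inside <thinking> tags, using <reflection> blocks to catch "
    "and correct mistakes, then write the final answer inside <output> tags."
)

with Pipeline(name="reflection-tuning") as pipeline:
    # Step 1: MagPie-style generation of reasoning instructions.
    generate_instructions = MagpieGenerator(
        llm=InferenceEndpointsLLM(
            model_id=MODEL_ID,
            tokenizer_id=MODEL_ID,
            magpie_pre_query_template="llama3",
        ),
        only_instruction=True,
        num_rows=5,
    )
    # Step 2: respond, steered into the thinking/reflection/output format.
    generate_responses = TextGeneration(
        llm=InferenceEndpointsLLM(model_id=MODEL_ID, tokenizer_id=MODEL_ID),
        system_prompt=REFLECTION_SYSTEM_PROMPT,  # assumed keyword, see note above
    )
    generate_instructions >> generate_responses

if __name__ == "__main__":
    distiset = pipeline.run()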

reacted to davidberenstein1957's post with 🔥 2 months ago
🌟 Argilla v2.1.0 goes multi-modal: Image Field, Dark Mode, enhanced Hugging Face Hub imports and more!

🖼 Image Field: Seamlessly work with multimodal datasets
🌓 Dark Mode: Reduce eye strain with our sleek new look
🤗 Enhanced Hugging Face Hub import with the SDK
🇪🇸 Spanish UI: Breaking language barriers

Plus more improvements to supercharge your model curation workflow!

Check out the full announcement for details and code examples: https://github.com/argilla-io/argilla/compare/v2.0.1...v2.1.0
posted an update 3 months ago
reacted to davidberenstein1957's post with 🚀😎 3 months ago
reacted to maximuspowers's post with 🚀👀 3 months ago
Here's my favorite piece of the summer bias detection research project (paper coming in September). We trained BERT for multi-label token classification to identify:
- Generalizations
- Unfairness
- Stereotypes

HF Space: maximuspowers/bias-detection-ner
Article on Training: https://huggingface.co/blog/maximuspowers/bias-entity-recognition

Pls reach out with ideas!! Lots more info coming soon - our research group has workshops and a hackathon planned for launching this open source project. Thanks
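A hedged inference sketch for this kind of multi-label token classifier (it assumes a model checkpoint published under the same name as the Space, with independent per-label sigmoid outputs):

import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

# Hypothetical checkpoint name, mirroring the Space id above.
model_name = "maximuspowers/bias-detection-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

text = "Everyone knows people from that city are rude."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, seq_len, num_labels)

# Multi-label: a sigmoid per label instead of a softmax across labels.
probs = torch.sigmoid(logits)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, p in zip(tokens, probs):
    active = [model.config.id2label[i] for i, v in enumerate(p) if v > 0.5]
    if active:
        print(token, active)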
reacted to alvarobartt's post with 👍🔥 3 months ago
🤗 Serving Meta Llama 3.1 405B on Google Cloud is now possible via the Hugging Face Deep Learning Containers (DLCs) for Text Generation Inference (TGI)

In this post, we showcase how to deploy https://huggingface.co/meta-llama/Meta-Llama-3.1-405B-Instruct-FP8 on an A3 instance with 8 x H100 GPUs on Vertex AI

Thanks to the Hugging Face DLCs for TGI and Google Cloud Vertex AI, deploying a high-performance text generation container for serving Large Language Models (LLMs) has never been easier. And we're not going to stop here – stay tuned as we enable more experiences to build AI with open models on Google Cloud!

Read the full post at https://huggingface.co/blog/llama31-on-vertex-ai
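For a sense of the deployment flow described here, a heavily condensed sketch using the Vertex AI Python SDK (the project, region, and especially the DLC image URI are placeholders; the exact URI and environment variables are in the linked post):

from google.cloud import aiplatform

aiplatform.init(project="my-gcp-project", location="us-central1")

# Register the model with a Hugging Face TGI DLC as the serving container.
model = aiplatform.Model.upload(
    display_name="llama-3-1-405b-instruct-fp8",
    serving_container_image_uri="<huggingface-tgi-dlc-image-uri>",
    serving_container_environment_variables={
        "MODEL_ID": "meta-llama/Meta-Llama-3.1-405B-Instruct-FP8",
        "NUM_SHARD": "8",
    },
)

# Deploy on an A3 instance with 8 x H100 GPUs.
endpoint = model.deploy(
    machine_type="a3-highgpu-8g",
    accelerator_type="NVIDIA_H100_80GB",
    accelerator_count=8,
)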
reacted to joylarkin's post with 🔥 3 months ago
Introducing Fineweb-Edu-Fortified: An enhanced Fineweb-Edu dataset. 📚

This dataset is tailored for NLP tasks and helps streamline model training by offering a more refined, unique dataset. Perfect for startups and researchers looking for high-quality educational content to train, evaluate, or fine-tune AI models. The dataset is based on the Fineweb-Edu subset of the large Fineweb dataset and includes:

- Exact-match deduplication across all crawls
- Embeddings for each row using the TaylorAI/bge-micro model
- Count column indicating duplication frequency
- Includes data from 95 Common Crawl crawls (2013-2024)
- Rows have been reduced from 1.279B to 0.324B after deduplication
- It comprises ~375B tokens (down from 1,320B in Fineweb-Edu)

Access the entire Fineweb-Edu-Fortified dataset on Hugging Face → airtrain-ai/fineweb-edu-fortified

Try a semantic search demo via this Hugging Face Space → airtrain-ai/fineweb-edu-fortified-search-demo
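To peek at the dataset without downloading ~375B tokens, a hedged sketch using datasets streaming (column names follow the list above; the exact config layout may differ):

from datasets import load_dataset

# Stream rows instead of materializing the full dataset locally.
ds = load_dataset(
    "airtrain-ai/fineweb-edu-fortified",
    split="train",
    streaming=True,
)

for row in ds.take(3):
    # Each row carries the text, its duplication count, and a bge-micro embedding.
    print(row["text"][:80], row.get("count"))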

Many thanks to the amazing @josh-sematic for his work on this project, the Fineweb/Fineweb-Edu team at Hugging Face for producing the original datasets and for their support during our work on Fineweb-Edu-Fortified, and also thanks to @underspirit for pointing out the reduction in dataset size that could be achieved via deduplication. 🤗

reacted to Avelina's post with 👀 3 months ago
Hey HF. I just released a new reward modelling dataset: Avelina/UltraSteer-v0

UltraSteer-V0 is a massive collection of single- and multi-turn dialogue with fine-grained reward labels produced by Nvidia's nvidia/Llama2-13B-SteerLM-RM reward model. It contains a total of 2.3M labelled sequences taken from high-quality datasets, comprising 2.8M labelled turns, each annotated with 9 attributes produced as-is by the reward model.

This is still very much an early version of the dataset (but it's fully usable!), and an updated version is on the way along with a full paper.

I would really appreciate it if people could take a look at the dataset and suggest any improvements (e.g. more data sources, different cleaning approaches, different label schemas, etc.) in the community section.