URL | Headline | Authors | Publication Date | Article Text |
---|---|---|---|---|
https://huggingface.co/blog/leaderboard-hebrew | Introducing the Open Leaderboard for Hebrew LLMs! | Shaltiel Shmidman, Tal Geva, Omer Koren, Clémentine Fourrier | May 5, 2024 | This project addresses the critical need for advancement in Hebrew NLP. As Hebrew is considered a low-resource language, existing LLM leaderboards often lack benchmarks that accurately reflect its unique characteristics. Today, we are excited to introduce a pioneering effort to change this narrative — our new open LLM leaderboard, specifically designed to evaluate and enhance language models in Hebrew.Hebrew is a morphologically rich language with a complex system of roots and patterns. Words are built from roots with prefixes, suffixes, and infixes used to modify meaning, tense, or form plurals (among other functions). This complexity can lead to the existence of multiple valid word forms derived from a single root, making traditional tokenization strategies, designed for morphologically simpler languages, ineffective. As a result, existing language models may struggle to accurately process and understand the nuances of Hebrew, highlighting the need for benchmarks that cater to these unique linguistic properties.LLM research in Hebrew therefore needs dedicated benchmarks that cater specifically to the nuances and linguistic properties of the language. Our leaderboard is set to fill this void by providing robust evaluation metrics on language-specific tasks, and promoting an open community-driven enhancement of generative language models in Hebrew. We believe this initiative will be a platform for researchers and developers to share, compare, and improve Hebrew LLMs.Leaderboard Metrics and TasksWe have developed four key datasets, each designed to test language models on their understanding and generation of Hebrew, irrespective of their performance in other languages. These benchmarks use a few-shot prompt format to evaluate the models, ensuring that they can adapt and respond correctly even with limited context.Below is a summary of each of the benchmarks included in the leaderboard. For a more comprehensive breakdown of each dataset, scoring system, prompt construction, please visit the About tab of our leaderboard. Hebrew Question Answering: This task evaluates a model's ability to understand and process information presented in Hebrew, focusing on comprehension and the accurate retrieval of answers based on context. It checks the model's grasp of Hebrew syntax and semantics through direct question-and-answer formats. Source: HeQ dataset's test subset.Sentiment Accuracy: This benchmark tests the model's ability to detect and interpret sentiments in Hebrew text. It assesses the model's capability to classify statements accurately as positive, negative, or neutral based on linguistic cues. Source: Hebrew Sentiment - a Sentiment-Analysis Dataset in Hebrew.Winograd Schema Challenge: The task is designed to measure the model’s understanding of pronoun resolution and contextual ambiguity in Hebrew. It tests the model’s ability to use logical reasoning and general world knowledge to disambiguate pronouns correctly in complex sentences.Source: A Translation of the Winograd Schema Challenge to Hebrew, by Dr. Vered Schwartz.Translation: This task assesses the model's proficiency in translating between English and Hebrew. 
It evaluates the linguistic accuracy, fluency, and the ability to preserve meaning across languages, highlighting the model’s capability in bilingual translation tasks.Source: NeuLabs-TedTalks aligned translation corpus.Technical SetupThe leaderboard is inspired by the Open LLM Leaderboard, and uses the Demo Leaderboard template. Models that are submitted are deployed automatically using HuggingFace’s Inference Endpoints and evaluated through API requests managed by the lighteval library.The implementation was straightforward, with the main task being to set up the environment; the rest of the code ran smoothly.Engage with UsWe invite researchers, developers, and enthusiasts to participate in this initiative. Whether you're interested in submitting your model for evaluation or joining the discussion on improving Hebrew language technologies, your contribution is crucial. Visit the submission page on the leaderboard for guidelines on how to submit models for evaluation, or join the discussion page on the leaderboard’s HF space.This new leaderboard is not just a benchmarking tool; we hope it will encourage the Israeli tech community to recognize and address the gaps in language technology research for Hebrew. By providing detailed, specific evaluations, we aim to catalyze the development of models that are not only linguistically diverse but also culturally accurate, paving the way for innovations that honor the richness of the Hebrew language. Join us in this exciting journey to reshape the landscape of language modeling!SponsorshipThe leaderboard is proudly sponsored by DDR&D IMOD / The Israeli National Program for NLP in Hebrew and Arabic in collaboration with DICTA: The Israel Center for Text Analysis and Webiks, a testament to the commitment towards advancing language technologies in Hebrew. We would like to extend our gratitude to Prof. Reut Tsarfaty from Bar-Ilan University for her scientific consultation and guidance. |
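The evaluation flow described above (few-shot prompts sent to models deployed on Inference Endpoints, orchestrated by lighteval) can be pictured with a small sketch. The prompt wording, dataset fields, and endpoint placeholder below are illustrative assumptions, not the leaderboard's actual lighteval configuration:

```python
from huggingface_hub import InferenceClient

# One solved example; the real benchmark draws several shots from the HeQ test subset.
few_shot_examples = [
    {"context": "<Hebrew passage>", "question": "<Hebrew question>", "answer": "<answer>"},
]

def build_few_shot_prompt(examples, context, question):
    """Concatenate solved examples, then the target item with an empty answer slot."""
    blocks = [
        f"הקשר: {ex['context']}\nשאלה: {ex['question']}\nתשובה: {ex['answer']}"
        for ex in examples
    ]
    blocks.append(f"הקשר: {context}\nשאלה: {question}\nתשובה:")
    return "\n\n".join(blocks)

client = InferenceClient(model="<your endpoint URL>", token="<your token>")
prompt = build_few_shot_prompt(few_shot_examples, context="<passage>", question="<question>")
print(client.text_generation(prompt, max_new_tokens=64))
```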
https://huggingface.co/blog/leaderboard-artificial-analysis | Bringing the Artificial Analysis LLM Performance Leaderboard to Hugging Face | Micah Hill-Smith, George Cameron, Clémentine Fourrier | May 3, 2024 | Building applications with LLMs requires considering more than just quality: for many use-cases, speed and price are equally or more important. For consumer applications and chat experiences, speed and responsiveness are critical to user engagement. Users expect near-instant responses, and delays can directly lead to reduced engagement. When building more complex applications involving tool use or agentic systems, speed and cost become even more important, and can become the limiting factor on overall system capability. The time taken by sequential requests to LLMs can quickly stack up for each user request adding to the cost. This is why Artificial Analysis (@ArtificialAnlys) developed a leaderboard evaluating price, speed and quality across >100 serverless LLM API endpoints, now coming to Hugging Face.Find the leaderboard here!The LLM Performance Leaderboard The LLM Performance Leaderboard aims to provide comprehensive metrics to help AI engineers make decisions on which LLMs (both open & proprietary) and API providers to use in AI-enabled applications.When making decisions regarding which AI technologies to use, engineers need to consider quality, price and speed (latency & throughput). The LLM Performance Leaderboard brings all three together to enable decision making in one place across both proprietary & open models. Source: LLM Performance LeaderboardMetric coverage The metrics reported are:Quality: a simplified index for comparing model quality and accuracy, calculated based on metrics such as MMLU, MT-Bench, HumanEval scores, as reported by the model authors, and Chatbot Arena ranking.Context window: the maximum number of tokens an LLM can work with at any one time (including both input and output tokens).Pricing: the prices charged by a provider to query the model for inference. We report input/output per-token pricing, as well as "blended" pricing to compare hosting providers with a single metric. We blend input and output pricing at a 3:1 ratio (i.e., an assumption that the length of input is 3x longer than the output).Throughput: how fast an endpoint outputs tokens during inference, measured in tokens per second (often referred to as tokens/s or "TPS"). We report the median, P5, P25, P75 and P95 values measured over the prior 14 days.Latency: how long the endpoint takes to respond after the request has been sent, known as Time to First Token ("TTFT") and measured in seconds. We report the median, P5, P25, P75 and P95 values measured over the prior 14 days.For further definitions, see our full methodology page. Test Workloads The leaderboard allows exploration of performance under several different workloads (6 combinations in total):varying the prompt length: ~100 tokens, ~1k tokens, ~10k tokens.running parallel queries: 1 query, 10 parallel queries.Methodology We test every API endpoint on the leaderboard 8 times per day, and leaderboard figures represent the median measurement of the last 14 days. We also have percentile breakdowns within the collapsed tabs.Quality metrics are currently collected on a per-model basis and show results reports by model creators, but watch this space as we begin to share results from our independent quality evaluations across each endpoint. 
For further definitions, see our full methodology page. Highlights (May 2024, see the leaderboard for the latest): The language models market has exploded in complexity over the last year. Launches that have shaken up the market just within the last two months include proprietary models like Anthropic's Claude 3 series and open models such as Databricks' DBRX, Cohere's Command R Plus, Google's Gemma, Microsoft's Phi-3, Mistral's Mixtral 8x22B and Meta's Llama 3. Price and speed vary considerably between models and providers. From Claude 3 Opus to Llama 3 8B, there is a 300x pricing spread - that's more than two orders of magnitude! API providers have increased the speed of launching models. Within 48 hours, 7 providers were offering the Llama 3 models, speaking to the demand for new, open-source models and to the competitive dynamics between API providers. Key models to highlight across quality segments: High quality, typically higher price & slower: GPT-4 Turbo and Claude 3 Opus. Moderate quality, price & speed: Llama 3 70B, Mixtral 8x22B, Command R+, Gemini 1.5 Pro, DBRX. Lower quality, but with much faster speed and lower pricing available: Llama 3 8B, Claude 3 Haiku, Mixtral 8x7B. Our chart of Quality vs. Throughput (tokens/s) shows the range of options with different quality and performance characteristics. Source: artificialanalysis.ai/models. Use Case Example: Speed and Price can be as important as Quality. In some cases, design patterns involving multiple requests with faster and cheaper models can result in not only lower cost but better overall system quality compared to using a single larger model. For example, consider a chatbot that needs to browse the web to find relevant information from recent news articles. One approach would be to use a large, high-quality model like GPT-4 Turbo to run a search, then read and process the top handful of articles. Another would be to use a smaller, faster model like Llama 3 8B to read and extract highlights from dozens of web pages in parallel, and then use GPT-4 Turbo to assess and summarize the most relevant results. The second approach will be more cost effective, even after accounting for reading 10x more content, and may result in higher quality results. Get in touch: Please follow us on Twitter and LinkedIn for updates. We're available via message on either, as well as on our website and via email. |
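Since the 3:1 blended price above is just arithmetic, it is easy to reproduce. A quick sketch (the prices below are made up for illustration, and Artificial Analysis may quote different units):

```python
def blended_price(input_price: float, output_price: float) -> float:
    """Blend input/output per-token pricing at a 3:1 input:output ratio."""
    return (3 * input_price + 1 * output_price) / 4

# Hypothetical endpoint: $0.50 per 1M input tokens, $1.50 per 1M output tokens.
print(blended_price(0.50, 1.50))  # -> 0.75 (blended price per 1M tokens)
```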
https://huggingface.co/blog/asr-diarization | Powerful ASR + diarization + speculative decoding with Hugging Face Inference Endpoints | Sergei Petrov, Vaibhav Srivastav, Pedro Cuenca, Philipp Schmid | May 1, 2024 | Whisper is one of the best open source speech recognition models and definitely the one most widely used. Hugging Face Inference Endpoints make it very easy to deploy any Whisper model out of the box. However, if you’d like to introduce additional features, like a diarization pipeline to identify speakers, or assisted generation for speculative decoding, things get trickier. The reason is that you need to combine Whisper with additional models, while still exposing a single API endpoint. We'll solve this challenge using a custom inference handler, which will implement the Automatic Speech Recognition (ASR) and diarization pipeline on Inference Endpoints, as well as supporting speculative decoding. The implementation of the diarization pipeline is inspired by the famous Insanely Fast Whisper, and it uses a Pyannote model for diarization. This will also be a demonstration of how flexible Inference Endpoints are and that you can host pretty much anything there. Here is the code to follow along. Note that during initialization of the endpoint, the whole repository gets mounted, so your handler.py can refer to other files in your repository if you prefer not to have all the logic in a single file. In this case, we decided to separate things into several files to keep things clean: handler.py contains the initialization and inference code; diarization_utils.py has all the diarization-related pre- and post-processing; config.py has ModelSettings and InferenceConfig. ModelSettings defines which models will be utilized in the pipeline (you don't have to use all of them), and InferenceConfig defines the default inference parameters. Starting with PyTorch 2.2, SDPA supports Flash Attention 2 out of the box, so we'll use that version for faster inference. The main modules: This is a high-level diagram of what the endpoint looks like under the hood: The implementation of the ASR and diarization pipelines is modularized to cater to a wider range of use cases - the diarization pipeline operates on top of ASR outputs, and you can use only the ASR part if diarization is not needed. For diarization, we propose using the Pyannote model, currently a SOTA open source implementation. We’ll also add speculative decoding as a way to speed up inference. The speedup is achieved by using a smaller and faster model to suggest generations that are validated by the larger model. Learn more about how it works with Whisper specifically in this great blog post. Speculative decoding comes with restrictions: at least the decoder part of the assistant model should have the same architecture as that of the main model, and the batch size must be 1. Make sure to take the above into account. Depending on your production use case, supporting larger batches can be faster than speculative decoding. 
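To make these restrictions concrete, here is a minimal sketch of assisted generation for Whisper outside of any endpoint, using the same model pair configured later in this post. The generate_kwargs plumbing is an assumption based on the transformers assisted-generation API rather than the handler's exact code, and the batch size stays at 1 as required:

```python
import torch
from transformers import AutoModelForCausalLM, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Smaller, faster draft model whose decoder matches the main model's architecture.
assistant_model = AutoModelForCausalLM.from_pretrained(
    "distil-whisper/distil-large-v3",
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
)

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    torch_dtype=torch_dtype,
    device=device,
)

# The assistant drafts tokens that the main Whisper model verifies in a single
# forward pass; batch size stays at 1, as required for assisted generation.
result = asr("sample.wav", generate_kwargs={"assistant_model": assistant_model})
print(result["text"])
```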
If you don't want to use an assistant model, just keep the assistant_model in the configuration as None.If you do use an assistant model, a great choice for Whisper is a distilled version.Set up your own endpointThe easiest way to start is to clone the custom handler repository using the repo duplicator.Here is the model loading piece from the handler.py:from pyannote.audio import Pipelinefrom transformers import pipeline, AutoModelForCausalLM...self.asr_pipeline = pipeline("automatic-speech-recognition",model=model_settings.asr_model,torch_dtype=torch_dtype,device=device)self.assistant_model = AutoModelForCausalLM.from_pretrained(model_settings.assistant_model,torch_dtype=torch_dtype,low_cpu_mem_usage=True,use_safetensors=True) ...self.diarization_pipeline = Pipeline.from_pretrained(checkpoint_path=model_settings.diarization_model,use_auth_token=model_settings.hf_token,) ...You can customize the pipeline based on your needs. ModelSettings, in the config.py file, holds the parameters used for initialization, defining the models to use during inference:class ModelSettings(BaseSettings):asr_model: strassistant_model: Optional[str] = Nonediarization_model: Optional[str] = Nonehf_token: Optional[str] = NoneThe parameters can be adjusted by passing environment variables with corresponding names - this works both with a custom container and an inference handler. It’s a Pydantic feature. To pass environment variables to a container during build time you’ll have to create an endpoint via an API call (not via the interface). You could hardcode model names instead of passing them as environment variables, but note that the diarization pipeline requires a token to be passed explicitly (hf_token). You are not allowed to hardcode your token for security reasons, which means you will have to create an endpoint via an API call in order to use a diarization model.As a reminder, all the diarization-related pre- and postprocessing utils are in diarization_utils.pyThe only required component is an ASR model. Optionally, an assistant model can be specified to be used for speculative decoding, and a diarization model can be used to partition a transcription by speakers.Deploy on Inference EndpointsIf you only need the ASR part you could specify asr_model/assistant_model in the config.py and deploy with a click of a button:To pass environment variables to containers hosted on Inference Endpoints you’ll need to create an endpoint programmatically using the provided API. 
Below is an example call:body = {"compute": {"accelerator": "gpu","instanceSize": "medium","instanceType": "g5.2xlarge","scaling": {"maxReplica": 1,"minReplica": 0}},"model": {"framework": "pytorch","image": {# a default container"huggingface": {"env": {# this is where a Hub model gets mounted"HF_MODEL_DIR": "/repository", "DIARIZATION_MODEL": "pyannote/speaker-diarization-3.1","HF_TOKEN": "<your_token>","ASR_MODEL": "openai/whisper-large-v3","ASSISTANT_MODEL": "distil-whisper/distil-large-v3"}}},# a model repository on the Hub"repository": "sergeipetrov/asrdiarization-handler","task": "custom"},# the endpoint name"name": "asr-diarization-1","provider": {"region": "us-east-1","vendor": "aws"},"type": "private"}When to use an assistant modelTo give a better idea on when using an assistant model is beneficial, here's a benchmark performed with k6:# Setup:# GPU: A10ASR_MODEL=openai/whisper-large-v3ASSISTANT_MODEL=distil-whisper/distil-large-v3# long: 60s audio; short: 8s audiolong_assisted..................: avg=4.15s min=3.84s med=3.95s max=6.88s p(90)=4.03s p(95)=4.89s long_not_assisted..............: avg=3.48s min=3.42s med=3.46s max=3.71s p(90)=3.56s p(95)=3.61s short_assisted.................: avg=326.96ms min=313.01ms med=319.41ms max=960.75ms p(90)=325.55ms p(95)=326.07msshort_not_assisted.............: avg=784.35ms min=736.55ms med=747.67ms max=2s p(90)=772.9ms p(95)=774.1msAs you can see, assisted generation gives dramatic performance gains when an audio is short (batch size is 1). If an audio is long, inference will automatically chunk it into batches, and speculative decoding may hurt inference time because of the limitations we discussed before.Inference parametersAll the inference parameters are in config.py:class InferenceConfig(BaseModel):task: Literal["transcribe", "translate"] = "transcribe"batch_size: int = 24assisted: bool = Falsechunk_length_s: int = 30sampling_rate: int = 16000language: Optional[str] = Nonenum_speakers: Optional[int] = Nonemin_speakers: Optional[int] = Nonemax_speakers: Optional[int] = NoneOf course, you can add or remove parameters as needed. The parameters related to the number of speakers are passed to a diarization pipeline, while all the others are mostly for the ASR pipeline. sampling_rate indicates the sampling rate of the audio to process and is used for preprocessing; the assisted flag tells the pipeline whether to use speculative decoding. Remember that for assisted generation the batch_size must be set to 1.PayloadOnce deployed, send your audio along with the inference parameters to your inference endpoint, like this (in Python):import base64import requestsAPI_URL = "<your endpoint URL>"filepath = "/path/to/audio"with open(filepath, "rb") as f:audio_encoded = base64.b64encode(f.read()).decode("utf-8")data = {"inputs": audio_encoded,"parameters": {"batch_size": 24}}resp = requests.post(API_URL, json=data, headers={"Authorization": "Bearer <your token>"})print(resp.json())Here the "parameters" field is a dictionary that contains all the parameters you'd like to adjust from the InferenceConfig. 
Note that parameters not specified in the InferenceConfig will be ignored. Or with InferenceClient (there is also an async version):

import base64
from huggingface_hub import InferenceClient

client = InferenceClient(model="<your endpoint URL>", token="<your token>")

with open("/path/to/audio", "rb") as f:
    audio_encoded = base64.b64encode(f.read()).decode("utf-8")

data = {"inputs": audio_encoded, "parameters": {"batch_size": 24}}
res = client.post(json=data)

Recap: In this blog, we discussed how to set up a modularized ASR + diarization + speculative decoding pipeline with Hugging Face Inference Endpoints. We did our best to make it easy to configure and adjust the pipeline as needed, and deployment with Inference Endpoints is always a piece of cake! We are lucky to have great models and tools openly available to the community that we used in the implementation: a family of Whisper models by OpenAI; a diarization model by Pyannote; and the Insanely Fast Whisper repository, which was the main source of inspiration. There is a repo that implements the same pipeline along with the server part (FastAPI + Uvicorn). It may come in handy if you'd like to customize it even further or host it somewhere else. |
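To round off, here is a stripped-down sketch of what a custom EndpointHandler for this setup can look like, covering only the required ASR path. The real handler.py in the linked repository additionally wires in the assistant model, the pyannote diarization pipeline, and the utilities from diarization_utils.py, so treat this as an illustration of the interface rather than the actual implementation:

```python
import base64
from typing import Any, Dict

import torch
from transformers import pipeline


class EndpointHandler:
    def __init__(self, path: str = ""):
        device = 0 if torch.cuda.is_available() else -1
        # Only the required ASR part is shown; the assistant model and the
        # pyannote diarization pipeline would be initialized here as in handler.py.
        self.asr_pipeline = pipeline(
            "automatic-speech-recognition",
            model="openai/whisper-large-v3",
            device=device,
        )

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # The payload mirrors the request shown above: base64 audio + parameters.
        audio = base64.b64decode(data["inputs"])
        parameters = data.get("parameters", {})
        asr_out = self.asr_pipeline(
            audio,
            chunk_length_s=parameters.get("chunk_length_s", 30),
            batch_size=parameters.get("batch_size", 24),
            return_timestamps=True,
        )
        # Diarization (if configured) would run here on the ASR chunks and
        # assign a speaker label to each segment before returning.
        return {"text": asr_out["text"], "chunks": asr_out["chunks"]}
```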
https://huggingface.co/blog/evaluation-structured-outputs | Improving Prompt Consistency with Structured Generations | Will Kurt, Remi Louf, Clémentine Fourrier | April 30, 2024 | Recently, the Leaderboards and Evals research team at Hugging Face did small experiments, which highlighted how fickle evaluation can be. For a given task, results are extremely sensitive to minuscule changes in prompt format! However, this is not what we want: a model prompted with the same amount of information as input should output similar results.We discussed this with our friends at Dottxt, who had an idea - what if there was a way to increase consistency across prompt formats? So, let's dig in!Context: Evaluation Sensitivity to Format ChangesIt has become increasingly clear that LLM benchmark performance is closely, and somewhat surprisingly, dependent on the format of the prompt itself, even though a number of methods have been introduced through the years to reduce prompt-related variance. For example, when we evaluate models in few-shot, we provide format examples to the model to force a specific pattern in output; when we compare the log-likelihood of plausible answers instead of allowing free-form generation, we attempt to constrain the answer space.The Leaderboards and Evals team provided a demonstration of this by looking at 8 different prompt formats for a well known task, MMLU (looking at 4 subsets of the task). These prompt variations were provided to 5 different models (chosen because they were SOTA at the time for their size, and covered a variety of tokenization and languages). Scores were computed using a log-probability evaluation, where the most probable answer is considered the correct one, a classic metric for multi-choice tasks. Let's look at the different formats in more detail, by using the first question of the global_facts subset of MMLU.Question: “As of 2016, about what percentage of adults aged 18 years or older were overweight?”Choices: [ "10%", "20%", "40%", "80%" ]Correct choice: “40%”Without choices in the prompt As of 2016, about what percentage of adults aged 18 years or older were overweight?Q: As of 2016, about what percentage of adults aged 18 years or older were overweight? A: Question: As of 2016, about what percentage of adults aged 18 years or older were overweight? Answer: With choices in the prompt Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?Choices: 10% 20% 40% 80% Answer: Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?Choices: A. 10% B. 20% C. 40% D. 80% Answer: Question: As of 2016, about what percentage of adults aged 18 years or older were overweight?Choices: (A) 10% (B) 20% (C) 40% (D) 80% Answer: Log probs of 10%, 20%, 40%, 80% Log probs of 10%, 20%, 40%, 80% vs A, B, C, D Log probs of 10%, 20%, 40%, 80% vs (A), (B), (C), (D), Prompts either contain just the question, or some tags to indicate that we are in a question/answer format, and possibly the choices in the prompt. In all cases, evaluations compare the log-likelihood of the possible choices only. All these formats appear in the evaluation literature, and should contain virtually the same amount of information in each row. 
However, just below, you can see the wide variation in performance across these theoretically superficial changes!Each model sees its performance vary by around 10 points, with the exception of the most extreme example, Qwen1.5-7B, dropping all the way to an accuracy of 22.9% with the 7th prompt variation (mostly due to a tokenizer issue), with essentially the same information it was able to achieve an accuracy of up to 51.2% with another prompt.In isolation, a change in score is not necessarily a big deal so long as the ranking is consistent. However, as we can see in the next plot, ranking is impacted by these changes:No model is consistently ranked across prompts even though the only difference is their format, not the information itself. This means that if the authors of Gemma-7b wanted to show that their model was superior to Mistral-7B-v0.1, they could do so simply by choosing the correct prompt. As almost no one reports their precise evaluation setup, this is what has historically happened in model reports, where authors chose to report the setup most advantageous to their model (which is why you’ll see extremely weird reported numbers of few-shots in some papers).However, this is not the only source of variance in model scores. In extended experiments, we compared evaluating the same models, with the same prompt formats, using the exact same few-shot samples shuffled differently before the prompt (A/B/C/D/E Prompt vs C/D/A/B/E Prompt, for example). The following figure shows the model scores delta between these two few-shot orderings: we observe a difference of up to 3 points in performance for the same model/prompt combination!If we want to be able to properly evaluate and compare different models we need a way to overcome this challenge. Sclar, et al’s Quantifying Language Model’s Sensitivity to Spurious Features in Prompt Design also gives a good overview of this issue, and the authors introduce FormatSpread, a software tool that evaluates each model with multiple different variations of formats, then calculate the variance of that model's performance. Solutions such as this allow us to determine with more confidence which models are better than others, but they come at a high computation cost.What if we focused on the output, not the input, to make results more consistent across these small changes to format?While FormatSpread is a great attempt to make leaderboards more fair and honest, what we really want as practical users of LLMs is prompt consistency. That is, we would like to find some way to reduce this variance among prompts.At .txt, we focus on improving and better understanding structured generation, which is when the output of a model is constrained to follow a specific structure. Our library, Outlines, allows us to structure the output of an LLM by defining a regular expression or a context-free grammar (we give examples below). Our initial use case for structured generation was to make LLMs easier to interact with programmatically, by ensuring responses in well formatted JSON. However, we’ve continually been surprised by other benefits of structured generation we’ve uncovered. 
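For readers who have not used Outlines before, here is a brief sketch of what structured generation looks like in practice, constraining a model to answer the MMLU-style question above with exactly one answer letter. The API is shown as it stood around Outlines 0.0.x (spring 2024) and the model choice is arbitrary:

```python
import outlines

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")

# Constrain generation to exactly one of the four answer letters.
generator = outlines.generate.choice(model, ["A", "B", "C", "D"])
answer = generator(
    "Question: As of 2016, about what percentage of adults aged 18 years or older "
    "were overweight?\nChoices:\nA. 10%\nB. 20%\nC. 40%\nD. 80%\nAnswer:"
)
print(answer)  # one of "A", "B", "C", "D"
```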
When working on earlier research exploring the benefits of structured generation, we demonstrated that structured generation consistently improves benchmark performance, and came across an interesting edge case when exploring JSON structured prompts.In most cases, changing the prompt format to JSON, even when using unstructured generation, leads to improved benchmark performance for almost all models. However, this was not the case for MetaMath-Tulpar-7b-v2-Slerp, where we found a dramatic decrease in accuracy when using prompts formatted in JSON. Even more surprising was that when using structured generation to constrain the output of the model, the dip in performance was negligible! This led us to question whether or not structured generation could be exploited for prompt consistency.Note on the experimental setup: Focusing on n-shot and shot orderWhile in the above experiments, Hugging Face’s Leaderboard and Evals research team explored changes to the format of the prompt itself, for the next experiments we’re going to restrict the changes. To focus our exploration of prompt space, we’re going to look at varying just two properties of the prompt:Varying the number of “shots” or examples used in the prompt (n*-shot*)Varying the order of those shots (shot order, specified by a shot seed)For point 2, with a given n-shot we are only shuffling the same n examples. This means that all shuffles of a 1-shot prompt are the same. This is done to avoid conflating the format of a prompt with the information it contains. Clearly a 5-shot prompt contains more information than a 1-shot prompt, but every shuffling of a 5-shot prompt contains the same examples, only in a different order.Initial Exploration: GSM8K 1-8 shot promptingIn order to test this out further, we wanted to explore the behavior of two very similar but strong models in the 7B parameter space: Mistral-7Bv0.1 and Zephyr-7B-beta. The reason behind this choice is to not only study variance in individual outcomes, but to look at the changes in relative ranking. We use the GSM8K task which is a set of grade school math word problems.Here is the basic format of a GSM8K 1-shot prompt with the implied structure highlighted.In order to consistently generate correctly structured answers we create a regular expression that matches the structure we see inherent in the original prompt format. The following regex is used in Outlines to define the structure for generation:We can see in the regex that we allow the model to reason for anywhere from 200 to 700 characters, then it must declare that “The answer is” and then reply with up to 10 digit number (that cannot start with 0).It’s worth mentioning that the regex controlling the structure is similar, but not identical to, the regex used to parse out the answer. We’ve learned there’s an interesting bit of nuance in defining the structure since, like the prompt, it can impact performance. For example, notice that {200,700} in the regex. This means that the model has 200 to 700 characters to “reason” before answering. Changing these values can impact performance and leads to something we refer to as “thought control”, an area we’re hoping to write more about soon.Our first experiment was to continue exploring the GSM8K dataset and iterated on 1 through 8 shot prompting. 
The results, shown below, were very compelling.There are two major features we see in this figure: variance in performance across the n-shot setups was majorly reduced and there were no instances where the ranking swapped (Mistral consistently leads over Zephyr). It’s also worth pointing out that 1-shot structured performance is substantially better than 1-shot unstructured performance, and on par with 5-shot. This leads to another area of research we’re terming “prompt efficiency”.Diving Deeper: GPQA n-shot and shot order variationsFor the next experiment we wanted to look at varying both n-shots as well as the order of the n-shots. Order was controlled by setting the seed used for shuffling the examples. As mentioned previously, only the first n-shots are shuffled to keep the information consistent between prompts, this means that all 1-shot prompts are the same across seeds. Here’s an example of the shot order for 4-shot:seed4-shot order422-1-3-013371-0-3-219813-2-0-119920-3-1-2123451-0-2-3Additionally, to explore how transferable these results were, we changed the task to Graduate-Level Google-Proof Q&A Benchmark (GPQA). GPQA is a hard knowledge multi-choice evaluation task. Below is the prompt format and highlighted structure. For this next experiment we are specifically using the ‘diamond’ subset which represents curated and cleaned up high quality questions. Of the 198 questions in this dataset we reserve 8 for n-shot prompting (though only ever used the first 5), and then evaluated on the remaining 190 questions.Visualized below we can see a grid representing the accuracy achieved for all the possible combinations for shot seed and n, for the two models, both without (left) and with (right) structured generation.One thing which immediately stands out is that the structured output tends to score higher than the unstructured output across the board. We see the mean of each grid for structured and unstructured below:Mean of results across prompt seed and n-shotmodelunstructuredstructuredMistral-7B-v0.10.23600.2935Zephyr-7b-beta0.23870.3048Additionally, across all the values in the grid we also find reduced variance when comparing the structured with unstructured generation. Standard deviation in results across prompt seed and n-shotmodelunstructuredstructuredMistral-7B-v0.10.02130.0202Zephyr-7b-beta0.02730.0180This reduction in variance across the grid is similar to the reduction in variance we saw when looking at just n-shot changes for GSM8K.While increased expected performance and decreased variance are great properties to have, what we really want to understand is the impact on ranking. In the next plot we examine these grids in terms of which of the two models would be declared a winner:A: Zephyr-7b-betaB: Mistral-7B-v0.1“-”: tieAs we can see from these images, there is a major improvement in the consistency of calling a winner when structured generation is applied. These results paint a consistent picture with the findings we had using GSM8K across various n-shot.Conclusion and Future WorkWhile these results are incredibly promising, we still need to explore these results across more models and more tasks. What we’ve seen so far is that structured generation could prove to be an essential part of evaluation. Simultaneously increasing the expected score and decreasing the variance across prompt changes is a very promising result that deserves further research. |
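The GSM8K regex itself appears only as an image in the original post. Below is a hedged reconstruction that matches the description above (200 to 700 characters of free-form reasoning, then "The answer is" followed by a number of up to 10 digits that does not start with 0); the exact pattern used in the experiments may differ in whitespace and punctuation details:

```python
import outlines

# Reconstructed GSM8K-style structure: bounded reasoning, then a final answer.
gsm8k_regex = r"[\s\S]{200,700}\. The answer is [1-9][0-9]{0,9}\."

model = outlines.models.transformers("mistralai/Mistral-7B-v0.1")
generator = outlines.generate.regex(model, gsm8k_regex)

completion = generator(
    "Q: Natalia sold clips to 48 of her friends in April, and then she sold half "
    "as many clips in May. How many clips did Natalia sell altogether in April and May?\nA:"
)
print(completion)  # ends with "The answer is <number>."
```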
https://huggingface.co/blog/sc2-instruct | StarCoder2-Instruct: Fully Transparent and Permissive Self-Alignment for Code Generation | Yuxiang Wei, Federico Cassano, Jiawei Liu, Yifeng Ding, Naman Jain, Harm de Vries, Leandro von Werra, Arjun Guha, Lingming Zhang | April 29, 2024 | Instruction tuning is an approach of fine-tuning that gives large language models (LLMs) the capability to follow natural and human-written instructions. However, for programming tasks, most models are tuned on either human-written instructions (which are very expensive) or instructions generated by huge and proprietary LLMs (which may not be permitted). We introduce StarCoder2-15B-Instruct-v0.1, the very first entirely self-aligned code LLM trained with a fully permissive and transparent pipeline. Our open-source pipeline uses StarCoder2-15B to generate thousands of instruction-response pairs, which are then used to fine-tune StarCoder-15B itself without any human annotations or distilled data from huge and proprietary LLMs.StarCoder2-15B-Instruct achieves a 72.6 HumanEval score, even surpassing the 72.0 score of CodeLlama-70B-Instruct! Further evaluation on LiveCodeBench shows that the self-aligned model is even better than the same model trained on data distilled from GPT-4, implying that an LLM could learn more effectively from data within its own distribution than a shifted distribution from a teacher LLM.Method Our data generation pipeline mainly consists of three steps:Extract high-quality and diverse seed functions from The Stack v1, a huge corpus of permissively licensed source code.Create diverse and realistic code instructions that incorporate different code concepts present in the seed functions (e.g., data deserialization, list concatenation, and recursion).For each instruction, generate a high-quality response through execution-guided self-validation.In the following sections, we will explore each of these aspects in detail.Collecting seed code snippets To fully unlock the instruction-following capabilities of a code model, it should be exposed to a diverse set of instructions encompassing a wide range of programming principles and practices. Motivated by OSS-Instruct, we further promote such diversity by mining code concepts from open-source code snippets that are, specifically, well-formed seed Python functions from The Stack V1.For our seed dataset, we carefully extract all Python functions with docstrings in The Stack V1, infer dependencies required using autoimport, and apply the following filtering rules on all functions:Type checking: We apply the Pyright heuristic type-checker to remove all functions that produce static errors, signaling a possibly incorrect item.Decontamination: We detect and remove all benchmark items on which we evaluate. We use exact string match on both the solutions and prompts.Docstring Quality Filtering: We utilize StarCoder2-15B as a judge to remove functions with poor documentation. We prompt the base model with 7 few-shot examples, requiring it to respond with either "Yes" or "No" for retaining the item.Near-Deduplication: We utilize MinHash and locality-sensitive hashing with a Jaccard similarity threshold of 0.5 to filter duplicate seed functions in our dataset. This is the same process applied to StarCoder’s training data.This filtering pipeline results in a dataset of 250k Python functions filtered from 5M functions with docstrings. 
This process is highly inspired by the data collection pipeline used in MultiPL-T.Self-OSS-Instruct After collecting the seed functions, we use Self-OSS-Instruct to generate diverse instructions. In detail, we employ in-context learning to let the base StarCoder2-15B self-generate instructions from the given seed code snippets. This process utilizes 16 carefully designed few-shot examples, each formatted as (snippet, concepts, instruction). The instruction generation procedure is divided into two steps:Concepts extraction: For each seed function, StarCoder2-15B is prompted to produce a list of code concepts present within the function. Code concepts refer to the foundational principles and techniques used in programming, such as pattern matching and data type conversion, which are crucial for developers to master.Instruction generation: StarCoder2-15B is then prompted to self-generate a coding task that incorporates the identified code concepts.Eventually, 238k instructions are generated from this process.Response self-validation Given the instructions generated from Self-OSS-Instruct, our next step is to match each instruction with a high-quality response. Prior practices commonly rely on distilling responses from stronger teacher models, such as GPT-4, which hopefully exhibit higher quality. However, distilling proprietary models leads to non-permissive licensing and a stronger teacher model might not always be available. More importantly, teacher models can be wrong as well, and the distribution gap between teacher and student can be detrimental.We propose to self-align StarCoder2-15B by explicitly instructing the model to generate tests for self-validation after it produces a response interleaved with natural language. This process is similar to how developers test their code implementations. Specifically, for each instruction, StarCoder2-15B generates 10 samples of the format (NL Response, Test) and we filter out those falsified by the test execution under a sandbox environment. We then randomly select one passing response per instruction to the final SFT dataset. In total, we generated 2.4M (10 x 238k) responses for the 238k instructions with temperature 0.7, where 500k passed the execution test. After deduplication, we are left with 50k instructions, each paired with a random passing response, which we finally use as our SFT dataset.Evaluation On the popular and rigorous EvalPlus benchmark, StarCoder2-15B-Instruct stands out as the top-performing permissive LLM at its scale, outperforming the much larger Grok-1 Command-R+, DBRX, while closely matching Snowflake Arctic 480B and Mixtral-8x22B-Instruct. To our knowledge, StarCoder2-15B-Instruct is the first code LLM with a fully transparent and permissive pipeline reaching a 70+ HumanEval score. It drastically outperforms OctoCoder, which is the previous state-of-the-art permissive code LLM with a transparent pipeline.Even compared to powerful LLMs with restrictive licenses, StarCoder2-15B-Instruct remains competitive, surpassing Gemini Pro and Mistral Large and comparable to CodeLlama-70B-Instruct. Additionally, StarCoder2-15B-Instruct, trained purely on self-generated data, closely rivals OpenCodeInterpreter-SC2-15B, which finetunes StarCoder2-15B on distilled data from GPT-3.5/4.Besides EvalPlus, we also evaluated state-of-the-art open-source models with similar or smaller sizes on LiveCodeBench, which includes fresh coding problems created after 2023-09-01, as well as DS-1000 that targets data science programs. 
On LiveCodeBench, StarCoder2-15B-Instruct achieves the best results among the models evaluated and consistently outperforms OpenCodeInterpreter-SC2-15B which distills GPT-4 data. On DS-1000, the StarCoder2-15B-Instruct is still competitive despite being trained on very limited data science problems.Conclusion StarCoder2-15B-Instruct-v0.1 showcases for the first time that we can create powerful instruction-tuned code models without relying on stronger teacher models like GPT-4. This model demonstrates that self-alignment, where a model uses its own generated content to learn, is also effective for code. It is fully transparent and allows for distillation, setting it apart from other larger permissive but non-transparent models such as Snowflake-Arctic, Grok-1, Mixtral-8x22B, DBRX, and CommandR+. We have made our datasets and the entire pipeline, including data curation and training, fully open-source. We hope this seminal work can inspire more future research and development in this field.Resources StarCoder2-15B-Instruct-v0.1: the instruction-tuned modelstarcoder2-self-align: the self-alignment pipelineStarCoder2-Self-OSS-Instruct: the self-generated, instruction-tuning dataset |
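The execution-guided self-validation step lends itself to a short sketch. This is a simplification: the real pipeline extracts code from responses that interleave natural language and runs everything inside a sandbox with resource limits, whereas the snippet below simply executes each (response, test) pair in a fresh interpreter and keeps a random passing response per instruction. Running untrusted generated code this way is unsafe and is only meant to illustrate the filtering logic:

```python
import random
import subprocess
import sys


def passes_tests(response_code: str, test_code: str, timeout: float = 10.0) -> bool:
    """Run a generated response together with its generated tests in a fresh interpreter."""
    program = response_code + "\n\n" + test_code
    try:
        result = subprocess.run(
            [sys.executable, "-c", program], capture_output=True, timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0


def select_passing(samples):
    """samples: list of (response_code, test_code) pairs sampled for one instruction."""
    passing = [resp for resp, test in samples if passes_tests(resp, test)]
    return random.choice(passing) if passing else None
```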
https://huggingface.co/blog/leaderboard-cot | Introducing the Open Chain of Thought Leaderboard | Gregor Betz, Sebastian Cacean, Clémentine Fourrier, Kyle Richardson | April 23, 2024 | Chain-of-thought prompting is emerging as a powerful and effective design pattern for LLM-based apps and agents. The basic idea of chain-of-thought prompting is to let a model generate a step-by-step solution (“reasoning trace”) before answering a question or taking a decision. With the Open CoT Leaderboard we’re tracking LLMs’ ability to generate effective chain-of-thought traces for challenging reasoning tasks. Unlike most performance based leaderboards, we’re not scoring the absolute accuracy a model achieves on a given task, but the difference between the accuracy with and without chain-of-thought prompting:accuracy gain Δ = accuracy with CoT – accuracy w/o CoT.This allows us to truly inspect the impact that chain-of-thought has on model accuracy.Note: without CoT prompting, we use the loglikelihood accuracy to score the model on multiple choice evaluation.What’s the motivation behind such a leaderboard for chain-of-thought?Chain-of-thought prompting is a universally applicable prompting strategy that may improve explainability and accuracy of LLM-based apps and agents (see, e.g., this collection for recent research and implementations)). With frameworks like Langchain or LMQL, it’s straightforward to insert sophisticated reasoning chains in your apps. But even if you’ve never heard about chain-of-thought before, you may have noticed, while using a ChatBot, that it tends to proceed step by step before answering your query. So, a systematic, up-to-date comparison of LLMs’ ability to generate effective chain-of-thought traces may inform the decisions of builders and users when choosing a model. Over time, static "accuracy-based" benchmarks risk becoming less informative: does a model score well because of its superior skill, because it has seen the correct answers during training, or because it has been developed in a competitive context that is governed by this very benchmark? These widely acknowledged issues are addressed by recent eval approaches such as ChatBot arenas, the use of LLMs as judges, or dynamic benchmarks with programmatically generated tasks. We hope the Open CoT Leaderboard contributes to these efforts, notably by being more robust to training data contamination: knowing the answer to a question doesn’t ensure that one can reason effectively about it. Which tasks are used?The Open CoT Leaderboard evaluates LLMs’ ability to generate effective chain-of-thought reasoning traces for the following tasks:LogiQA (new translation of original version, and version 2.0 with new examples)LSAT dataset (including subsets on analytical reasoning, logical reasoning, and reading comprehension)Except for the original version of LogiQA, all these tasks are part of the AGIEval benchmark, and have been re-published as logikon-bench.We’ve chosen these tasks because theyare generic, i.e. 
can be solved through reasoning and just require commonsense knowledge;are still relatively difficult even for the most powerful LLMs (leaving enough room for improvement through chain-of-thought);have been introduced as AI benchmarks before (in AGIEval) and are widely used (e.g., in the Nous benchmark suite).All tasks are rendered as multiple-choice problems, with the answer options being enumerated in the prompt.We use the following prompt template for assessing baseline and CoT accuracies – the reasoning traces (starting with Reasoning) are only added in the case “with CoT”:Answer the following question about the given passage. Base your answer on the reasoning below.Passage: <passage>Question: <question>A. <choice1>B. <choice2>…Reasoning: <reasoning>Answer:How are chain-of-thought traces generated?[#cot-generation]The literature on chain-of-thought prompting has exploded in the last year, and the wealth of prompting strategies for CoT (involving, e.g., decoding, prompt wording, prompt chains, decomposition, aggregation and revision of reasoning traces) has become sheerly mindblowing.To cope with this variety, the evaluation pipeline behind the Open CoT Leaderboard implements CoT generation through modular and easily extensible prompt chains. So far, we have implemented two fairly simple prompting strategies: Classic: Problem presentation followed by “Let’s think step by step.” instruction.Reflect: Problem presentation followed by instruction to reflect upon the problem in general terms before solving it step by step.Let’s have a look at an example. The following problem is from the LSAT logical reasoning benchmark.Editorial: The structure of the present school calendar was established to satisfy the requirements of early-twentieth-century agricultural life. In those days, farmers needed their children to have long breaks during which they could remain at home and help with the harvest. The contemporary school year is thus made up of periods of study interspersed with long breaks. But agricultural life no longer occupies most of our citizens, so we can now make changes that serve the interests of children. Therefore, long breaks should be removed from the school calendar.Which one of the following is an assumption on which the editorial's argument depends?A) A change in the present school calendar that shortened breaks would serve the interests of agricultural life.B) Children of farmers need to continue observing a school calendar made up of periods of study interspersed with long breaks.C) Long breaks in the school calendar should be replaced with breaks that are no longer than workers' average vacations.D) A school calendar made up of periods of study without long breaks would serve the interests of children more than a school calendar with long breaks.E) During long breaks children have a tendency to forget what they have learned.Mixtral-8x7B-Instruct-v0.1 is doing fairly well in generating the following chain-of-thought traces:With prompting strategy ClassicWith prompting strategy ReflectThe editorial argues that the current school calendar, with long breaks, was designed to accommodate agricultural life but should now be changed because agricultural life no longer occupies most of our citizens. It concludes that long breaks should be removed from the school calendar.The assumption that the argument depends on is the belief that removing long breaks from the school calendar would serve the interests of children more than the current calendar. 
This is not explicitly stated in the editorial, but it is the underlying assumption that justifies the proposed change.Therefore, the correct answer is:D) A school calendar made up of periods of study without long breaks would serve the interests of children more than a school calendar with long breaks.The decision problem in abstract terms is to identify the assumption on which the editorial's argument depends. Common mistakes for this kind of problem include:Misinterpreting the argument's premises or conclusion.Overlooking the specific requirement of identifying an assumption.Confusing an assumption with a conclusion or a premise.A plan for how to solve this problem could be:Identify the premises and conclusion of the argument.List possible assumptions that could bridge the gap between the premises and the conclusion.Evaluate each assumption to see if it is necessary for the argument.Solving the problem step by step, following the plan:The premises of the argument are:The structure of the present school calendar was established to satisfy the requirements of early-twentieth-century agricultural life.Agricultural life no longer occupies most of our citizens.The conclusion of the argument is:Long breaks should be removed from the school calendar.Possible assumptions could be:A school calendar made up of periods of study without long breaks would serve the interests of children more than a school calendar with long breaks.Children of farmers no longer need to remain at home and help with the harvest during long breaks.A change in the present school calendar that shortened breaks would not serve the interests of agricultural life.Evaluating each assumption:The argument does not depend on the assumption that children of farmers no longer need to remain at home and help with the harvest during long breaks, because the argument is about the interests of all children, not just those of farmers.The argument does not depend on the assumption that a change in the present school calendar that shortened breaks would not serve the interests of agricultural life, because the argument is about the interests of children, not agricultural life.The argument does depend on the assumption that a school calendar made up of periods of study without long breaks would serve the interests of children more than a school calendar with long breaks. This is the assumption that justifies the conclusion.Therefore, the assumption on which the editorial's argument depends is D) A school calendar made up of periods of study without long breaks would serve the interests of children more than a school calendar with long breaks.Each of our two prompting strategies—Classic and Reflect—is combined and run with multiple decoding parameters:Greedy.Beam search (n=2).Sampling (T=.3).This gives us, in combination, six “CoT generation regimes.” When we evaluate a model, we let it generate, for every example in the test datasets, one chain-of-thought trace for each regime. The generated traces are then plugged into the prompt template shown above, which we use to score the models. We obtain, accordingly, six different numerical scores for accuracy with chain-of-thought (namely one for each regime). 
In the Open CoT Leaderboard, we report (for every model/task) the best marginal accuracy gain achieved under any regime.What are the main take-aways so far?We’re gradually extending the Open CoT Leaderboard by evaluating more and more models, but current results (model count=30) already suggest some interesting insights.Mighty dwarfs: We have been very pleased to see that relatively small (7B) open LLMs are capable of effective, i.e. accuracy-improving, chain-of-thought reasoning, in some cases at a better rate than bigger model. 🎉 For example, a small model like Phi-2 benefits more than the Mixtral model from added CoT traces.Instruction- and chat-finetuning helps: Finetuned models score much better than their corresponding base models. More specifically, finetuning may improve both the baseline accuracy without CoT and the marginal accuracy gains achieved through CoT.Variable and ambiguous effects of CoT: Digging a bit deeper, we see that there is no single preferred or superior CoT generation regime. What works best for one model and one task might not work for another model, or another task. And sometimes CoT reduces accuracy rather than increasing it. We take this as a reminder that finding an implementation of CoT that is universally effective, reliable and robust remains a challenging problem.What are the next steps? – And how to contribute.We’re planning to move ahead in different directions. And contributions to all these efforts are more than welcome. First, we’d love to evaluate your models! You can 📬 submit any open LLMs for evaluation on the Open CoT Leaderboard space, using the Submission tab!Then, we’d love some help on the following coding and data analysis tasks.Carry out in-depth analysis of full evaluation results.For example, a qualitative analysis of the generated CoT traces to check whether they actually point to the correct answer choice. We’ve created a notebook that shows how to access and explore the eval results and reasoning traces which back up the Open Cot Leaderboard. You can build on that and share your own analyses in the corresponding repo (or somewhere else, of course). Feel free to open an issue with suggestions or questions. In case you plan to use the data for research projects and want feedback, just drop a note.Create Open CoT Dashboard.The Open CoT Leaderboard contends with ranking models according to marginal accuracy gains. It doesn’t display the baseline accuracies, the variance, the scores for different CoT generation regimes, properties of the generated reasoning traces (e.g., length), etc. We think it would be super informative to complement the leaderboard with a dashboard (e.g., as an extra tab or a separate HF space) that presents all this info and can be interactively explored by users. In case you’re interested in building such an Open CoT Dashboard (with or without us), just reach out.More CoT chains.We’re pondering implementing further CoT generation regimes. Promising candidates are, for example, self-consistency, tree-of-thought, self-check, or debating. Want to help us with that? Get in touch! (🤫: Why not choose such a project for your master’s or bachelor’s thesis?)More tasks and test datasets.The Open CoT Leaderboard is arguably built on a rather narrow set of benchmarks. Once we have free compute resources, we’d like to include further challenging reasoning tasks. 
We’d be happy to learn which tasks you’d like to see included in the Open CoT Leaderboard.Here’s where we can exchange our ideas and collaborate:For non-technical suggestions and feedback, join the discussion at the leaderboard’s HF space.For technical feedback and questions, open an issue at our GitHub repo.Looking forward to hearing from you! |
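The leaderboard metric itself is simple enough to sketch in a few lines: for each model/task pair, compute the accuracy gain Δ = accuracy with CoT – accuracy w/o CoT for each of the six CoT generation regimes, and report the best marginal gain. The numbers below are made up for illustration:

```python
baseline_accuracy = 0.42  # loglikelihood accuracy without CoT

cot_accuracy_per_regime = {
    ("classic", "greedy"): 0.47,
    ("classic", "beam_n2"): 0.48,
    ("classic", "sampling_t0.3"): 0.46,
    ("reflect", "greedy"): 0.50,
    ("reflect", "beam_n2"): 0.49,
    ("reflect", "sampling_t0.3"): 0.47,
}

# Accuracy gain per regime, then the best marginal gain reported on the leaderboard.
gains = {regime: acc - baseline_accuracy for regime, acc in cot_accuracy_per_regime.items()}
best_regime, best_gain = max(gains.items(), key=lambda kv: kv[1])
print(f"best regime: {best_regime}, accuracy gain = {best_gain:.3f}")  # gain = 0.080
```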
https://huggingface.co/blog/jat | Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent | Quentin Gallouédec, Edward Beeching, Clément ROMAC, Thomas Wolf | April 22, 2024 | IntroductionWe're excited to share Jack of All Trades (JAT), a project that aims to move in the direction of a generalist agent. The project started as an open reproduction of the Gato (Reed et al., 2022) work, which proposed to train a Transformer able to perform both vision-and-language and decision-making tasks. We thus started by building an open version of Gato’s dataset. We then trained multi-modal Transformer models on it, introducing several improvements over Gato for handling sequential data and continuous values.Overall, the project has resulted in: The release of a large number of expert RL agents on a wide variety of tasks.The release of the JAT dataset, the first dataset for generalist agent training. It contains hundreds of thousands of expert trajectories collected with the expert agentsThe release of the JAT model, a transformer-based agent capable of playing video games, controlling a robot to perform a wide variety of tasks, understanding and executing commands in a simple navigation environment and much more!Datasets & expert policiesThe expert policiesRL traditionally involves training policies on single environments. Leveraging these expert policies is a genuine way to build a versatile agent. We selected a wide range of environments, of varying nature and difficulty, including Atari, BabyAI, Meta-World, and MuJoCo. For each of these environments, we train an agent until it reached state-of-the-art performance. (For BabyAI, we use the BabyAI bot instead). The resulting agents are called expert agents, and have been released on the 🤗 Hub. You'll find a list of all agents in the JAT dataset card.The JAT datasetWe release the JAT dataset, the first dataset for generalist agent training. The JAT dataset contains hundreds of thousands of expert trajectories collected with the above-mentioned expert agents. To use this dataset, simply load it like any other dataset from the 🤗 Hub:>>> from datasets import load_dataset>>> dataset = load_dataset("jat-project/jat-dataset", "metaworld-assembly")>>> first_episode = dataset["train"][0]>>> first_episode.keys()dict_keys(['continuous_observations', 'continuous_actions', 'rewards'])>>> len(first_episode["rewards"])500>>> first_episode["continuous_actions"][0][6.459120273590088, 2.2422609329223633, -5.914587020874023, -19.799840927124023]In addition to RL data, we include textual datasets to enable a unique interface for the user. That's why you'll also find subsets for Wikipedia, Oscar, OK-VQA and Conceptual-Captions.JAT agent architectureJAT's architecture is based on a Transformer, using EleutherAI's GPT-Neo implementation. JAT's particularity lies in its embedding mechanism, which has been built to intrinsically handle sequential decision tasks. We interleave observation embeddings with action embeddings, along with the corresponding rewards.Architecture of the JAT network. For sequential decision-making tasks, observations and rewards on the one hand, and actions on the other, are encoded and interleaved. The model generates the next embedding autoregressively with a causal mask, and decodes according to expected modality.Each embedding therefore corresponds either to an observation (associated with the reward), or to an action. But how does JAT encode this information? It depends on the type of data. 
If the data (observation or action) is an image (as is the case for Atari), then JAT uses a CNN. If it's a continuous vector, then JAT uses a linear layer. Finally, if it's a discrete value, JAT uses a linear projection layer. The same principle is used for model output, depending on the type of data to be predicted. Prediction is causal, shifting observations by 1 time step. In this way, the agent must predict the next action from all previous observations and actions.In addition, we thought it would be fun to train our agent to perform NLP and CV tasks. To do this, we also gave the encoder the option of taking text and image data as input. For text data, we tokenize using GPT-2 tokenization strategy, and for images, we use a ViT-type encoder.Given that the modality of the data can change from one environment to another, how does JAT compute the loss? It computes the loss for each modality separately. For images and continuous values, it uses the MSE loss. For discrete values, it uses the cross-entropy loss. The final loss is the average of the losses for each element of the sequence.Wait, does that mean we give equal weight to predicting actions and observations? Actually, no, but we'll talk more about that below.Experiments and resultsWe evaluate JAT on all 157 training tasks. We collect 10 episodes and record the total reward. For ease of reading, we aggregate the results by domain.Aggregated expert normalized scores with 95% Confidence Intervals (CIs) for each RL domain as a function of learning step.If we were to summarize these results in one number, it would be 65.8%, the average performance compared to the JAT expert over the 4 domains. This shows that JAT is capable of mimicking expert performance on a very wide variety of tasks.Let's go into a little more detail:For Atari 57, the agent achieves 14.1% of the expert's score, corresponding to 37.6% of human performance. It exceeds human performance on 21 games.For BabyAI, the agent achieves 99.0% of the expert's score, and fails to exceed 50% of the expert on just 1 task.For Meta-World, the agent reached 65.5% of the expert.For MuJoCo, the agent achieves 84.8% of the expert.Human normalized scores for the JAT agent on the Atari 57 benchmark.What's most impressive is that JAT achieves this performance using a single network for all domains. To take the measure of this performance, let's watch JAT's rendering on a few tasks:Want to try it out? You can! The JAT model is available on the 🤗 Hub!For textual tasks, our model shows rudimentary capabilities, we refer the reader to the paper for more details.The surprising benefits of predicting observationsWhen training an RL agent, the primary goal is to maximize future rewards. But what if we also ask the agent to predict what it will observe in the future? Will this additional task help or hinder the learning process?There are two opposing views on this question. On one hand, learning to predict observations could provide a deeper understanding of the environment, leading to better and faster learning. On the other hand, it could distract the agent from its main goal, resulting in mediocre performance in both observation and action prediction.To settle this debate, we conducted an experiment using a loss function that combines observation loss and action loss, with a weighting parameter κ \kappa κ to balance the two objectives.Aggregate measures with 95% CIs for the study on the influence of observation prediction learning for selected tasks. 
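To make the per-modality losses and the κ-weighted combination described above more concrete, here is a minimal PyTorch sketch. It is purely illustrative and not the actual JAT training code: the tensor shapes, the helper name, and the exact form of the κ weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def jat_style_loss(pred_obs, target_obs, pred_action_logits, target_actions, kappa=0.005):
    """Illustrative per-modality loss with an observation/action trade-off (assumed form)."""
    # MSE on predicted continuous observations (images would use their embeddings)
    obs_loss = F.mse_loss(pred_obs, target_obs)
    # Cross-entropy on predicted discrete actions
    action_loss = F.cross_entropy(
        pred_action_logits.reshape(-1, pred_action_logits.size(-1)),
        target_actions.reshape(-1),
    )
    # kappa balances the auxiliary observation objective against action prediction;
    # ~0.005 is the sweet spot reported below
    return kappa * obs_loss + (1.0 - kappa) * action_loss

# Toy tensors: batch of 2 sequences, 10 timesteps, 8-dim observations, 4 discrete actions
pred_obs, target_obs = torch.randn(2, 10, 8), torch.randn(2, 10, 8)
pred_action_logits = torch.randn(2, 10, 4)
target_actions = torch.randint(0, 4, (2, 10))
print(jat_style_loss(pred_obs, target_obs, pred_action_logits, target_actions))
```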
The results presented cover the selected range of κ values and are based on 100 evaluations per task. Optimal κ \kappa κ selection can significantly improve agent performance.The results were noteworthy. When κ \kappa κ was too high (0.5), the additional objective of predicting observations seemed to hinder the learning process. But when κ \kappa κ was lower, the impact on learning was negligible, and the agent's performance was similar to that obtained when observation prediction was not part of the objective.However, we found a sweet spot around κ=0.005 \kappa= 0.005 κ=0.005, where learning to predict observations actually improved the agent's learning efficiency.Our study suggests that adding observation prediction to the learning process can be beneficial, as long as it's balanced correctly. This finding has important implications for the design of such agents, highlighting the potential value of auxiliary objectives in improving learning efficiency.So, the next time you're training an RL agent, consider asking it to predict what it will observe in the future. It might just lead to better performance and faster learning!ConclusionsIn this work, we introduced JAT, a multi-purpose transformer agent capable of mastering a wide variety of sequential decision-making tasks, and showing rudimentary capabilities in NLP and CV tasks. For all these tasks, JAT uses a single network. Our contributions include the release of expert RL agents, the JAT dataset, and the JAT model. We hope that this work will inspire future research in the field of generalist agents and contribute to the development of more versatile and capable AI systems.What's next? A request for researchWe believe that the JAT project has opened up a new direction for research in the field of generalist agents, and we've only just scratched the surface. Here are some ideas for future work:Improving the data: Although pioneering, the JAT dataset is still in its early stages. The expert trajectories come from only one expert agent per environment which may cause some bias. Although we've done our best to reach state-of-the-art performance, some environments are still challenging. We believe that collecting more data and training more expert agents could help a lot.Use offline RL: The JAT agent is trained using basic Behavioral Cloning. This implies two things: (1) we can't take advantage of sub-optimal trajectories and (2) the JAT agent can't outperform the expert. We've chosen this approach for simplicity, but we believe that using offline RL could really help improve the agent's performance, while not being too complex to implement.Unlock the full potential of a smarter multi-task sampling strategy: Currently, the JAT agent samples data uniformly from all tasks, but this approach may be holding it back. By dynamically adjusting the sampling rate to focus on the most challenging tasks, we can supercharge the agent's learning process and unlock significant performance gains.Links📄 Paper💻 Source code🗂️ JAT dataset🤖 JAT modelCitation@article{gallouedec2024jack,title = {{Jack of All Trades, Master of Some, a Multi-Purpose Transformer Agent}},author = {Gallouédec, Quentin and Beeching, Edward and Romac, Clément and Dellandréa, Emmanuel},journal = {arXiv preprint arXiv:2402.09844},year = {2024},url = {https://arxiv.org/abs/2402.09844}} |
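As a footnote to the JAT results above, here is a tiny sketch of the kind of aggregation behind the quoted 65.8% figure (the average of the per-domain scores expressed as a fraction of the expert's score); the helper is ours, and the per-domain numbers are the ones quoted in the post.

```python
# Average of the per-domain scores, each expressed as a fraction of the expert's score.
domain_scores = {
    "Atari 57":   0.141,
    "BabyAI":     0.990,
    "Meta-World": 0.655,
    "MuJoCo":     0.848,
}
average = sum(domain_scores.values()) / len(domain_scores)
print(f"average performance vs. expert over {len(domain_scores)} domains: {average:.1%}")
# close to the 65.8% quoted above (the per-domain figures are themselves rounded)
```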
https://huggingface.co/blog/llama3 | Welcome Llama 3 - Meta’s new open LLM | Philipp Schmid, Omar Sanseviero, Pedro Cuenca, Younes Belkada, Leandro von Werra | April 18, 2024 | IntroductionMeta’s Llama 3, the next iteration of the open-access Llama family, is now released and available at Hugging Face. It's great to see Meta continuing its commitment to open AI, and we’re excited to fully support the launch with comprehensive integration in the Hugging Face ecosystem.Llama 3 comes in two sizes: 8B for efficient deployment and development on consumer-size GPU, and 70B for large-scale AI native applications. Both come in base and instruction-tuned variants. In addition to the 4 models, a new version of Llama Guard was fine-tuned on Llama 3 8B and is released as Llama Guard 2 (safety fine-tune).We’ve collaborated with Meta to ensure the best integration into the Hugging Face ecosystem. You can find all 5 open-access models (2 base models, 2 fine-tuned & Llama Guard) on the Hub. Among the features and integrations being released, we have:Models on the Hub, with their model cards and licenses🤗 Transformers integrationHugging Chat integration for Meta Llama 3 70bInference Integration into Inference Endpoints, Google Cloud & Amazon SageMakerAn example of fine-tuning Llama 3 8B on a single GPU with 🤗 TRLTable of contentsWhat’s new with Llama 3?Llama 3 evaluationHow to prompt Llama 3DemoUsing 🤗 TransformersInference IntegrationsFine-tuning with 🤗 TRLAdditional ResourcesAcknowledgmentsWhat’s new with Llama 3?The Llama 3 release introduces 4 new open LLM models by Meta based on the Llama 2 architecture. They come in two sizes: 8B and 70B parameters, each with base (pre-trained) and instruct-tuned versions. All the variants can be run on various types of consumer hardware and have a context length of 8K tokens. Meta-Llama-3-8b: Base 8B modelMeta-Llama-3-8b-instruct: Instruct fine-tuned version of the base 8b modelMeta-Llama-3-70b: Base 70B modelMeta-Llama-3-70b-instruct: Instruct fine-tuned version of the base 70b modelIn addition to these 4 base models, Llama Guard 2 was also released. Fine-tuned on Llama 3 8B, it’s the latest iteration in the Llama Guard family. Llama Guard 2, built for production use cases, is designed to classify LLM inputs (prompts) as well as LLM responses in order to detect content that would be considered unsafe in a risk taxonomy.A big change in Llama 3 compared to Llama 2 is the use of a new tokenizer that expands the vocabulary size to 128,256 (from 32K tokens in the previous version). This larger vocabulary can encode text more efficiently (both for input and output) and potentially yield stronger multilingualism. This comes at a cost, though: the embedding input and output matrices are larger, which accounts for a good portion of the parameter count increase of the small model: it goes from 7B in Llama 2 to 8B in Llama 3. In addition, the 8B version of the model now uses Grouped-Query Attention (GQA), which is an efficient representation that should help with longer contexts. The Llama 3 models were trained ~8x more data on over 15 trillion tokens on a new mix of publicly available online data on two clusters with 24,000 GPUs. We don’t know the exact details of the training mix, and we can only guess that bigger and more careful data curation was a big factor in the improved performance. 
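To see the effect of the larger vocabulary described above in practice, a quick comparison along these lines can be run with transformers; this sketch assumes you have been granted access to both gated repositories and are logged in with huggingface-cli login.

```python
from transformers import AutoTokenizer

text = "Hugging Face and Meta released Llama 3 with a much larger tokenizer vocabulary."

# Both repositories are gated: request access on the Hub and log in first.
llama2_tok = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
llama3_tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

print("Llama 2 vocabulary size:", len(llama2_tok))  # 32,000
print("Llama 3 vocabulary size:", len(llama3_tok))  # 128,256
print("Llama 2 token count:", len(llama2_tok(text)["input_ids"]))
print("Llama 3 token count:", len(llama3_tok(text)["input_ids"]))  # usually fewer tokens
```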
Llama 3 Instruct has been optimized for dialogue applications and was trained on over 10 Million human-annotated data samples with combination of supervised fine-tuning (SFT), rejection sampling, proximal policy optimization (PPO), and direct policy optimization (DPO). Regarding the licensing terms, Llama 3 comes with a permissive license that allows redistribution, fine-tuning, and derivative works. The requirement for explicit attribution is new in the Llama 3 license and was not present in Llama 2. Derived models, for instance, need to include "Llama 3" at the beginning of their name, and you also need to mention "Built with Meta Llama 3" in derivative works or services. For full details, please make sure to read the official license.Llama 3 evaluationNote: We are currently evaluating Meta Llama 3 individually and will update this section as soon as we get the results. How to prompt Llama 3The base models have no prompt format. Like other base models, they can be used to continue an input sequence with a plausible continuation or for zero-shot/few-shot inference. They are also a great foundation for fine-tuning your own use cases. The Instruct versions use the following conversation structure:<|begin_of_text|><|start_header_id|>system<|end_header_id|>{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>{{ user_msg_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>{{ model_answer_1 }}<|eot_id|>This format has to be exactly reproduced for effective use. We’ll later show how easy it is to reproduce the instruct prompt with the chat template available in transformers. DemoYou can chat with the Llama 3 70B instruct on Hugging Chat! Check out the link here: https://huggingface.co/chat/models/meta-llama/Meta-Llama-3-70B-instructUsing 🤗 TransformersWith Transformers release 4.40, you can use Llama 3 and leverage all the tools within the Hugging Face ecosystem, such as:training and inference scripts and examplessafe file format (safetensors)integrations with tools such as bitsandbytes (4-bit quantization), PEFT (parameter efficient fine-tuning), and Flash Attention 2utilities and helpers to run generation with the modelmechanisms to export the models to deployIn addition, Llama 3 models are compatible with torch.compile() with CUDA graphs, giving them a ~4x speedup at inference time!To use Llama 3 models with transformers, make sure to use the latest transformers release:pip install -U "transformers==4.40.0" --upgradeThe following snippet shows how to use Llama-3-8b-instruct with transformers. It requires about 16 GB of RAM, which includes consumer GPUs such as 3090 or 4090.import transformersimport torchmodel_id = "meta-llama/Meta-Llama-3-8B-Instruct"pipeline = transformers.pipeline("text-generation",model=model_id,model_kwargs={"torch_dtype": torch.bfloat16},device="cuda",)messages = [{"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},{"role": "user", "content": "Who are you?"},]prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)terminators = [pipeline.tokenizer.eos_token_id,pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")]outputs = pipeline(prompt,max_new_tokens=256,eos_token_id=terminators,do_sample=True,temperature=0.6,top_p=0.9,)print(outputs[0]["generated_text"][len(prompt):])Arrrr, me hearty! Me name be Captain Chat, the scurviest pirate chatbot to ever sail the Seven Seas! Me be here to swab the decks o' yer mind with me trusty responses, savvy? 
I be ready to hoist the Jolly Roger and set sail fer a swashbucklin' good time, matey! So, what be bringin' ye to these fair waters?A couple of details:We loaded the model in bfloat16. This is the type used by the original checkpoint published by Meta, so it’s the recommended way to run to ensure the best precision or to conduct evaluations. For real world use, it’s also safe to use float16, which may be faster depending on your hardware.Assistant responses may end with the special token <|eot_id|>, but we must also stop generation if the regular EOS token is found. We can stop generation early by providing a list of terminators in the eos_token_id parameter.We used the default sampling parameters (temperature and top_p) taken from the original meta codebase. We haven’t had time to conduct extensive tests yet, feel free to explore!You can also automatically quantize the model, loading it in 8-bit or even 4-bit mode. 4-bit loading takes about 7 GB of memory to run, making it compatible with a lot of consumer cards and all the GPUs in Google Colab. This is how you’d load the generation pipeline in 4-bit:pipeline = transformers.pipeline("text-generation",model=model_id,model_kwargs={"torch_dtype": torch.float16,"quantization_config": {"load_in_4bit": True},"low_cpu_mem_usage": True,},)For more details on using the models with transformers, please check the model cards.Inference IntegrationsIn this section, we’ll go through different approaches to running inference of the Llama 3 models. Before using these models, make sure you have requested access to one of the models in the official Meta Llama 3 repositories.Integration with Inference EndpointsYou can deploy Llama 3 on Hugging Face's Inference Endpoints, which uses Text Generation Inference as the backend. Text Generation Inference is a production-ready inference container developed by Hugging Face to enable easy deployment of large language models. It has features such as continuous batching, token streaming, tensor parallelism for fast inference on multiple GPUs, and production-ready logging and tracing.To deploy Llama 3, go to the model page and click on the Deploy -> Inference Endpoints widget. You can learn more about Deploying LLMs with Hugging Face Inference Endpoints in a previous blog post. Inference Endpoints supports Messages API through Text Generation Inference, which allows you to switch from another closed model to an open one by simply changing the URL.from openai import OpenAI# initialize the client but point it to TGIclient = OpenAI(base_url="<ENDPOINT_URL>" + "/v1/", # replace with your endpoint urlapi_key="<HF_API_TOKEN>", # replace with your token)chat_completion = client.chat.completions.create(model="tgi",messages=[{"role": "user", "content": "Why is open-source software important?"},],stream=True,max_tokens=500)# iterate and print streamfor message in chat_completion:print(message.choices[0].delta.content, end="")Integration with Google CloudYou can deploy Llama 3 on Google Cloud through Vertex AI or Google Kubernetes Engine (GKE), using Text Generation Inference. To deploy the Llama 3 model from Hugging Face, go to the model page and click on Deploy -> Google Cloud. This will bring you to the Google Cloud Console, where you can 1-click deploy Llama 3 on Vertex AI or GKE.Integration with Amazon SageMakerYou can deploy and train Llama 3 on Amazon SageMaker through AWS Jumpstart or using the Hugging Face LLM Container. 
To deploy the Llama 3 model from Hugging Face, go to the model page and click on Deploy -> Amazon SageMaker. This will display a code snippet you can copy and execute in your environment. Amazon SageMaker will now create a dedicated inference endpoint you can use to send requests. Fine-tuning with 🤗 TRLTraining LLMs can be technically and computationally challenging. In this section, we’ll look at the tools available in the Hugging Face ecosystem to efficiently train Llama 3 on consumer-size GPUs. Below is an example command to fine-tune Llama 3 on the No Robots dataset. We use 4-bit quantization, and QLoRA and TRL’s SFTTrainer will automatically format the dataset into chatml format. Let’s get started!First, install the latest version of 🤗 TRL. pip install -U transformers trl accelerateIf you just want to chat with the model in the terminal you can use the chat command of the TRL CLI (for more info see the docs):trl chat \--model_name_or_path meta-llama/Meta-Llama-3-8B-Instruct \--device cuda \--eos_tokens "<|end_of_text|>,<|eod_id|>"You can also use TRL CLI to supervise fine-tuning (SFT) Llama 3 on your own, custom dataset. Use the trl sft command and pass your training arguments as CLI argument. Make sure you are logged in and have access the Llama 3 checkpoint. You can do this with huggingface-cli login.trl sft \--model_name_or_path meta-llama/Meta-Llama-3-8B \--dataset_name HuggingFaceH4/no_robots \--learning_rate 0.0001 \--per_device_train_batch_size 4 \--max_seq_length 2048 \--output_dir ./llama3-sft \--use_peft \--load_in_4bit \--log_with wandb \--gradient_checkpointing \--logging_steps 10This will run the fine-tuning from your terminal and takes about 4 hours to train on a single A10G, but can be easily parallelized by tweaking --num_processes to the number of GPUs you have available.Note: You can also replace the CLI arguments with a yaml file. Learn more about the TRL CLI here. Additional ResourcesModels on the HubOpen LLM LeaderboardChat demo on Hugging ChatMeta BlogGoogle Cloud Vertex AI model gardenAcknowledgmentsReleasing such models with support and evaluations in the ecosystem would not be possible without the contributions of many community members, includingClémentine Fourrier, Nathan Habib, and Eleuther Evaluation Harness for LLM evaluationsOlivier Dehaene and Nicolas Patry for Text Generation Inference SupportArthur Zucker and Lysandre Debut for adding Llama 3 support in transformers and tokenizersNathan Sarrazin, Victor Mustar, and Kevin Cathaly for making Llama 3 available in Hugging Chat.Yuvraj Sharma for the Gradio demo.Xenova and Vaibhav Srivastav for debugging and experimentation with quantization and prompt templates.Brigitte Tousignant, Florent Daudens, Morgan Funtowicz, and Simon Brandeis for different items during the launch!Thank you to the whole Meta team, including Samuel Selvan, Eleonora Presani, Hamid Shojanazeri, Azadeh Yazdan, Aiman Farooq, Ruan Silva, Ashley Gabriel, Eissa Jamil, Binh Tang, Matthias Reso, Lovish Madaan, Joe Spisak, and Sergey Edunov.Thank you to the Meta Team for releasing Llama 3 and making it available to the open-source AI community! |
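For readers who prefer the Python API over the trl CLI shown above, here is a rough equivalent sketch. Argument names have moved around across TRL releases and the ChatML-style formatting helper is our own simplification, so treat this as a sketch of the ingredients rather than a pinned recipe.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

model_id = "meta-llama/Meta-Llama-3-8B"  # gated: request access and log in first
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # the base checkpoint ships without a pad token

# Flatten each conversation into a single ChatML-style string (our own simplification)
ds = load_dataset("HuggingFaceH4/no_robots")
train = ds["train"] if "train" in ds else ds["train_sft"]  # split naming has varied

def to_text(example):
    return {"text": "\n".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in example["messages"]
    )}

train = train.map(to_text)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
    ),
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train,
    dataset_text_field="text",
    max_seq_length=2048,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # QLoRA-style adapters
    args=TrainingArguments(
        output_dir="./llama3-sft",
        learning_rate=1e-4,
        per_device_train_batch_size=4,
        gradient_checkpointing=True,
        logging_steps=10,
    ),
)
trainer.train()
```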
https://huggingface.co/blog/leaderboard-medicalllm | The Open Medical-LLM Leaderboard: Benchmarking Large Language Models in Healthcare | Aaditya Ura (looking for PhD), Pasquale Minervini, Clémentine Fourrier | April 19, 2024 | Over the years, Large Language Models (LLMs) have emerged as a groundbreaking technology with immense potential to revolutionize various aspects of healthcare. These models, such as GPT-3, GPT-4 and Med-PaLM 2 have demonstrated remarkable capabilities in understanding and generating human-like text, making them valuable tools for tackling complex medical tasks and improving patient care. They have notably shown promise in various medical applications, such as medical question-answering (QA), dialogue systems, and text generation. Moreover, with the exponential growth of electronic health records (EHRs), medical literature, and patient-generated data, LLMs could help healthcare professionals extract valuable insights and make informed decisions.However, despite the immense potential of Large Language Models (LLMs) in healthcare, there are significant and specific challenges that need to be addressed. When models are used for recreational conversational aspects, errors have little repercussions; this is not the case for uses in the medical domain however, where wrong explanation and answers can have severe consequences for patient care and outcomes. The accuracy and reliability of information provided by language models can be a matter of life or death, as it could potentially affect healthcare decisions, diagnosis, and treatment plans.For example, when given a medical query (see below), GPT-3 incorrectly recommended tetracycline for a pregnant patient, despite correctly explaining its contraindication due to potential harm to the fetus. Acting on this incorrect recommendation could lead to bone growth problems in the baby.To fully utilize the power of LLMs in healthcare, it is crucial to develop and benchmark models using a setup specifically designed for the medical domain. This setup should take into account the unique characteristics and requirements of healthcare data and applications. The development of methods to evaluate the Medical-LLM is not just of academic interest but of practical importance, given the real-life risks they pose in the healthcare sector.The Open Medical-LLM Leaderboard aims to address these challenges and limitations by providing a standardized platform for evaluating and comparing the performance of various large language models on a diverse range of medical tasks and datasets. By offering a comprehensive assessment of each model's medical knowledge and question-answering capabilities, the leaderboard aims to foster the development of more effective and reliable medical LLMs. This platform enables researchers and practitioners to identify the strengths and weaknesses of different approaches, drive further advancements in the field, and ultimately contribute to better patient care and outcomesDatasets, Tasks, and Evaluation SetupThe Medical-LLM Leaderboard includes a variety of tasks, and uses accuracy as its primary evaluation metric (accuracy measures the percentage of correct answers provided by a language model across the various medical QA datasets).MedQAThe MedQA dataset consists of multiple-choice questions from the United States Medical Licensing Examination (USMLE). It covers general medical knowledge and includes 11,450 questions in the development set and 1,273 questions in the test set. 
Each question has 4 or 5 answer choices, and the dataset is designed to assess the medical knowledge and reasoning skills required for medical licensure in the United States.MedMCQAMedMCQA is a large-scale multiple-choice QA dataset derived from Indian medical entrance examinations (AIIMS/NEET). It covers 2.4k healthcare topics and 21 medical subjects, with over 187,000 questions in the development set and 6,100 questions in the test set. Each question has 4 answer choices and is accompanied by an explanation. MedMCQA evaluates a model's general medical knowledge and reasoning capabilities.PubMedQAPubMedQA is a closed-domain QA dataset, In which each question can be answered by looking at an associated context (PubMed abstract). It is consists of 1,000 expert-labeled question-answer pairs. Each question is accompanied by a PubMed abstract as context, and the task is to provide a yes/no/maybe answer based on the information in the abstract. The dataset is split into 500 questions for development and 500 for testing. PubMedQA assesses a model's ability to comprehend and reason over scientific biomedical literature.MMLU Subsets (Medicine and Biology)The MMLU benchmark (Measuring Massive Multitask Language Understanding) includes multiple-choice questions from various domains. For the Open Medical-LLM Leaderboard, we focus on the subsets most relevant to medical knowledge:Clinical Knowledge: 265 questions assessing clinical knowledge and decision-making skills.Medical Genetics: 100 questions covering topics related to medical genetics.Anatomy: 135 questions evaluating the knowledge of human anatomy.Professional Medicine: 272 questions assessing knowledge required for medical professionals.College Biology: 144 questions covering college-level biology concepts.College Medicine: 173 questions assessing college-level medical knowledge.Each MMLU subset consists of multiple-choice questions with 4 answer options and is designed to evaluate a model's understanding of specific medical and biological domains.The Open Medical-LLM Leaderboard offers a robust assessment of a model's performance across various aspects of medical knowledge and reasoning.Insights and AnalysisThe Open Medical-LLM Leaderboard evaluates the performance of various large language models (LLMs) on a diverse set of medical question-answering tasks. Here are our key findings:Commercial models like GPT-4-base and Med-PaLM-2 consistently achieve high accuracy scores across various medical datasets, demonstrating strong performance in different medical domains.Open-source models, such as Starling-LM-7B, gemma-7b, Mistral-7B-v0.1, and Hermes-2-Pro-Mistral-7B, show competitive performance on certain datasets and tasks, despite having smaller sizes of around 7 billion parameters.Both commercial and open-source models perform well on tasks like comprehension and reasoning over scientific biomedical literature (PubMedQA) and applying clinical knowledge and decision-making skills (MMLU Clinical Knowledge subset).Google's model, Gemini Pro demonstrates strong performance in various medical domains, particularly excelling in data-intensive and procedural tasks like Biostatistics, Cell Biology, and Obstetrics & Gynecology. However, it shows moderate to low performance in critical areas such as Anatomy, Cardiology, and Dermatology, revealing gaps that require further refinement for comprehensive medical application.Submitting Your Model for EvaluationTo submit your model for evaluation on the Open Medical-LLM Leaderboard, follow these steps:1. 
Convert Model Weights to Safetensors FormatFirst, convert your model weights to the safetensors format. Safetensors is a new format for storing weights that is safer and faster to load and use. Converting your model to this format will also allow the leaderboard to display the number of parameters of your model in the main table.2. Ensure Compatibility with AutoClassesBefore submitting your model, make sure you can load your model and tokenizer using the AutoClasses from the Transformers library. Use the following code snippet to test the compatibility:from transformers import AutoConfig, AutoModel, AutoTokenizerconfig = AutoConfig.from_pretrained(MODEL_HUB_ID)model = AutoModel.from_pretrained("your model name")tokenizer = AutoTokenizer.from_pretrained("your model name")If this step fails, follow the error messages to debug your model before submitting it. It's likely that your model has been improperly uploaded.3. Make Your Model PublicEnsure that your model is publicly accessible. The leaderboard cannot evaluate models that are private or require special access permissions.4. Remote Code Execution (Coming Soon)Currently, the Open Medical-LLM Leaderboard does not support models that require use_remote_code=True. However, the leaderboard team is actively working on adding this feature, so stay tuned for updates.5. Submit Your Model via the Leaderboard WebsiteOnce your model is in the safetensors format, compatible with AutoClasses, and publicly accessible, you can submit it for evaluation using the "Submit here!" panel on the Open Medical-LLM Leaderboard website. Fill out the required information, such as the model name, description, and any additional details, and click the submit button.The leaderboard team will process your submission and evaluate your model's performance on the various medical QA datasets. Once the evaluation is complete, your model's scores will be added to the leaderboard, allowing you to compare its performance with other submitted models.What's next? Expanding the Open Medical-LLM LeaderboardThe Open Medical-LLM Leaderboard is committed to expanding and adapting to meet the evolving needs of the research community and healthcare industry. Key areas of focus include:Incorporating a wider range of medical datasets covering diverse aspects of healthcare, such as radiology, pathology, and genomics, through collaboration with researchers, healthcare organizations, and industry partners.Enhancing evaluation metrics and reporting capabilities by exploring additional performance measures beyond accuracy, such as Pointwise score and domain-specific metrics that capture the unique requirements of medical applications.A few efforts are already underway in this direction. If you are interested in collaborating on the next benchmark we are planning to propose, please join our Discord community to learn more and get involved. We would love to collaborate and brainstorm ideas!If you're passionate about the intersection of AI and healthcare, building models for the healthcare domain, and care about safety and hallucination issues for medical LLMs, we invite you to join our vibrant community on Discord.Credits and AcknowledgmentsSpecial thanks to all the people who helped make this possible, including Clémentine Fourrier and the Hugging Face team. I would like to thank Andreas Motzfeldt, Aryo Gema, & Logesh Kumar Umapathi for their discussion and feedback on the leaderboard during development. Sincere gratitude to Prof. 
Pasquale Minervini for his time, technical assistance, and for providing GPU support from the University of Edinburgh.About Open Life Science AIOpen Life Science AI is a project that aims to revolutionize the application of artificial intelligence in the life science and healthcare domains. It serves as a central hub for a list of medical models, datasets, benchmarks, and for tracking conference deadlines, fostering collaboration, innovation, and progress in the field of AI-assisted healthcare. We strive to establish Open Life Science AI as the premier destination for anyone interested in the intersection of AI and healthcare. We provide a platform for researchers, clinicians, policymakers, and industry experts to engage in dialogues, share insights, and explore the latest developments in the field.CitationIf you find our evaluations useful, please consider citing our work: Medical-LLM Leaderboard @misc{Medical-LLM Leaderboard,author = {Ankit Pal, Pasquale Minervini, Andreas Geert Motzfeldt, Aryo Pradipta Gema and Beatrice Alex},title = {openlifescienceai/open_medical_llm_leaderboard},year = {2024},publisher = {Hugging Face},howpublished = "\url{https://huggingface.co/spaces/openlifescienceai/open_medical_llm_leaderboard}"} |
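To illustrate the accuracy metric that the Open Medical-LLM Leaderboard above relies on for its multiple-choice datasets, here is a tiny sketch; the gold answers and model picks are invented for illustration.

```python
# Minimal sketch of the accuracy metric: one chosen option per question,
# scored by exact match against the reference. All values below are invented.
def accuracy(predictions, references):
    assert len(predictions) == len(references)
    return sum(p == r for p, r in zip(predictions, references)) / len(references)

gold        = ["B", "D", "A", "C"]  # reference answer letters (4-option questions)
model_picks = ["B", "D", "C", "C"]  # the model's chosen options
print(f"accuracy = {accuracy(model_picks, gold):.2%}")  # 75.00%
```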
https://huggingface.co/blog/gradio-reload | AI Apps in a Flash with Gradio's Reload Mode | Freddy Boulton | April 16, 2024 | In this post, I will show you how you can build a functional AI application quickly with Gradio's reload mode. But before we get to that, I want to explain what reload mode does and why Gradio implements its own auto-reloading logic. If you are already familiar with Gradio and want to get to building, please skip to the third section.What Does Reload Mode Do?To put it simply, it pulls in the latest changes from your source files without restarting the Gradio server. If that does not make sense yet, please continue reading.Gradio is a popular Python library for creating interactive machine learning apps.Gradio developers declare their UI layout entirely in Python and add some Python logic that triggers whenever a UI event happens. It's easy to learn if you know basic Python. Check out this quickstart if you are not familiar with Gradio yet.Gradio applications are launched like any other Python script, just run python app.py (the file with the Gradio code can be called anything). This will start an HTTP server that renders your app's UI and responds to user actions. If you want to make changes to your app, you stop the server (typically with Ctrl + C), edit your source file, and then re-run the script.Having to stop and relaunch the server can introduce a lot of latency while you are developing your app. It would be better if there was a way to pull in the latest code changes automatically so you can test new ideas instantly.That's exactly what Gradio's reload mode does. Simply run gradio app.py instead of python app.py to launch your app in reload mode!Why Did Gradio Build Its Own Reloader?Gradio applications are run with uvicorn, an asynchronous server for Python web frameworks. Uvicorn already offers auto-reloading but Gradio implements its own logic for the following reasons:Faster Reloading: Uvicorn's auto-reload will shut down the server and spin it back up. This is faster than doing it by hand, but it's too slow for developing a Gradio app. Gradio developers build their UI in Python so they should see how ther UI looks as soon as a change is made. This is standard in the Javascript ecosystem but it's new to Python. Selective Reloading: Gradio applications are AI applications. This means they typically load an AI model into memory or connect to a datastore like a vector database. Relaunching the server during development will mean reloading that model or reconnecting to that database, which introduces too much latency between development cycles. To fix this issue, Gradio introduces an if gr.NO_RELOAD: code-block that you can use to mark code that should not be reloaded. This is only possible because Gradio implements its own reloading logic.I will now show you how you can use Gradio reload mode to quickly build an AI App. Building a Document Analyzer ApplicationOur application will allow users to upload pictures of documents and ask questions about them. They will receive answers in natural language. We will use the free Hugging Face Inference API so you should be able to follow along from your computer. No GPU required!To get started, let's create a barebones gr.Interface. 
Enter the following code in a file called app.py and launch it in reload mode with gradio app.py:import gradio as grdemo = gr.Interface(lambda x: x, "text", "text")if __name__ == "__main__":demo.launch()This creates the following simple UI.Since I want to let users upload image files along with their questions, I will switch the input component to be a gr.MultimodalTextbox(). Notice how the UI updates instantly!This UI works but, I think it would be better if the input textbox was below the output textbox. I can do this with the Blocks API. I'm also customizing the input textbox by adding a placeholder text to guide users.Now that I'm satisfied with the UI, I will start implementing the logic of the chat_fn.Since I'll be using Hugging Face's Inference API, I will import the InferenceClient from the huggingface_hub package (it comes pre-installed with Gradio). I'll be using the impira/layouylm-document-qa model to answer the user's question. I will then use the HuggingFaceH4/zephyr-7b-beta LLM to provide a response in natural language.from huggingface_hub import InferenceClientclient = InferenceClient()def chat_fn(multimodal_message):question = multimodal_message["text"]image = multimodal_message["files"][0]answer = client.document_question_answering(image=image, question=question, model="impira/layoutlm-document-qa")answer = [{"answer": a.answer, "confidence": a.score} for a in answer]user_message = {"role": "user", "content": f"Question: {question}, answer: {answer}"}message = ""for token in client.chat_completion(messages=[user_message],max_tokens=200, stream=True,model="HuggingFaceH4/zephyr-7b-beta"):if token.choices[0].finish_reason is not None:continuemessage += token.choices[0].delta.contentyield messageHere is our demo in action!I will also provide a system message so that the LLM keeps answers short and doesn't include the raw confidence scores. To avoid re-instantiating the InferenceClient on every change, I will place it inside a no reload code block.if gr.NO_RELOAD:client = InferenceClient()system_message = {"role": "system","content": """You are a helpful assistant.You will be given a question and a set of answers along with a confidence score between 0 and 1 for each answer.You job is to turn this information into a short, coherent response.For example:Question: "Who is being invoiced?", answer: {"answer": "John Doe", "confidence": 0.98}You should respond with something like:With a high degree of confidence, I can say John Doe is being invoiced.Question: "What is the invoice total?", answer: [{"answer": "154.08", "confidence": 0.75}, {"answer": "155", "confidence": 0.25}You should respond with something like:I believe the invoice total is $154.08 but it can also be $155."""}Here is our demo in action now! The system message really helped keep the bot's answers short and free of long decimals.As a final improvement, I will add a markdown header to the page:ConclusionIn this post, I developed a working AI application with Gradio and the Hugging Face Inference API. When I started developing this, I didn't know what the final product would look like so having the UI and server logic reload instanty let me iterate on different ideas very quickly. It took me about an hour to develop this entire app!If you'd like to see the entire code for this demo, please check out this space! |
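For reference, here is roughly how the final layout described in the post above can be assembled. The full demo code lives in the linked Space, so treat this as an approximate reconstruction rather than the exact app; the chat_fn body is reduced to a stub.

```python
import gradio as gr
from huggingface_hub import InferenceClient

if gr.NO_RELOAD:
    # Instantiated once so reload mode does not recreate the client on every edit
    client = InferenceClient()

def chat_fn(multimodal_message):
    # Stub standing in for the document QA + LLM summarization logic shown above
    yield f"You asked: {multimodal_message['text']}"

with gr.Blocks() as demo:
    gr.Markdown("# 🧾 Document Analyzer")
    response = gr.Textbox(label="Response", lines=5)
    chat = gr.MultimodalTextbox(
        file_types=["image"],
        placeholder="Upload an image of a document and ask a question",
    )
    chat.submit(chat_fn, inputs=chat, outputs=response)

if __name__ == "__main__":
    demo.launch()
```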
https://huggingface.co/blog/leaderboard-livecodebench | Introducing the LiveCodeBench Leaderboard - Holistic and Contamination-Free Evaluation of Code LLMs | Naman Jain, Alex Gu, Tianjun Zhang, Wen-Ding Li, King Han, Fanjia Yan, Clémentine Fourrier | April 16, 2024 | We are excited to introduce the LiveCodeBench leaderboard, based on LiveCodeBench, a new benchmark developed by researchers from UC Berkeley, MIT, and Cornell for measuring LLMs’ code generation capabilities. LiveCodeBench collects coding problems over time from various coding contest platforms, annotating problems with their release dates. Annotations are used to evaluate models on problem sets released in different time windows, allowing an “evaluation over time” strategy that helps detect and prevent contamination. In addition to the usual code generation task, LiveCodeBench also assesses self-repair, test output prediction, and code execution, thus providing a more holistic view of coding capabilities required for the next generation of AI programming agents.LiveCodeBench Scenarios and Evaluation LiveCodeBench problems are curated from coding competition platforms: LeetCode, AtCoder, and CodeForces. These websites periodically host contests containing problems that assess the coding and problem-solving skills of participants. Problems consist of a natural language problem statement along with example input-output examples, and the goal is to write a program that passes a set of hidden tests. Thousands of participants engage in the competitions, which ensures that the problems are vetted for clarity and correctness.LiveCodeBench uses the collected problems for building its four coding scenariosCode Generation. The model is given a problem statement, which includes a natural language description and example tests (input-output pairs), and is tasked with generating a correct solution. Evaluation is based on the functional correctness of the generated code, which is determined using a set of test cases.Self Repair. The model is given a problem statement and generates a candidate program, similar to the code generation scenario above. In case of a mistake, the model is provided with error feedback (either an exception message or a failing test case) and is tasked with generating a fix. Evaluation is performed using the same functional correctness as above.Code Execution. The model is provided a program snippet consisting of a function (f) along with a test input, and is tasked with predicting the output of the program on the input test case. Evaluation is based on an execution-based correctness metric: the model's output is considered correct if the assertion assert f(input) == generated_output passes.Test Output Prediction. The model is given the problem statement along with a test case input and is tasked with generating the expected output for the input. Tests are generated solely from problem statements, without the need for the function’s implementation, and outputs are evaluated using an exact match checker.For each scenario, evaluation is performed using the Pass@1 metric. The metric captures the probability of generating a correct answer and is computed using the ratio of the count of correct answers over the count of total attempts, following Pass@1 = total_correct / total_attempts.Preventing Benchmark Contamination Contamination is one of the major bottlenecks in current LLM evaluations. 
Even within LLM coding evaluations, there have been documented reports of contamination and overfitting on standard benchmarks like HumanEval ([1] and [2]). For this reason, we annotate problems with release dates in LiveCodeBench: that way, for new models with a training-cutoff date D, we can compute scores on problems released after D to measure their generalization on unseen problems. LiveCodeBench formalizes this with a "scrolling over time" feature that allows you to select problems within a specific time window. You can try it out in the leaderboard above!Findings We find that: while model performances are correlated across different scenarios, the relative performances and orderings can vary across the 4 scenarios we use. GPT-4-Turbo is the best-performing model across most scenarios. Furthermore, its margin grows on self-repair tasks, highlighting its capability to take compiler feedback. Claude-3-Opus overtakes GPT-4-Turbo in the test output prediction scenario, highlighting stronger natural language reasoning capabilities. Mistral-Large performs considerably better on natural language reasoning tasks like test output prediction and code execution.How to Submit? To evaluate your code models on LiveCodeBench, you can follow these steps. Environment Setup: You can use conda to create a new environment, and install LiveCodeBench: git clone https://github.com/LiveCodeBench/LiveCodeBench.git; cd LiveCodeBench; pip install poetry; poetry install. For evaluating new Hugging Face models, you can easily evaluate the model using python -m lcb_runner.runner.main --model {model_name} --scenario {scenario_name} for different scenarios. For new model families, we have implemented an extensible framework and you can support new models by modifying lcb_runner/lm_styles.py and lcb_runner/prompts as described in the GitHub README. Once your results are generated, you can submit them by filling out this form.How to contribute Finally, we are looking for collaborators and suggestions for LiveCodeBench. The dataset and code are available online, so please reach out by submitting an issue or by mail. |
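To make LiveCodeBench's "evaluation over time" idea above concrete, here is a small sketch of filtering problems by release date and computing Pass@1 on the uncontaminated subset; the problem records, dates, and counts are made up for illustration.

```python
from datetime import date

# Made-up problem records (id, release date, correct completions / attempts)
problems = [
    {"id": "lc-3101",   "release_date": date(2023, 9, 10), "correct": 4, "attempts": 10},
    {"id": "cf-1890B",  "release_date": date(2024, 1, 22), "correct": 7, "attempts": 10},
    {"id": "ac-abc339", "release_date": date(2024, 2, 3),  "correct": 2, "attempts": 10},
]

training_cutoff = date(2023, 12, 1)  # hypothetical model training-cutoff date D
fresh = [p for p in problems if p["release_date"] > training_cutoff]

total_correct = sum(p["correct"] for p in fresh)
total_attempts = sum(p["attempts"] for p in fresh)
print(f"Pass@1 on {len(fresh)} post-cutoff problems: {total_correct / total_attempts:.2f}")
```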
https://huggingface.co/blog/fhe-endpoints | Running Privacy-Preserving Inferences on Hugging Face Endpoints | Benoit Chevallier-Mames | April 16, 2024 | This is a guest blog post by the Zama team. Zama is an open source cryptography company building state-of-the-art FHE solutions for blockchain and AI.Eighteen months ago, Zama started Concrete ML, a privacy-preserving ML framework with bindings to traditional ML frameworks such as scikit-learn, ONNX, PyTorch, and TensorFlow. To ensure privacy for users' data, Zama uses Fully Homomorphic Encryption (FHE), a cryptographic tool that allows to make direct computations over encrypted data, without ever knowing the private key.From the start, we wanted to pre-compile some FHE-friendly networks and make them available somewhere on the internet, allowing users to use them trivially. We are ready today! And not in a random place on the internet, but directly on Hugging Face.More precisely, we use Hugging Face Endpoints and custom inference handlers, to be able to store our Concrete ML models and let users deploy on HF machines in one click. At the end of this blog post, you will understand how to use pre-compiled models and how to prepare yours. This blog can also be considered as another tutorial for custom inference handlers.Deploying a pre-compiled modelLet's start with deploying an FHE-friendly model (prepared by Zama or third parties - see Preparing your pre-compiled model section below for learning how to prepare yours).First, look for the model you want to deploy: We have pre-compiled a bunch of models on Zama's HF page (or you can find them with tags). Let's suppose you have chosen concrete-ml-encrypted-decisiontree: As explained in the description, this pre-compiled model allows you to detect spam without looking at the message content in the clear.Like with any other model available on the Hugging Face platform, select Deploy and then Inference Endpoint (dedicated):Inference Endpoint (dedicated)Next, choose the Endpoint name or the region, and most importantly, the CPU (Concrete ML models do not use GPUs for now; we are working on it) as well as the best machine available - in the example below we chose eight vCPU. Now click on Create Endpoint and wait for the initialization to finish.Create EndpointAfter a few seconds, the Endpoint is deployed, and your privacy-preserving model is ready to operate.Endpoint is created: Don’t forget to delete the Endpoint (or at least pause it) when you are no longer using it, or else it will cost more than anticipated.Using the EndpointInstalling the client sideThe goal is not only to deploy your Endpoint but also to let your users play with it. For that, they need to clone the repository on their computer. This is done by selecting Clone Repository, in the dropdown menu:Clone RepositoryThey will be given a small command line that they can run in their terminal:git clone https://huggingface.co/zama-fhe/concrete-ml-encrypted-decisiontreeOnce the command is done, they go to the concrete-ml-encrypted-decisiontree directory and open play_with_endpoint.py with their editor. Here, they will find the line with API_URL = … and should replace it with the new URL of the Endpoint created in the previous section.API_URL = "https://vtx9w974oxrq54ff.us-east-1.aws.endpoints.huggingface.cloud"Of course, fill it in with with your Entrypoint’s URL. 
Also, define an access token and store it in an environment variable:export HF_TOKEN=[your token hf_XX..XX]Lastly, your user machines need to have Concrete ML installed locally: Make a virtual environment, source it, and install the necessary dependencies:python3.10 -m venv .venvsource .venv/bin/activatepip install -U setuptools pip wheelpip install -r requirements.txtRemark that we currently force the use of Python 3.10 (which is also the default python version used in Hugging Face Endpoints). This is because our development files currently depend on the Python version. We are working on making them independent. This should be available in a further version.Running inferencesNow, your users can run inference on the Endpoint launching the script:python play_with_endpoint.pyIt should generate some logs similar to the following:Sending 0-th piece of the key (remaining size is 71984.14 kbytes)Storing the key in the database under uid=3307376977Sending 1-th piece of the key (remaining size is 0.02 kbytes)Size of the payload: 0.23 kilobytesfor 0-th input, prediction=0 with expected 0 in 3.242 secondsfor 1-th input, prediction=0 with expected 0 in 3.612 secondsfor 2-th input, prediction=0 with expected 0 in 4.765 seconds(...)for 688-th input, prediction=0 with expected 1 in 3.176 secondsfor 689-th input, prediction=1 with expected 1 in 4.027 secondsfor 690-th input, prediction=0 with expected 0 in 4.329 secondsAccuracy on 691 samples is 0.8958031837916064Total time: 2873.860 secondsDuration per inference: 4.123 secondsAdapting to your application or needsIf you edit play_with_endpoint.py, you'll see that we iterate over different samples of the test dataset and run encrypted inferences directly on the Endpoint.for i in range(nb_samples):# Quantize the input and encrypt itencrypted_inputs = fhemodel_client.quantize_encrypt_serialize(X_test[i].reshape(1, -1))# Prepare the payloadpayload = {"inputs": "fake","encrypted_inputs": to_json(encrypted_inputs),"method": "inference","uid": uid,}if is_first:print(f"Size of the payload: {sys.getsizeof(payload) / 1024:.2f} kilobytes")is_first = False# Run the inference on HF serversduration -= time.time()duration_inference = -time.time()encrypted_prediction = query(payload)duration += time.time()duration_inference += time.time()encrypted_prediction = from_json(encrypted_prediction)# Decrypt the result and dequantizeprediction_proba = fhemodel_client.deserialize_decrypt_dequantize(encrypted_prediction)[0]prediction = np.argmax(prediction_proba)if verbose:print(f"for {i}-th input, {prediction=} with expected {Y_test[i]} in {duration_inference:.3f} seconds")# Measure accuracynb_good += Y_test[i] == predictionOf course, this is just an example of the Entrypoint's usage. Developers are encouraged to adapt this example to their own use-case or application.Under the hoodPlease note that all of this is done thanks to the flexibility of custom handlers, and we express our gratitude to the Hugging Face developers for offering such flexibility. The mechanism is defined in handler.py. As explained in the Hugging Face documentation, you can define the __call__ method of EndpointHandler pretty much as you want: In our case, we have defined a method parameter, which can be save_key (to save FHE evaluation keys), append_key (to save FHE evaluation keys piece by piece if the key is too large to be sent in one single call) and finally inference (to run FHE inferences). 
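To give a feel for what such a dispatching handler can look like, here is a simplified sketch. The actual handler.py shipped in the Zama repositories differs in its details (key serialization, the Concrete ML server-side call, error handling), and the payload field names used for the key upload below are assumptions.

```python
def run_fhe_circuit(encrypted_inputs, evaluation_key):
    """Stand-in for the Concrete ML server-side execution (not the real API)."""
    return encrypted_inputs  # echo back in this sketch

class EndpointHandler:
    def __init__(self, path: str = ""):
        self.keys = {}  # evaluation keys kept in RAM, indexed by client uid

    def __call__(self, data: dict):
        method = data.get("method")
        uid = data.get("uid")

        if method == "save_key":
            self.keys[uid] = data["evaluation_key"]  # assumed field name
            return {"uid": uid}
        if method == "append_key":
            # large keys are sent piece by piece and concatenated server side
            self.keys[uid] = self.keys.get(uid, b"") + data["evaluation_key"]
            return {"uid": uid}
        if method == "inference":
            encrypted_prediction = run_fhe_circuit(
                data["encrypted_inputs"], self.keys[uid]
            )
            return encrypted_prediction
        raise ValueError(f"Unknown method: {method}")
```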
These methods are used to set the evaluation key once and then run all the inferences, one by one, as seen in play_with_endpoint.py.LimitsOne can remark, however, that keys are stored in the RAM of the Endpoint, which is not convenient for a production environment: At each restart, the keys are lost and need to be re-sent. Plus, when you have several machines to handle massive traffic, this RAM is not shared between the machines. Finally, the available CPU machines only provide eight vCPUs at most for Endpoints, which could be a limit for high-load applications.Preparing your pre-compiled modelNow that you know how easy it is to deploy a pre-compiled model, you may want to prepare yours. For this, you can fork one of the repositories we have prepared. All the model categories supported by Concrete ML (linear models, tree-based models, built-in MLP, PyTorch models) have at least one example that can be used as a template for new pre-compiled models.Then, edit creating_models.py, and change the ML task to be the one you want to tackle in your pre-compiled model: For example, if you started with concrete-ml-encrypted-decisiontree, change the dataset and the model kind.As explained earlier, you must have installed Concrete ML to prepare your pre-compiled model. Remark that you may have to use the same Python version that Hugging Face uses by default (3.10 when this blog was written), or people may need to use a container with your Python version during deployment.Now you can launch python creating_models.py. This will train the model and create the necessary development files (client.zip, server.zip, and versions.json) in the compiled_model directory. As explained in the documentation, these files contain your pre-compiled model. If you have any issues, you can get support on the fhe.org discord.The last step is to modify play_with_endpoint.py to also deal with the same ML task as in creating_models.py: Set the dataset accordingly.Now, you can save this directory with the compiled_model directory and files, as well as your modifications in creating_models.py and play_with_endpoint.py on Hugging Face models. Certainly, you will need to run some tests and make slight adjustments for it to work. Do not forget to add a concrete-ml and FHE tag, such that your pre-compiled model appears easily in searches.Pre-compiled models available todayFor now, we have prepared a few pre-compiled models as examples, hoping the community will extend this soon. Pre-compiled models can be found by searching for the concrete-ml or FHE tags. Model kind, dataset, and execution time on an HF Endpoint: Logistic Regression (Synthetic): 0.4 sec; DecisionTree (Spam): 2.0 sec; QNN (Iris): 3.7 sec; CNN (MNIST): 24 sec. Keep in mind that there's a limited set of configuration options in Hugging Face for CPU-backed Endpoints (up to 8 vCPU with 16 GB of RAM today). Depending on your production requirements and model characteristics, execution times could be faster on more powerful cloud instances. Hopefully, more powerful machines will soon be available on Hugging Face Endpoints to improve these timings.Additional resourcesCheck out Zama libraries Concrete and Concrete-ML and start using FHE in your own applications.Check out Zama's Hugging Face profile to read more blog posts and try practical FHE demos.Check out @zama_fhe on twitter to get our latest updates.Conclusion and next stepsIn this blog post, we have shown that custom Endpoints are pretty easy yet powerful to use. 
What we do in Concrete ML is pretty different from the regular workflow of ML practitioners, but we are still able to accommodate the custom Endpoints to deal with most of our needs. Kudos to Hugging Face engineers for developing such a generic solution. We explained how: Developers can create their own pre-compiled models and make them available on Hugging Face models. Companies can deploy developers' pre-compiled models and make them available to their users via HF Endpoints. Final users can use these Endpoints to run their ML tasks over encrypted data. To go further, it would be useful to have more powerful machines available on Hugging Face Endpoints to make inferences faster. Also, we could imagine that Concrete ML becomes more integrated into Hugging Face's interface and has a Privacy-Preserving Inference Endpoint button, simplifying developers' lives even more. Finally, for integration across several server machines, it could be helpful to have a way to share a state between machines and keep this state non-volatile (FHE inference keys would be stored there). |
https://huggingface.co/blog/ryght-case-study | Ryght’s Journey to Empower Healthcare and Life Sciences with Expert Support from Hugging Face | Andrew Reed, Johnny Crupi | April 16, 2024 | This is a guest blog post by the Ryght team. Who is Ryght? Ryght is building an enterprise-grade generative AI platform tailored for the healthcare and life sciences sectors. Today is their official launch of Ryght Preview, now publicly available for all.Life science companies are amassing a wealth of data from diverse sources (lab data, EMR, genomics, claims, pharmacy, clinical, etc.), but analysis of that data is archaic, requiring large teams for everything from simple queries to developing useful ML models. There is huge demand for actionable knowledge to drive drug development, clinical trials, and commercial activity, and the rise of precision medicine is only accelerating this demand.Ryght’s goal is to empower life science professionals to get the insights they need swiftly and securely. To do so, they’re building a SaaS platform that offers industry-specific AI copilots and custom built solutions for professionals and organizations to accelerate their research, analysis, and documentation across a variety of complex data sources.Recognizing how fast paced and ever changing the AI landscape is, Ryght sought out Hugging Face as a technical advisory partner early in their journey via the Expert Support Program. Overcoming challenges, together Our partnership with Hugging Face's expert support has played a crucial role in expediting the development of our generative AI platform. The rapidly evolving landscape of AI has the potential to revolutionize our industry, and Hugging Face’s highly performant and enterprise-ready Text Generation Inference (TGI) and Text Embeddings Inference (TEI) services are game changers in their own right. - Johnny Crupi, CTO at RyghtRyght faced several challenges as they set out to build their generative AI platform. 1. The need to quickly upskill a team and stay informed in a highly dynamic environment With AI and ML technologies advancing so quickly, ensuring that the team remains abreast of the latest techniques, tools, and best practices is critical. This continuous learning curve is steep and requires a concerted effort to stay informed.Having access to Hugging Face’s team of experts who operate at the center of the AI ecosystem helps Ryght keep up with the latest developments and models that are relevant to their domain. This is achieved through open, asynchronous channels of communication, regular advisory meetings, and dedicated technical workshops. 2. Identifying the most [cost] effective ML approaches amidst the noisy sea of options The AI field is bustling with innovation, leading to an abundance of tools, libraries, models, and methodologies. For a startup like Ryght, it's imperative to cut through this noise and identify which ML strategies are most applicable to their unique use cases in the life sciences sector. This involves not just understanding the current state of the art, but also looking ahead to which technologies will remain relevant and scalable for the future.Hugging Face serves as a partner to Ryght’s technical team – assisting in solution design, proof-of-concept development, and production workload optimization. This includes tailored recommendations on libraries, frameworks, and models best fit for Ryght’s specific needs, along with demonstrable examples of how to use them. 
This guidance ultimately streamlines the decision-making process and reduces the time to development. 3. Requirement to develop performant solutions that emphasize security, privacy, and flexibility Given the focus on enterprise-level solutions, Ryght prioritizes security, privacy, and governance. This necessitates a flexible architecture capable of interfacing with various large language models (LLMs) in real-time, a crucial feature for their life science-specific content generation and query handling.Understanding the rapid innovation within the open-source community, especially regarding medical LLMs, they embraced an architectural approach that supports "pluggable" LLMs. This design choice allows them to seamlessly evaluate and integrate new or specialized medical LLMs as they emerge.In Ryght’s platform, each LLM is registered and linked to one or more, customer-specific inference endpoints. This setup not only secures the connections, but also provides the ability to switch between different LLMs, offering unparalleled flexibility – a design choice that is made possible by the adoption of Hugging Face’s Text Generation Inference (TGI) and Inference Endpoints.In addition to TGI, Ryght has also integrated Text Embeddings Inference (TEI) into their ML platform. Serving open-source embedding models with TEI marks a significant improvement over relying solely on proprietary embeddings – enabling Ryght to benefit from faster inference speeds, the elimination of rate limit worries, and the flexibility to serve their own fine-tuned models, tailored to the unique requirements of the life sciences domain.Catering to multiple customers simultaneously, their system is designed to handle high volumes of concurrent requests while maintaining low latency. Their embedding and inference services go beyond simple model invocation and encompass a suite of services adept at batching, queuing, and distributing model processing across GPUs. This infrastructure is critical to avoiding performance bottlenecks and ensuring users do not experience delays, thereby maintaining an optimal system response time. Conclusion Ryght's strategic partnership with and integration of Hugging Face's ML services underscores their commitment to delivering cutting-edge solutions in healthcare and life sciences. By embracing a flexible, secure, and scalable architecture, they ensure that their platform remains at the forefront of innovation, offering their clients unparalleled service and expertise in navigating the complexities of modern medical domains. Sign up for Ryght Preview, now publicly available to life sciences knowledge workers as a free, secure platform with frictionless onboarding. Ryght’s copilot library consists of a diverse collection of tools to accelerate information retrieval, synthesis and structuring of complex unstructured data, and document builders, taking what might have taken weeks to complete down to days or hours. To inquire about custom building and collaborations, contact their team of AI experts to discuss Ryght for Enterprise.If you’re interested to know more about Hugging Face Expert Support, please contact us here - our team will reach out to discuss your requirements! |
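To illustrate the "pluggable" LLM architecture described above, routing requests across several TGI-backed Inference Endpoints can be as simple as keeping a registry of endpoint URLs and instantiating a client per model. The endpoint URLs and model keys below are hypothetical placeholders for illustration, not Ryght's actual configuration.

from huggingface_hub import InferenceClient

# Hypothetical registry of customer-specific, TGI-backed endpoints
LLM_REGISTRY = {
    "general-medical": "https://my-org--medical-llm.endpoints.huggingface.cloud",
    "clinical-notes": "https://my-org--clinical-notes-llm.endpoints.huggingface.cloud",
}

def generate(model_key: str, prompt: str, token: str) -> str:
    """Route a prompt to the registered endpoint for the requested model."""
    client = InferenceClient(model=LLM_REGISTRY[model_key], token=token)
    return client.text_generation(prompt, max_new_tokens=256)

# Swapping models only requires changing the registry key
print(generate("general-medical", "Summarize the mechanism of action of metformin.", token="hf_..."))

Because every registered model sits behind the same text-generation interface, evaluating a newly released medical LLM amounts to adding one entry to the registry rather than changing application code.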
https://huggingface.co/blog/idefics2 | Introducing Idefics2: A Powerful 8B Vision-Language Model for the community | Leo Tronchon, Hugo Laurençon, Victor Sanh | April 15, 2024 | We are excited to release Idefics2, a general multimodal model that takes as input arbitrary sequences of texts and images, and generates text responses. It can answer questions about images, describe visual content, create stories grounded in multiple images, extract information from documents, and perform basic arithmetic operations. Idefics2 improves upon Idefics1: with 8B parameters, an open license (Apache 2.0), and enhanced OCR (Optical Character Recognition) capabilities, Idefics2 is a strong foundation for the community working on multimodality. Its performance on Visual Question Answering benchmarks is top of its class size, and competes with much larger models such as LLava-Next-34B and MM1-30B-chat. Idefics2 is also integrated in 🤗 Transformers from the get-go and therefore is straightforward to finetune for many multimodal applications. You can try out the models on the Hub right now!ModelOpen weightsSize# tokens per imageMMMU (val/test)MathVista (testmini)TextVQA (val)MMBench (test)VQAv2 (test-dev)DocVQA (test)DeepSeek-VL✅7B57636.6/-36.164.473.2-49.6LLaVa-NeXT-Mistral-7B✅7B288035.3/-37.765.768.782.2-LLaVa-NeXT-13B✅13B288036.2/-35.367.170.082.8-LLaVa-NeXT-34B✅34B288051.1/44.746.569.579.383.7-MM1-Chat-7B❌7B72037.0/35.635.972.872.382.8-MM1-Chat-30B❌30B72044.7/40.339.473.575.183.7Gemini 1.0 Pro❌🤷♂️🤷♂️47.9/-45.274.6-71.288.1Gemini 1.5 Pro❌🤷♂️🤷♂️58.5/-52.173.5-73.286.5Claude 3 Haiku❌🤷♂️🤷♂️50.2/-46.4---88.8Idefics1 instruct (32-shots)✅80B---39.3-68.8-Idefics2 (w/o im. split)*✅8B6443.5/37.951.670.476.880.867.3Idefics2 (w/ im. split)*✅8B32043.0/37.751.473.076.781.274.0* w/ im. split: Following the strategy from SPHINX and LLaVa-NeXT, we allow for an optional sub-image splitting in 4.Training DataIdefics2 was trained on a mixture of openly available datasets for the pretraining: Interleaved webdocuments (Wikipedia,OBELICS), image-caption pairs (Public Multimodal Dataset, LAION-COCO), OCR data (PDFA (en), IDL and Rendered-text, and image-to-code data (WebSight)). The interactive visualization allows exploring the OBELICS dataset. Following common practices in the foundation model community, we further train the base model on task-oriented data. However, these data are often in disparate formats, and scattered in various places. Gathering them is a barrier for the community. To address that problem, we are releasing the multimodal instruction fine-tuning dataset we've been cooking: The Cauldron, an open compilation of 50 manually-curated datasets formatted for multi-turn conversations. We instruction fine-tuned Idefics2 on the concatenation of The Cauldron and various text-only instruction fine-tuning datasets.Improvements over Idefics1We manipulate images in their native resolutions (up to 980 x 980) and native aspect ratios by following the NaViT strategy. That circumvents the need to resize images to fixed-size squares as it has been historically done in the computer vision community. Additionally, we follow the strategy from SPHINX and (optionally) allow sub-image splitting and passing images of very large resolution.We significantly enhanced OCR abilities by integrating data that requires the model to transcribe text in an image or a document. 
We also improved abilities in answering questions on charts, figures, and documents with appropriate training data.We departed from the Idefics1's architecture (gated cross-attentions) and simplified the integration of visual features into the language backbone. The images are fed to the vision encoder followed by a learned Perceiver pooling and an MLP modality projection. That pooled sequence is then concatenated with the text embeddings to obtain an (interleaved) sequence of image(s) and text(s).All of these improvements along with better pre-trained backbones yield a significant jump in performance over Idefics1 for a model that is 10x smaller.Getting Started with Idefics2Idefics2 is available on the Hugging Face Hub and supported in the last transformers version. Here is a code sample to try it out:import requestsimport torchfrom PIL import Imagefrom transformers import AutoProcessor, AutoModelForVision2Seqfrom transformers.image_utils import load_imageDEVICE = "cuda:0"# Note that passing the image urls (instead of the actual pil images) to the processor is also possibleimage1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b",).to(DEVICE)# Create inputsmessages = [{"role": "user","content": [{"type": "image"},{"type": "text", "text": "What do we see in this image?"},]},{"role": "assistant","content": [{"type": "text", "text": "In this image, we can see the city of New York, and more specifically the Statue of Liberty."},]},{"role": "user","content": [{"type": "image"},{"type": "text", "text": "And how about this image?"},]},]prompt = processor.apply_chat_template(messages, add_generation_prompt=True)inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")inputs = {k: v.to(DEVICE) for k, v in inputs.items()}# Generategenerated_ids = model.generate(**inputs, max_new_tokens=500)generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)print(generated_texts)We also provide a fine-tuning colab which should come in handy for anyone looking to improve Idefics2 on specific use cases.ResourcesIf you wish to deep dive further, here is the compilation of all resources for Idefics2:Idefics2 collectionIdefics2 model with model cardIdefics2-base model with model cardIdefics2-chat model with model card (coming soon)The Cauldron with its dataset cardOBELICS with its dataset cardWebSight with its dataset cardIdefics2 fine-tuning colabIdefics2-8B model demo (not the chatty model)Idefics2 demo: (coming soon)Idefics2 paper: (coming soon)LicenseThe model is built on top of two pre-trained models: Mistral-7B-v0.1 and siglip-so400m-patch14-384. Both of them have been released under Apache-2.0 license.We release Idefics2 weights under an Apache-2.0 license as well.AcknowledgmentsThank you to the Google Team and Mistral AI for releasing and making their models available to the open-source AI community!Special thanks to Chun Te Lee for the barplot, and Merve Noyan for the review and suggestions on the blogpost 🤗 |
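One practical note not covered in the post: if the half-precision checkpoint used in the snippet above does not fit on your GPU, a quantized load is a reasonable fallback. Below is a sketch using bitsandbytes 4-bit quantization with the same model ID; the quantization settings are illustrative defaults, not an official recommendation.

import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
    quantization_config=quantization_config,
    device_map="auto",  # place the quantized weights on the available GPU(s)
)

The rest of the generation code stays the same; only the memory footprint of the weights changes.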
https://huggingface.co/blog/vlms | Vision Language Models Explained | Merve Noyan, Edward Beeching | April 11, 2024 | Vision language models are models that can learn simultaneously from images and texts to tackle many tasks, from visual question answering to image captioning. In this post, we go through the main building blocks of vision language models: have an overview, grasp how they work, figure out how to find the right model, how to use them for inference and how to easily fine-tune them with the new version of trl released today!What is a Vision Language Model?Vision language models are broadly defined as multimodal models that can learn from images and text. They are a type of generative models that take image and text inputs, and generate text outputs. Large vision language models have good zero-shot capabilities, generalize well, and can work with many types of images, including documents, web pages, and more. The use cases include chatting about images, image recognition via instructions, visual question answering, document understanding, image captioning, and others. Some vision language models can also capture spatial properties in an image. These models can output bounding boxes or segmentation masks when prompted to detect or segment a particular subject, or they can localize different entities or answer questions about their relative or absolute positions. There’s a lot of diversity within the existing set of large vision language models, the data they were trained on, how they encode images, and, thus, their capabilities.Overview of Open-source Vision Language ModelsThere are many open vision language models on the Hugging Face Hub. Some of the most prominent ones are shown in the table below. There are base models, and models fine-tuned for chat that can be used in conversational mode. Some of these models have a feature called “grounding” which reduces model hallucinations. All models are trained on English unless stated otherwise.ModelPermissive LicenseModel SizeImage ResolutionAdditional CapabilitiesLLaVA 1.6 (Hermes 34B)✅34B672x672deepseek-vl-7b-base✅7B384x384DeepSeek-VL-Chat✅7B384x384Chatmoondream2✅~2B378x378CogVLM-base✅17B490x490CogVLM-Chat✅17B490x490Grounding, chatFuyu-8B❌8B300x300Text detection within imageKOSMOS-2✅~2B224x224Grounding, zero-shot object detectionQwen-VL✅4B448x448Zero-shot object detectionQwen-VL-Chat✅4B448x448ChatYi-VL-34B✅34B448x448Bilingual (English, Chinese)Finding the right Vision Language ModelThere are many ways to select the most appropriate model for your use case.Vision Arena is a leaderboard solely based on anonymous voting of model outputs and is updated continuously. In this arena, the users enter an image and a prompt, and outputs from two different models are sampled anonymously, then the user can pick their preferred output. This way, the leaderboard is constructed solely based on human preferences. Vision ArenaOpen VLM Leaderboard, is another leaderboard where various vision language models are ranked according to these metrics and average scores. You can also filter models according to model sizes, proprietary or open-source licenses, and rank for different metrics.Open VLM LeaderboardVLMEvalKit is a toolkit to run benchmarks on a vision language models that powers the Open VLM Leaderboard. 
Another evaluation suite is LMMS-Eval, which provides a standard command line interface to evaluate Hugging Face models of your choice with datasets hosted on the Hugging Face Hub, like below:accelerate launch --num_processes=8 -m lmms_eval --model llava --model_args pretrained="liuhaotian/llava-v1.5-7b" --tasks mme,mmbench_en --batch_size 1 --log_samples --log_samples_suffix llava_v1.5_mme_mmbenchen --output_path ./logs/ Both the Vision Arena and the Open VLM Leaderbard are limited to the models that are submitted to them, and require updates to add new models. If you want to find additional models, you can browse the Hub for models under the task image-text-to-text. There are different benchmarks to evaluate vision language models that you may come across in the leaderboards. We will go through a few of them.MMMUA Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI (MMMU) is the most comprehensive benchmark to evaluate vision language models. It contains 11.5K multimodal challenges that require college-level subject knowledge and reasoning across different disciplines such as arts and engineering. MMBenchMMBench is an evaluation benchmark that consists of 3000 single-choice questions over 20 different skills, including OCR, object localization and more. The paper also introduces an evaluation strategy called CircularEval, where the answer choices of a question are shuffled in different combinations, and the model is expected to give the right answer at every turn. There are other more specific benchmarks across different domains, including MathVista (visual mathematical reasoning), AI2D (diagram understanding), ScienceQA (Science Question Answering) and OCRBench (document understanding).Technical DetailsThere are various ways to pretrain a vision language model. The main trick is to unify the image and text representation and feed it to a text decoder for generation. The most common and prominent models often consist of an image encoder, an embedding projector to align image and text representations (often a dense neural network) and a text decoder stacked in this order. As for the training parts, different models have been following different approaches. For instance, LLaVA consists of a CLIP image encoder, a multimodal projector and a Vicuna text decoder. The authors fed a dataset of images and captions to GPT-4 and generated questions related to the caption and the image. The authors have frozen the image encoder and text decoder and have only trained the multimodal projector to align the image and text features by feeding the model images and generated questions and comparing the model output to the ground truth captions. After the projector pretraining, they keep the image encoder frozen, unfreeze the text decoder, and train the projector with the decoder. This way of pre-training and fine-tuning is the most common way of training vision language models.Structure of a Typical Vision Language ModelProjection and text embeddings are concatenatedAnother example is KOSMOS-2, where the authors chose to fully train the model end-to-end, which is computationally expensive compared to LLaVA-like pre-training. The authors later did language-only instruction fine-tuning to align the model. Fuyu-8B, as another example, doesn’t even have an image encoder. Instead, image patches are directly fed to a projection layer and then the sequence goes through an auto-regressive decoder. 
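To make the "image encoder + projector + text decoder" structure described above more tangible, here is a deliberately simplified PyTorch sketch of how projected image features can be concatenated with text embeddings before decoding. It illustrates the general pattern only, not the architecture of any specific model, and all dimensions are made up.

import torch
import torch.nn as nn

class ToyVLM(nn.Module):
    """Minimal illustration: encode an image, project it into the text embedding space,
    prepend it to the text embeddings, and let a causal decoder generate text."""

    def __init__(self, vision_encoder: nn.Module, text_decoder: nn.Module,
                 vision_dim: int = 1024, text_dim: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder              # e.g. a ViT returning (B, N_img, vision_dim)
        self.projector = nn.Linear(vision_dim, text_dim)  # aligns image features with the text space
        self.text_decoder = text_decoder                  # e.g. a causal LM accepting inputs_embeds

    def forward(self, pixel_values, text_embeds):
        image_feats = self.vision_encoder(pixel_values)        # (B, N_img, vision_dim)
        image_embeds = self.projector(image_feats)             # (B, N_img, text_dim)
        inputs_embeds = torch.cat([image_embeds, text_embeds], dim=1)  # image tokens then text tokens
        return self.text_decoder(inputs_embeds=inputs_embeds)

In LLaVA-style training, only the projector (and later the decoder) would receive gradients, while the vision encoder stays frozen, exactly as described above.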
Most of the time, you don’t need to pre-train a vision language model, as you can either use one of the existing ones or fine-tune them on your own use case. We will go through how to use these models using transformers and fine-tune using SFTTrainer.Using Vision Language Models with transformersYou can infer with Llava using the LlavaNext model as shown below.Let’s initialize the model and the processor first.from transformers import LlavaNextProcessor, LlavaNextForConditionalGenerationimport torchdevice = torch.device('cuda' if torch.cuda.is_available() else 'cpu')processor = LlavaNextProcessor.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf")model = LlavaNextForConditionalGeneration.from_pretrained("llava-hf/llava-v1.6-mistral-7b-hf",torch_dtype=torch.float16,low_cpu_mem_usage=True)model.to(device)We now pass the image and the text prompt to the processor, and then pass the processed inputs to the generate. Note that each model uses its own prompt template, be careful to use the right one to avoid performance degradation.from PIL import Imageimport requestsurl = "https://github.com/haotian-liu/LLaVA/blob/1a91fc274d7c35a9b50b3cb29c4247ae5837ce39/images/llava_v1_5_radar.jpg?raw=true"image = Image.open(requests.get(url, stream=True).raw)prompt = "[INST] <image>What is shown in this image? [/INST]"inputs = processor(prompt, image, return_tensors="pt").to(device)output = model.generate(**inputs, max_new_tokens=100)Call decode to decode the output tokens.print(processor.decode(output[0], skip_special_tokens=True))Fine-tuning Vision Language Models with TRLWe are excited to announce that TRL’s SFTTrainer now includes experimental support for Vision Language Models! We provide an example here of how to perform SFT on a Llava 1.5 VLM using the llava-instruct dataset which contains 260k image-conversation pairs.The dataset contains user-assistant interactions formatted as a sequence of messages. For example, each conversation is paired with an image that the user asks questions about.To use the experimental VLM training support, you must install the latest version of TRL, with pip install -U trl.The full example script can be found here.from trl.commands.cli_utils import SftScriptArguments, TrlParserparser = TrlParser((SftScriptArguments, TrainingArguments))args, training_args = parser.parse_args_and_config()Initialize the chat template for instruction fine-tuning.LLAVA_CHAT_TEMPLATE = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. {% for message in messages %}{% if message['role'] == 'user' %}USER: {% else %}ASSISTANT: {% endif %}{% for item in message['content'] %}{% if item['type'] == 'text' %}{{ item['text'] }}{% elif item['type'] == 'image' %}<image>{% endif %}{% endfor %}{% if message['role'] == 'user' %} {% else %}{{eos_token}}{% endif %}{% endfor %}"""We will now initialize our model and tokenizer. 
from transformers import AutoTokenizer, AutoProcessor, TrainingArguments, LlavaForConditionalGenerationimport torchmodel_id = "llava-hf/llava-1.5-7b-hf"tokenizer = AutoTokenizer.from_pretrained(model_id)tokenizer.chat_template = LLAVA_CHAT_TEMPLATEprocessor = AutoProcessor.from_pretrained(model_id)processor.tokenizer = tokenizermodel = LlavaForConditionalGeneration.from_pretrained(model_id, torch_dtype=torch.float16)Let’s create a data collator to combine text and image pairs.class LLavaDataCollator:def __init__(self, processor):self.processor = processordef __call__(self, examples):texts = []images = []for example in examples:messages = example["messages"]text = self.processor.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=False)texts.append(text)images.append(example["images"][0])batch = self.processor(texts, images, return_tensors="pt", padding=True)labels = batch["input_ids"].clone()if self.processor.tokenizer.pad_token_id is not None:labels[labels == self.processor.tokenizer.pad_token_id] = -100batch["labels"] = labelsreturn batchdata_collator = LLavaDataCollator(processor)Load our dataset.from datasets import load_datasetraw_datasets = load_dataset("HuggingFaceH4/llava-instruct-mix-vsft")train_dataset = raw_datasets["train"]eval_dataset = raw_datasets["test"]Initialize the SFTTrainer, passing in the model, the dataset splits, PEFT configuration and data collator and call train(). To push our final checkpoint to the Hub, call push_to_hub().from trl import SFTTrainertrainer = SFTTrainer(model=model,args=training_args,train_dataset=train_dataset,eval_dataset=eval_dataset,dataset_text_field="text", # need a dummy fieldtokenizer=tokenizer,data_collator=data_collator,dataset_kwargs={"skip_prepare_dataset": True},)trainer.train()Save the model and push to the Hugging Face Hub.trainer.save_model(training_args.output_dir)trainer.push_to_hub()You can find the trained model here.You can try the model we just trained directly in our VLM playground below ⬇️AcknowledgementsWe would like to thank Pedro Cuenca, Lewis Tunstall, Kashif Rasul and Omar Sanseviero for their reviews and suggestions on this blog post. |
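A note on the PEFT configuration mentioned in the trainer setup above, which the snippet does not spell out: a minimal LoRA configuration might look like the sketch below, continuing from the variables defined earlier. The rank, target modules, and other hyperparameters are illustrative assumptions, not the settings of the original example script.

from peft import LoraConfig
from trl import SFTTrainer

# Illustrative LoRA configuration; tune these values for your own setup
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    peft_config=lora_config,          # train LoRA adapters instead of all model weights
    dataset_text_field="text",        # dummy field, as in the collator-based setup above
    tokenizer=tokenizer,
    data_collator=data_collator,
    dataset_kwargs={"skip_prepare_dataset": True},
)

With adapters, only a small fraction of the parameters is updated, which keeps memory requirements far below full fine-tuning of the 7B model.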
https://huggingface.co/blog/google-cloud-model-garden | Making thousands of open LLMs bloom in the Vertex AI Model Garden | Philipp Schmid, Jeff Boudier | April 10, 2024 | Today, we are thrilled to announce the launch of Deploy on Google Cloud, a new integration on the Hugging Face Hub to deploy thousands of foundation models easily to Google Cloud using Vertex AI or Google Kubernetes Engine (GKE). Deploy on Google Cloud makes it easy to deploy open models as API Endpoints within your own Google Cloud account, either directly through Hugging Face model cards or within Vertex Model Garden, Google Cloud’s single place to discover, customize, and deploy a wide variety of models from Google and Google partners. Starting today, we are enabling the most popular open models on Hugging Face for inference powered by our production solution, Text Generation Inference. With Deploy on Google Cloud, developers can build production-ready Generative AI applications without managing infrastructure and servers, directly within their secure Google Cloud environment.A Collaboration for AI BuildersThis new experience expands upon the strategic partnership we announced earlier this year to simplify the access and deployment of open Generative AI models for Google customers. One of the main problems developers and organizations face is the time and resources it takes to deploy models securely and reliably. Deploy on Google Cloud offers an easy, managed solution to these challenges, providing dedicated configurations and assets to Hugging Face Models. It’s a simple click-through experience to create a production-ready Endpoint on Google Cloud’s Vertex AI. “Vertex AI’s Model Garden integration with the Hugging Face Hub makes it seamless to discover and deploy open models on Vertex AI and GKE, whether you start your journey on the Hub or directly in the Google Cloud Console” says Wenming Ye, Product Manager at Google. “We can’t wait to see what Google Developers build with Hugging Face models”.How it works - from the HubDeploying Hugging Face Models on Google Cloud is super easy. Below, you will find step-by-step instructions on how to deploy Zephyr Gemma. Starting today, all models with the “text-generation-inference” tag will be supported. Open the “Deploy” menu, and select “Google Cloud”. This will now bring you straight into the Google Cloud Console, where you can deploy Zephyr Gemma in 1 click on Vertex AI, or GKE. Once you are in the Vertex Model Garden, you can select Vertex AI or GKE as your deployment environment. With Vertex AI you can deploy the model with 1-click on “Deploy”. For GKE, you can follow instructions and manifest templates on how to deploy the model on a new or running Kubernetes Cluster. How it works - from Vertex Model GardenVertex Model Garden is where Google Developers can find ready-to-use models for their Generative AI projects. Starting today, the Vertex Model Garden offers a new experience to easily deploy the most popular open LLMs available on Hugging Face!You can find the new “Deploy From Hugging Face” option inside Google Vertex AI Model Garden, which allows you to search and deploy Hugging Face models directly within your Google Cloud console. When you click on “Deploy From Hugging Face”, a form will appear where you can quickly search for model IDs. Hundreds of the most popular open LLMs on Hugging Face are available with ready-to-use, tested hardware configurations. 
Once you find the model you want to deploy, select it, and Vertex AI will prefill all required configurations to deploy your model to Vertex AI or GKE. You can even make sure you selected the right model by “viewing it on Hugging Face.” If you’re using a gated model, make sure to provide your Hugging Face access token so the model download can be authorized. And that’s it! Deploying a model like Zephyr Gemma directly from the Vertex Model Garden onto your own Google Cloud account is just a couple of clicks.

We’re just getting started

We are excited to collaborate with Google Cloud to make AI more open and accessible for everyone. Deploying open models on Google Cloud has never been easier, whether you start from the Hugging Face Hub or from the Google Cloud console. And we’re not going to stop there – stay tuned as we enable more experiences to build AI with open models on Google Cloud! |
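Once a model such as Zephyr Gemma is deployed from the Model Garden, it can also be queried programmatically with the Vertex AI SDK. The sketch below is only an assumption of how such a call could look: the project, region, endpoint ID, and the TGI-style request body are placeholders, and the exact instance schema depends on the serving container you deployed.

from google.cloud import aiplatform

# Hypothetical project, region and endpoint values
aiplatform.init(project="my-gcp-project", location="us-central1")
endpoint = aiplatform.Endpoint(
    "projects/my-gcp-project/locations/us-central1/endpoints/1234567890"
)

# TGI-style payload; the exact schema depends on the deployed serving container
response = endpoint.predict(
    instances=[{"inputs": "Why is open-source AI important?", "parameters": {"max_new_tokens": 128}}]
)
print(response.predictions)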
https://huggingface.co/blog/codegemma | CodeGemma - an official Google release for code LLMs | Pedro Cuenca, Omar Sanseviero, Vaibhav Srivastav, Philipp Schmid, Mishig Davaadorj, Loubna Ben Allal | April 9, 2024 | CodeGemma is a family of open-access versions of Gemma specialized in code, and we’re excited to collaborate with Google on its release to make it as accessible as possible.🤗CodeGemma comes in three flavors:A 2B base model specialized in infilling and open-ended generation.A 7B base model trained with both code infilling and natural language.A 7B instruct model a user can chat with about code.We’ve collaborated with Google to ensure the best integration into the Hugging Face ecosystem. You can find the three open-access models ready to use on the Hub. Among the features and integrations being released, we have:Models on the Hub, with their model cards and licenses. There are versions for the transformers library, checkpoints for use with Google’s original codebases, and full-precision GGUF files that the community can quantize.Transformers integrationIntegration with Google CloudIntegration with Inference EndpointsCode benchmarks Table of contents What is CodeGemmaEvaluation ResultsPrompt formatUsing CodeGemmaDemoUsing TransformersIntegration with Google CloudIntegration with Inference EndpointsAdditional Resources What is CodeGemma? CodeGemma is a family of code-specialist LLM models by Google, based on the pre-trained 2B and 7B Gemma checkpoints. CodeGemma are further trained on an additional 500 billion tokens of primarily English language data, mathematics, and code to improve on logical and mathematical reasoning, and are suitable for code completion and generation.CodeGemma 2B was trained exclusively on Code Infilling and is meant for fast code completion and generation, especially in settings where latency and/or privacy are crucial. CodeGemma 7B training mix includes code infilling data (80%) and natural language. It can be used for code completion, as well as code and language understanding and generation. CodeGemma 7B Instruct was fine-tuned for instruction following on top of CodeGemma 7B. It’s meant for conversational use, especially around code, programming, or mathematical reasoning topics. All the models have the same 8K token context size as their predecessors.This image is from the original report Evaluation Results CodeGemma-7B outperforms similarly-sized 7B models except DeepSeek-Coder-7B on HumanEval, a popular benchmark for evaluating code models on Python. The same goes for the evaluation of other programming languages like Java, JavaScript, and C++ from MultiPL-E, a translation of HumanEval. According to the technical report, the model performs best on GSM8K among 7B models. The instruct version CodeGemma-7B-it improves on the most popular languages on both HumanEval and MBPP (cf paper table 5). 
For more details, you can check the BigCode leaderboard or some metrics below.ModelPretraining size [tokens]PythonJavaScript10B+ modelsStarCoder 2 15B4,000B+44.1544.24Code Llama 13B2,500B35.0738.267B modelsDeepSeek Coder 7B2,000B45.8345.9CodeGemma 7B500B of extra training40.1343.06Code Llama 7B2,500B29.9831.8StarCoder 2 7B3,500B+34.0935.35StarCoderBase 7B3,000B+28.3727.35<3B modelsCodeGemma 2B500B of extra training27.2829.94Stable Code 3B1,300B30.7228.75StarCoder 2 3B3,000B+31.4435.37ModelPretraining size [tokens]PythonJavaScript10B+ modelsCode Llama 13B2,620B50.640.92Code Llama 13B2,620B42.8940.667B modelsCodeGemma 7B500B52.7447.71Code Llama 7B2,620B40.4836.34Code Llama 7B2,620B25.6533.11Here is a table from the original report with a breakdown per language. Prompt format CodeGemma 2B and CodeGemma 7B use infilling (code, comments, docstrings, import statements) for code completion. CodeGemma was trained for this task using the fill-in-the-middle (FIM) objective, where you provide a prefix and a suffix as context for the completion. The following tokens are used to separate the different parts of the input:<|fim_prefix|> precedes the context before the completion we want to run.<|fim_suffix|> precedes the suffix. You must put this token exactly where the cursor would be positioned in an editor, as this is the location where the model will code complete.<|fim_middle|> is the prompt that invites the model to run the generation.In addition to these, there's also <|file_separator|>, which provides multi-file contexts. We’ll show examples of use in the Using with transformers section.CodeGemma 7B Instruct uses the same prompt format as the base Gemma Instruction-tuned versions, following this conversation structure:<bos><start_of_turn>userknock knock<end_of_turn><start_of_turn>modelwho is there<end_of_turn><start_of_turn>userLaMDA<end_of_turn><start_of_turn>modelLaMDA who?<end_of_turn>As is the case with Gemma, the easiest way to reproduce this format is with the chat template available in transformers. Using CodeGemma Demo You can easily try the CodeGemma Model (7 billion parameters!) in this Space or in the Chatbot embedded below:Under the hood, this playground uses Transformers implementation. You can also duplicate the Space for your use – it's self-contained, so you can examine the source code and adapt it as you wish! Using Transformers With Transformers release 4.39, you can use CodeGemma and leverage all the tools within the Hugging Face ecosystem, such as:training and inference scripts and examplessafe file format (safetensors)integrations with tools such as bitsandbytes (4-bit quantization), PEFT (parameter efficient fine-tuning), and Flash Attention 2utilities and helpers to run generation with the modelmechanisms to export the models to deployLike the Gemma models, CodeGemma is compatible with torch.compile() for an important inference speedup.Bonus: We made a Colab notebook for you to try out the model at the touch of a button here.To use CodeGemma with transformers, make sure to use the latest release:pip install --upgrade transformersThe following snippet shows how to use codegemma-2b for code completion with transformers. 
It requires about 6 GB of RAM using float16 precision, making it perfectly suitable for consumer GPUs and on-device applications.from transformers import GemmaTokenizer, AutoModelForCausalLMimport torchmodel_id = "google/codegemma-2b"tokenizer = GemmaTokenizer.from_pretrained(model_id)model = AutoModelForCausalLM.from_pretrained( model_id, torch_dtype=torch.float16).to("cuda:0")prompt = '''\<|fim_prefix|>import datetimedef calculate_age(birth_year): """Calculates a person's age based on their birth year.""" current_year = datetime.date.today().year <|fim_suffix|> return age<|fim_middle|>\'''inputs = tokenizer(prompt, return_tensors="pt").to(model.device)prompt_len = inputs["input_ids"].shape[-1]outputs = model.generate(**inputs, max_new_tokens=100)print(tokenizer.decode(outputs[0][prompt_len:]))Observe that the <|fim_suffix|> token appears in the position where the cursor would be placed in an editor, marking the position for the generation. <|fim_prefix|> provides the context that precedes the cursor, and the remaining until <|fim_middle|> is additional context after the cursor. Either of them can be empty if the cursor is located at the beginning or end of the file.The previous code may return something like the following:age = current_year - birth_year<|file_separator|>test_calculate_age.py<|fim_suffix|> assert calculate_age(1990) == 33 assert calculate_age(1980) == 43 assert calculate_age(1970) == 53 assert calculate_age(1960) == 63 assert calculate_age(1950) == 73Note the extra content after the correct completion. This is particularly the case for CodeGemma 7B, which is more verbose and tends to provide additional code or comments after completion. We must ignore everything that appears after the FIM tokens or the EOS token for code infilling. We can stop generation early with transformers by providing a list of terminators to the generate function, like this:FIM_PREFIX = '<|fim_prefix|>'FIM_SUFFIX = '<|fim_suffix|>'FIM_MIDDLE = '<|fim_middle|>'FIM_FILE_SEPARATOR = '<|file_separator|>'terminators = tokenizer.convert_tokens_to_ids( [FIM_PREFIX, FIM_MIDDLE, FIM_SUFFIX, FIM_FILE_SEPARATOR])terminators += [tokenizer.eos_token_id]outputs = model.generate( **inputs, max_new_tokens=100, eos_token_id=terminators,)In this case, generation will stop as soon as the first delimiter is found:age = current_year - birth_year<|file_separator|> A note on precision The original CodeGemma checkpoints are released in bfloat16 precision. If you load the model without indicating a torch_dtype, PyTorch will upcast them to float32. Casting to float16 is perfectly fine for use, and it can be much faster than bfloat16 on certain hardware. For maximum precision, we recommend you use bfloat16 rather than float32.You can also automatically quantize the model, loading it in 8-bit or 4-bit mode. 4-bit loading of CodeGemma 7B takes about 9 GB of memory to run, making it compatible with many consumer cards and all the GPUs in Google Colab. This is how you’d load the generation pipeline in 4-bit:pipeline = pipeline( "text-generation", model=model, model_kwargs={ "torch_dtype": torch.float16, "quantization_config": {"load_in_4bit": True} },) Integration with Google Cloud You can deploy and train Gemma on Google Cloud through Vertex AI or Google Kubernetes Engine (GKE), using Text Generation Inference and Transformers. To deploy the CodeGemma model from Hugging Face, go to the model page and click on Deploy -> Google Cloud. 
This will bring you to the Google Cloud Console, where you can 1-click deploy CodeGemma on Vertex AI or GKE, powered by Text Generation Inference. You can also access CodeGemma directly through the Vertex AI Model Garden.

Integration with Inference Endpoints

You can deploy CodeGemma on Hugging Face's Inference Endpoints, which uses Text Generation Inference as the backend. Text Generation Inference is a production-ready inference container developed by Hugging Face to enable easy deployment of large language models. It has features such as continuous batching, token streaming, tensor parallelism for fast inference on multiple GPUs, production-ready logging and tracing, and is distributed under the Apache 2 license. To deploy a CodeGemma model, go to the model page and click on the Deploy -> Inference Endpoints widget. You can learn more about Deploying LLMs with Hugging Face Inference Endpoints in a previous blog post. Note that T4s do not support the bfloat16 format, so you will need to use a different GPU option.

from huggingface_hub import InferenceClient

client = InferenceClient(model=IE_ENDPOINT)

prompt = """\
<|fim_prefix|>import <|fim_suffix|>

if __name__ == '__main__':
    sys.exit(0)<|fim_middle|>\
"""

client.text_generation(prompt=prompt)

Additional Resources: Models on the Hub, Code Leaderboard, Technical Report |
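One more usage note to complement the infilling examples above: the instruct variant is meant to be used with Gemma's chat template, which the post describes but does not show in code. A minimal sketch follows; the model ID is the one released with this post, and the prompt and generation settings are illustrative.

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "google/codegemma-7b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]

# apply_chat_template produces the <start_of_turn>/<end_of_turn> format shown earlier
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))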
https://huggingface.co/blog/hugging-face-wiz-security-blog | Hugging Face partners with Wiz Research to Improve AI Security | Josef Fukano, Guillaume Salou, Michelle Habonneau, Adrien, Luc Georges, Nicolas Patry, Julien Chaumond | April 4, 2024 | We are pleased to announce that we are partnering with Wiz with the goal of improving security across our platform and the AI/ML ecosystem at large.Wiz researchers collaborated with Hugging Face on the security of our platform and shared their findings. Wiz is a cloud security company that helps their customers build and maintain software in a secure manner. Along with the publication of this research, we are taking the opportunity to highlight some related Hugging Face security improvements.Hugging Face has recently integrated Wiz for Vulnerability Management, a continuous and proactive process to keep our platform free of security vulnerabilities. In addition, we are using Wiz for Cloud Security Posture Management (CSPM), which allows us to configure our cloud environment securely, and monitor to ensure it remains secure. One of our favorite Wiz features is a holistic view of Vulnerabilities, from storage to compute to network. We run multiple Kubernetes (k8s) clusters and have resources across multiple regions and cloud providers, so it is extremely helpful to have a central report in a single location with the full context graph for each vulnerability. We’ve also built on top of their tooling, to automatically remediate detected issues in our products, most notably in Spaces.As part of the joint work, Wiz’s security research team identified shortcomings of our sandboxed compute environments by running arbitrary code within the system thanks to pickle. As you read this blog and the Wiz security research paper, it is important to remember that we have resolved all issues related to the exploit and continue to remain diligent in our Threat Detection and Incident Response process. Hugging Face SecurityAt Hugging Face we take security seriously, as AI rapidly evolves, new threat vectors seemingly pop up every day. Even as Hugging Face announces multiple partnerships and business relationships with the largest names in tech, we remain committed to allow our users and the AI community to responsibly experiment with and operationalize AI/ML systems and technologies. We are dedicated to securing our platform as well as democratizing AI/ML, such that the community can contribute to and be a part of this paradigm shifting event that will impact us all. We are writing this blog to reaffirm our commitment to protecting our users and customers from security threats. Below we will also discuss Hugging Face’s philosophy regarding our support of the controversial pickle files as well as discuss the shared responsibility of moving away from the pickle format. There are many other exciting security improvements and announcements coming in the near future. The publications will not only discuss the security risks to the Hugging Face platform community, but also cover systemic security risks of AI as well as best practices for mitigation. 
We remain committed to making our products, our infrastructure, and the AI community secure, stay tuned for followup security blog posts and whitepapers.Open Source Security Collaboration and Tools for the CommunityWe highly value transparency and collaboration with the community and this includes participation in the identification and disclosure of vulnerabilities, collaborating on resolving security issues, and security tooling. Below are examples of our security wins born from collaboration, which help the entire AI community lower their security risk:Picklescan was built in partnership with Microsoft; Matthieu Maitre started the project and given we had our own internal version of the same tool, we joined forces and contributed to picklescan. Refer to the following documentation page if you are curious to know more on how it works:https://huggingface.co/docs/hub/en/security-pickleSafetensors, which was developed by Nicolas Patry, is a secure alternative to pickle files. Safetensors has been audited by Trail of Bits on a collaborative initiative with EuletherAI & Stability AI.https://huggingface.co/docs/safetensors/en/indexWe have a robust bug bounty program, with many amazing researchers from all around the world. Researchers who have identified a security vuln may inquire about joining our program through security@huggingface.coMalware Scanning: https://huggingface.co/docs/hub/en/security-malwareSecrets Scanning: https://huggingface.co/docs/hub/security-secretsAs previously mentioned, we’re also collaborating with Wiz to lower Platform security risks We are starting a series of security publications which address security issues facing the AI/ML community.Security Best Practices for Open Source AI/ML usersAI/ML has introduced new vectors of attack, but for many of these attacks mitigants are long standing and well known. Security professionals should ensure that they apply relevant security controls to AI resources and models. In addition, below are some resources and best practices when working with open source software and models:Know the contributor: Only use models from trusted sources and pay attention to commit signing. https://huggingface.co/docs/hub/en/security-gpgDon’t use pickle files in production environmentsUse Safetensors: https://huggingface.co/docs/safetensors/en/index Review the OWASP top 10: https://owasp.org/www-project-top-ten/Enable MFA on your Hugging Face accountsEstablish a Secure Development Lifecycle, which includes code review by a security professional or engineer with appropriate security trainingTest models in non-production and virtualized test/dev environmentsPickle Files - The Insecure Elephant in the RoomPickle files have been at the core of most of the research done by Wiz and other recent publications by security researchers about Hugging Face. Pickle files have long been considered to have security risks associated with them, see our doc files for more information: https://huggingface.co/docs/hub/en/security-pickleDespite these known security flaws, the AI/ML community still frequently uses pickles (or similarly trivially exploitable formats). 
Many of these use cases are low risk or for test purposes making the familiarity and ease of use of pickle files more attractive than the secure alternative.As the open source AI platform, we are left with the following options:Ban pickle files entirelyDo nothing about pickle filesFinding a middle ground that both allows for pickle use as well as reasonably and practicably mitigating the risks associated with pickle filesWe have chosen option 3, the middle ground for now. This option is a burden on our engineering and security teams and we have put in significant effort to mitigate the risks while allowing the AI community to use tools they choose. Some of the key mitigants we have implemented to the risks related to pickle include: Creating clear documentation outlining the risksDeveloping automated scanning toolsUsing scanning tools and labeling models with security vulnerabilities with clear warningsWe have even provided a secure solution to use in lieu of pickle (Safetensors)We have also made Safetensors a first class citizen on our platform to protect the community members who may not understand the risksIn addition to the above, we have also had to significantly segment and enhance security of the areas in which models are used to account for potential vulnerabilities within themWe intend to continue to be the leader in protecting and securing the AI Community. Part of this will be monitoring and addressing risks related to pickle files. Sunsetting support of pickle is also not out of the question either, however, we do our best to balance the impact on the community as part of a decision like this. An important note that the upstream open source communities as well as large tech and security firms, have been largely silent on contributing to solutions here and left Hugging Face to both define philosophy and invest heavily in developing and implementing mitigating controls to ensure the solution is both acceptable and practicable. Closing remarksI spoke extensively to Nicolas Patry, the creator of Safetensors in writing this blog post and he requested that I add a call to action to the AI open source community and AI enthusiasts:Pro-actively start replacing your pickle files with Safetensors. As mentioned earlier, pickle contains inherent security flaws and may be unsupported in the near future.Keep opening issues/PRs upstream about security to your favorite libraries to push secure defaults as much as possible upstream.The AI industry is rapidly changing and new attack vectors / exploits are being identified all the time. Huggingface has a one of a kind community and we partner heavily with you to help us maintain a secure platform. Please remember to responsibly disclose security vulns/bugs through the appropriate channels to avoid potential legal liability and violation of laws.Want to join the discussion? Reach out to us as security@huggingface.co or follow us on Linkedin/Twitter. |
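As a practical follow-up to the call to action above, converting an existing PyTorch pickle checkpoint to Safetensors is usually a one-off step. Here is a minimal sketch, assuming a local pytorch_model.bin that you trust enough to load one last time for the conversion; the file names are placeholders.

import torch
from safetensors.torch import save_file, load_file

# Load the legacy pickle-based checkpoint one final time (only for files you trust)
state_dict = torch.load("pytorch_model.bin", map_location="cpu")

# Safetensors stores raw tensors only, so keep tensor entries and make them contiguous
state_dict = {k: v.contiguous() for k, v in state_dict.items() if isinstance(v, torch.Tensor)}

# Write the secure replacement and verify it loads back
save_file(state_dict, "model.safetensors")
reloaded = load_file("model.safetensors")
print(f"Converted {len(reloaded)} tensors to safetensors.")

After the conversion, the pickle file can be removed from the repository so that downstream users never have to deserialize it.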
https://huggingface.co/blog/duckdb-nsql-7b | Text2SQL using Hugging Face Dataset Viewer API and Motherduck DuckDB-NSQL-7B | Andrea Soria, Till Döhmen, Sen Wu, Laurel Orr | April 4, 2024 | Today, integrating AI-powered features, particularly leveraging Large Language Models (LLMs), has become increasingly prevalent across various tasks such as text generation, classification, image-to-text, image-to-image transformations, etc.Developers are increasingly recognizing these applications' potential benefits, particularly in enhancing core tasks such as scriptwriting, web development, and, now, interfacing with data. Historically, crafting insightful SQL queries for data analysis was primarily the domain of data analysts, SQL developers, data engineers, or professionals in related fields, all navigating the nuances of SQL dialect syntax. However, with the advent of AI-powered solutions, the landscape is evolving. These advanced models offer new avenues for interacting with data, potentially streamlining processes and uncovering insights with greater efficiency and depth.What if you could unlock fascinating insights from your dataset without diving deep into coding? To glean valuable information, one would need to craft a specialized SELECT statement, considering which columns to display, the source table, filtering conditions for selected rows, aggregation methods, and sorting preferences. This traditional approach involves a sequence of commands: SELECT, FROM, WHERE, GROUP, and ORDER.But what if you’re not a seasoned developer and still want to harness the power of your data? In such cases, seeking assistance from SQL specialists becomes necessary, highlighting a gap in accessibility and usability.This is where groundbreaking advancements in AI and LLM technology step in to bridge the divide. Imagine conversing with your data effortlessly, simply stating your information needs in plain language and having the model translate your request into a query. In recent months, significant strides have been made in this arena. MotherDuck and Numbers Station unveiled their latest innovation: DuckDB-NSQL-7B, a state-of-the-art LLM designed specifically for DuckDB SQL. What is this model’s mission? To empower users with the ability to unlock insights from their data effortlessly.Initially fine-tuned from Meta’s original Llama-2–7b model using a broad dataset covering general SQL queries, DuckDB-NSQL-7B underwent further refinement with DuckDB text-to-SQL pairs. 
Notably, its capabilities extend beyond crafting SELECT statements; it can generate a wide range of valid DuckDB SQL statements, including official documentation and extensions, making it a versatile tool for data exploration and analysis.In this article, we will learn how to deal with text2sql tasks using the DuckDB-NSQL-7B model, the Hugging Face dataset viewer API for parquet files and duckdb for data retrieval.text2sql flowHow to use the modelUsing Hugging Face transformers pipelinefrom transformers import pipelinepipe = pipeline("text-generation", model="motherduckdb/DuckDB-NSQL-7B-v0.1")Using transformers tokenizer and modelfrom transformers import AutoTokenizer, AutoModelForCausalLMtokenizer = AutoTokenizer.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")model = AutoModelForCausalLM.from_pretrained("motherduckdb/DuckDB-NSQL-7B-v0.1")Using llama.cpp to load the model in GGUFfrom llama_cpp import Llamallama = Llama(model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf", # Path to local modeln_gpu_layers=-1,)The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide variety of hardware - locally and in the cloud. We will use this approach.Hugging Face Dataset Viewer API for more than 120K datasetsData is a crucial component in any Machine Learning endeavor. Hugging Face is a valuable resource, offering access to over 120,000 free and open datasets spanning various formats, including CSV, Parquet, JSON, audio, and image files.Each dataset hosted by Hugging Face comes equipped with a comprehensive dataset viewer. This viewer provides users essential functionalities such as statistical insights, data size assessment, full-text search capabilities, and efficient filtering options. This feature-rich interface empowers users to easily explore and evaluate datasets, facilitating informed decision-making throughout the machine learning workflow.For this demo, we will be using the world-cities-geo dataset.Dataset viewer of world-cities-geo datasetBehind the scenes, each dataset in the Hub is processed by the Hugging Face dataset viewer API, which gets useful information and serves functionalities like:List the dataset splits, column names and data typesGet the dataset size (in number of rows or bytes)Download and view rows at any index in the datasetSearch a word in the datasetFilter rows based on a query stringGet insightful statistics about the dataAccess the dataset as parquet files to use in your favorite processing or analytics frameworkIn this demo, we will use the last functionality, auto-converted parquet files.Generate SQL queries from text instructionsFirst, download the quantized models version of DuckDB-NSQL-7B-v0.1Downloading the modelAlternatively, you can execute the following code:huggingface-cli download motherduckdb/DuckDB-NSQL-7B-v0.1-GGUF DuckDB-NSQL-7B-v0.1-q8_0.gguf --local-dir . --local-dir-use-symlinks FalseNow, lets install the needed dependencies:pip install llama-cpp-pythonpip install duckdbFor the text-to-SQL model, we will use a prompt with the following structure:### Instruction:Your task is to generate valid duckdb SQL to answer the following question.### Input:Here is the database schema that the SQL query will run on:{ddl_create}### Question:{query_input}### Response (use duckdb shorthand if possible):ddl_create will be the dataset schema as a SQL CREATE commandquery_input will be the user instructions, expressed with natural languageSo, we need to tell to the model about the schema of the Hugging Face dataset. 
For that, we are going to get the first parquet file for jamescalam/world-cities-geo dataset:GET https://huggingface.co/api/datasets/jamescalam/world-cities-geo/parquet{"default":{"train":["https://huggingface.co/api/datasets/jamescalam/world-cities-geo/parquet/default/train/0.parquet"]}}The parquet file is hosted in Hugging Face viewer under refs/convert/parquet revision:Parquet fileSimulate a DuckDB table creation from the first row of the parquet fileimport duckdbcon = duckdb.connect()con.execute(f"CREATE TABLE data as SELECT * FROM '{first_parquet_url}' LIMIT 1;")result = con.sql("SELECT sql FROM duckdb_tables() where table_name ='data';").df()ddl_create = result.iloc[0,0]con.close()The CREATE schema DDL is:CREATE TABLE "data"(city VARCHAR, country VARCHAR, region VARCHAR,continent VARCHAR, latitude DOUBLE, longitude DOUBLE, x DOUBLE, y DOUBLE, z DOUBLE);And, as you can see, it matches the columns in the dataset viewer:Dataset columnsNow, we can construct the prompt with the ddl_create and the query inputprompt = """### Instruction:Your task is to generate valid duckdb SQL to answer the following question.### Input:Here is the database schema that the SQL query will run on:{ddl_create}### Question:{query_input}### Response (use duckdb shorthand if possible):"""If the user wants to know the Cities from Albania country, the prompt will look like this:query = "Cities from Albania country"prompt = prompt.format(ddl_create=ddl_create, query_input=query)So the expanded prompt that will be sent to the LLM looks like this:### Instruction:Your task is to generate valid duckdb SQL to answer the following question.### Input:Here is the database schema that the SQL query will run on:CREATE TABLE "data"(city VARCHAR, country VARCHAR, region VARCHAR, continent VARCHAR, latitude DOUBLE, longitude DOUBLE, x DOUBLE, y DOUBLE, z DOUBLE);### Question:Cities from Albania country### Response (use duckdb shorthand if possible):It is time to send the prompt to the modelfrom llama_cpp import Llamallm = Llama(model_path="DuckDB-NSQL-7B-v0.1-q8_0.gguf",n_ctx=2048,n_gpu_layers=50)pred = llm(prompt, temperature=0.1, max_tokens=1000)sql_output = pred["choices"][0]["text"]The output SQL command will point to a data table, but since we don't have a real table but just a reference to the parquet file, we will replace all data occurrences by the first_parquet_url:sql_output = sql_output.replace("FROM data", f"FROM '{first_parquet_url}'")And the final output will be:SELECT city FROM 'https://huggingface.co/api/datasets/jamescalam/world-cities-geo/parquet/default/train/0.parquet' WHERE country = 'Albania'Now, it is time to finally execute our generated SQL directly in the dataset, so, lets use once again DuckDB powers:con = duckdb.connect()try:query_result = con.sql(sql_output).df()except Exception as error:print(f"❌ Could not execute SQL query {error=}")finally:con.close()And here we have the results (100 rows):Execution result (100 rows)Let's compare this result with the dataset viewer using the "search function" for Albania country, it should be the same:Search result for Albania countryYou can also get the same result calling directly to the search or filter API:Using /search APIimport requestsAPI_URL = "https://datasets-server.huggingface.co/search?dataset=jamescalam/world-cities-geo&config=default&split=train&query=Albania"def query():response = requests.get(API_URL)return response.json()data = query()Using filter APIimport requestsAPI_URL = 
"https://datasets-server.huggingface.co/filter?dataset=jamescalam/world-cities-geo&config=default&split=train&where=country='Albania'"def query():response = requests.get(API_URL)return response.json()data = query()Our final demo will be a Hugging Face space that looks like this:You can see the notebook with the code here.And the Hugging Face Space here |
https://huggingface.co/blog/setfit-optimum-intel | Blazing Fast SetFit Inference with 🤗 Optimum Intel on Xeon | Daniel Korat, Tom Aarsen, Oren Pereg, Moshe Wasserblat, Ella Charlaix, Abirami Prabhakaran | April 3, 2024 | SetFit is a promising solution for a common modeling problem: how to deal with lack of labeled data for training. Developed with Hugging Face’s research partners at Intel Labs and the UKP Lab, SetFit is an efficient framework for few-shot fine-tuning of Sentence Transformers models. SetFit achieves high accuracy with little labeled data - for example, SetFit outperforms GPT-3.5 in 3-shot prompting and with 5 shot it also outperforms 3-shot GPT-4 on the Banking 77 financial intent dataset.Compared to LLM based methods, SetFit has two unique advantages:🗣 No prompts or verbalisers: few-shot in-context learning with LLMs requires handcrafted prompts which make the results brittle, sensitive to phrasing and dependent on user expertise. SetFit dispenses with prompts altogether by generating rich embeddings directly from a small number of labeled text examples.🏎 Fast to train: SetFit doesn't rely on LLMs such as GPT-3.5 or Llama2 to achieve high accuracy. As a result, it is typically an order of magnitude (or more) faster to train and run inference with.For more details on SetFit, check out our paper, blog, code, and data.Setfit has been widely adopted by the AI developer community, with ~100k downloads per month and ~1500 SetFit models on the Hub, and growing with an average of ~4 models per day!Faster!In this blog post, we'll explain how you can accelerate inference with SetFit by 7.8x on Intel CPUs, by optimizing your SetFit model with 🤗 Optimum Intel. We’ll show how you can achieve huge throughput gains by performing a simple post-training quantization step on your model. This can enable production-grade deployment of SetFit solutions using Intel Xeon CPUs. Optimum Intel is an open-source library that accelerates end-to-end pipelines built with Hugging Face libraries on Intel Hardware. Optimum Intel includes several techniques to accelerate models such as low-bit quantization, model weight pruning, distillation, and an accelerated runtime.The runtime and optimizations included in Optimum Intel take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512), Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs to accelerate models. Specifically, it has built-in BFloat16 (bf16) and int8 GEMM accelerators in every core to accelerate deep learning training and inference workloads. AMX accelerated inference is introduced in PyTorch 2.0 and Intel Extension for PyTorch (IPEX) in addition to other optimizations for various common operators.Optimizing pre-trained models can be done easily with Optimum Intel; many simple examples can be found here.Our blog is accompanied by a notebook for a step-by-step walkthrough.Step 1: Quantize the SetFit Model using 🤗 Optimum IntelIn order to optimize our SetFit model, we will apply quantization to the model body, using Intel Neural Compressor (INC), part of Optimum Intel.Quantization is a very popular deep learning model optimization technique for improving inference speeds. It minimizes the number of bits required to represent the weights and/or activations in a neural network. This is done by converting a set of high-precision numbers into a lower-bit data representations, such as INT8. 
Moreover, quantization can enable faster computations in lower precision.

Specifically, we'll apply post-training static quantization (PTQ). PTQ can reduce the memory footprint and latency for inference while still preserving the accuracy of the model, with only a small unlabeled calibration set and without any training.

Before you begin, make sure you have all the necessary libraries installed and that your version of Optimum Intel is at least 1.14.0, since the functionality was introduced in that version:

```bash
pip install --upgrade-strategy eager optimum[ipex]
```

Prepare a Calibration Dataset

The calibration dataset should be able to represent the distribution of unseen data. In general, preparing 100 samples is enough for calibration. We'll use the rotten_tomatoes dataset in our case, since it's composed of movie reviews, similar to our target dataset, sst2.

First, we'll load 100 random samples from this dataset. Then, to prepare the dataset for quantization, we'll need to tokenize each example. We won't need the "text" and "label" columns, so let's remove them.

```python
from datasets import load_dataset

calibration_set = load_dataset("rotten_tomatoes", split="train").shuffle(seed=42).select(range(100))

def tokenize(examples):
    return tokenizer(examples["text"], padding="max_length", max_length=512, truncation=True)

# The tokenizer comes from the SetFit model body (the model is loaded in Step 2 below).
tokenizer = setfit_model.model_body.tokenizer
calibration_set = calibration_set.map(tokenize, remove_columns=["text", "label"])
```

Run Quantization

Before we run quantization, we need to define the desired quantization process - in our case, static post-training quantization - and use optimum.intel to run the quantization on our calibration dataset:

```python
from optimum.intel import INCQuantizer
from neural_compressor.config import PostTrainingQuantConfig

setfit_body = setfit_model.model_body[0].auto_model
quantizer = INCQuantizer.from_pretrained(setfit_body)

optimum_model_path = "/tmp/bge-small-en-v1.5_setfit-sst2-english_opt"
quantization_config = PostTrainingQuantConfig(approach="static", backend="ipex", domain="nlp")

quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_set,
    save_directory=optimum_model_path,
    batch_size=1,
)
tokenizer.save_pretrained(optimum_model_path)
```

That's it! We now have a local copy of our quantized SetFit model. Let's test it out.

Step 2: Benchmark Inference

In our notebook, we've set up a PerformanceBenchmark class to compute model latency and throughput, as well as an accuracy measure. Let's use it to benchmark our Optimum Intel model alongside two other commonly used methods:

- Using PyTorch and the 🤗 Transformers library with fp32.
- Using the Intel Extension for PyTorch (IPEX) runtime with bf16 and tracing the model using TorchScript.

Load our test dataset, sst2, and run the benchmark using PyTorch and the 🤗 Transformers library:

```python
from datasets import load_dataset
from setfit import SetFitModel

test_dataset = load_dataset("SetFit/sst2")["validation"]

model_path = "dkorat/bge-small-en-v1.5_setfit-sst2-english"
setfit_model = SetFitModel.from_pretrained(model_path)

pb = PerformanceBenchmark(
    model=setfit_model,
    dataset=test_dataset,
    optim_type="bge-small (transformers)",
)
perf_metrics = pb.run_benchmark()
```
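The PerformanceBenchmark class itself lives in the accompanying notebook. For readers who want a feel for what the latency part of such a benchmark could look like, here is a minimal sketch; the function name and the usage example are illustrative, not the notebook's actual implementation.

```python
import time
from statistics import mean, stdev

def measure_latency(predict_fn, sample, warmup=10, runs=100):
    """Time a single-sample prediction, returning (mean_ms, std_ms)."""
    for _ in range(warmup):                      # warm up caches / lazy initialization
        predict_fn(sample)
    timings_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        predict_fn(sample)
        timings_ms.append((time.perf_counter() - start) * 1000)
    return mean(timings_ms), stdev(timings_ms)

# Example usage, assuming `setfit_model` is loaded as in the snippet above:
# mean_ms, std_ms = measure_latency(lambda s: setfit_model.predict([s]), "a gripping movie")
```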
For the second benchmark, we'll use Intel Extension for PyTorch (IPEX) with bf16 precision and TorchScript tracing. To use IPEX, we simply import the IPEX library and apply ipex.optimize() to the target model, which, in our case, is the SetFit (transformer) model body:

```python
import torch
import intel_extension_for_pytorch as ipex

dtype = torch.bfloat16
body = ipex.optimize(setfit_model.model_body, dtype=dtype)
```

For TorchScript tracing, we generate a random sequence based on the model's maximum input length, with tokens sampled from the tokenizer's vocabulary:

```python
tokenizer = setfit_model.model_body.tokenizer
# generate_random_sequences is a helper defined in the accompanying notebook.
d = generate_random_sequences(batch_size=1, length=tokenizer.model_max_length, vocab_size=tokenizer.vocab_size)

body = torch.jit.trace(body, (d,), check_trace=False, strict=False)
setfit_model.model_body = torch.jit.freeze(body)
```

Now let's run the benchmark using our quantized Optimum model. We'll first need to define a wrapper around our SetFit model which plugs in our quantized model body at inference (instead of the original model body). Then, we can run the benchmark using this wrapper.

```python
from optimum.intel import IPEXModel

class OptimumSetFitModel:
    def __init__(self, setfit_model, model_body):
        model_body.tokenizer = setfit_model.model_body.tokenizer
        self.model_body = model_body
        self.model_head = setfit_model.model_head

optimum_model = IPEXModel.from_pretrained(optimum_model_path)
optimum_setfit_model = OptimumSetFitModel(setfit_model, model_body=optimum_model)

pb = PerformanceBenchmark(
    model=optimum_setfit_model,
    dataset=test_dataset,
    optim_type="bge-small (optimum-int8)",
    model_path=optimum_model_path,
    autocast_dtype=torch.bfloat16,
)
perf_metrics.update(pb.run_benchmark())
```

Results

Accuracy vs. latency at batch size = 1:

|  | bge-small (transformers) | bge-small (ipex-bfloat16) | bge-small (optimum-int8) |
|---|---|---|---|
| Model size | 127.32 MB | 63.74 MB | 44.65 MB |
| Accuracy on test set | 88.4% | 88.4% | 88.1% |
| Latency (bs=1) | 15.69 +/- 0.57 ms | 5.67 +/- 0.66 ms | 4.55 +/- 0.25 ms |

When inspecting the performance at batch size 1, there's a 3.45x reduction in latency with our optimized model. Note that this is achieved with virtually no drop in accuracy! It's also worth mentioning that the model size has shrunk by 2.85x. We move on to our main focus, which is the reported throughputs with different batch sizes. Here, the optimization has garnered even greater speedups: when comparing the highest achievable throughput (at any batch size), the optimized model is 7.8x faster than the original transformers fp32 model!

Summary

In this blog post, we have shown how to use the quantization capabilities present in 🤗 Optimum Intel to optimize SetFit models. After running a quick and easy post-training quantization procedure, we've observed that the accuracy level was preserved, while inference throughput increased by 7.8x. This optimization method can be readily applied to any existing SetFit deployment running on Intel Xeon.

References

Lewis Tunstall, Nils Reimers, Unso Eun Seo Jo, Luke Bates, Daniel Korat, Moshe Wasserblat, Oren Pereg, 2022. "Efficient Few-Shot Learning Without Prompts". https://arxiv.org/abs/2209.11055 |
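As a quick numeric check of the ratios quoted in the results above (the values are taken directly from the table; this snippet is purely illustrative):

```python
# Recompute the speedup and compression ratios reported in the results table.
fp32_latency_ms, int8_latency_ms = 15.69, 4.55
fp32_size_mb, int8_size_mb = 127.32, 44.65

print(f"latency speedup:      {fp32_latency_ms / int8_latency_ms:.2f}x")  # ~3.45x
print(f"model size reduction: {fp32_size_mb / int8_size_mb:.2f}x")        # ~2.85x
```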
https://huggingface.co/blog/policy-blog | Public Policy at Hugging Face | Irene Solaiman, Yacine Jernite, Margaret Mitchell | April 8, 2024 | AI Policy at Hugging Face is a multidisciplinary and cross-organizational workstream. Instead of being part of a vertical communications or global affairs organization, our policy work is rooted in the expertise of our many researchers and developers, from Ethics and Society Regulars and the legal team to machine learning engineers working on healthcare, art, and evaluations.

What we work on is informed by our Hugging Face community's needs and experiences on the Hub. We champion responsible openness, investing heavily in ethics-forward research, transparency mechanisms, and platform safeguards, and we translate our lessons into policy. So what have we shared with policymakers?

Policy Materials

The following materials reflect what we have found urgent to stress to policymakers at the time of requests for information, and will be updated as materials are published.

United States of America

Congressional
- September 2023: Clement Delangue (CEO) Senate AI Insight Forum Kickoff Statement
- June 2023: Clement Delangue (CEO) House Committee on Science, Space, and Technology Testimony (written statement and recorded testimony)
- November 2023: Dr. Margaret Mitchell (Chief Ethics Scientist) Senate Insight Forum Statement

Executive
- March 2024: Response to NTIA RFC on Dual Use Foundation Artificial Intelligence Models with Widely Available Model Weights
- February 2024: Response to NIST RFI Assignments Under Sections 4.1, 4.5 and 11 of the Executive Order Concerning Artificial Intelligence
- December 2023: Response to OMB RFC Agency Use of Artificial Intelligence
- November 2023: Response to U.S. Copyright Office Notice of Inquiry on Artificial Intelligence and Copyright
- June 2023: Response to NTIA RFC on AI Accountability
- September 2022: Response to NIST [AI Risk Management Framework](https://huggingface.co/datasets/huggingface/policy-docs/resolve/main/2022_NIST_RMF_Response.pdf)
- June 2022: Response to NAIRR Implementing Findings from the National Artificial Intelligence Research Resource Task Force

European Union
- January 2024: Response to Digital Services Act, Transparency Reports
- July 2023: Comments on the Proposed AI Act

United Kingdom
- November 2023: Irene Solaiman (Head of Global Policy) oral evidence to UK Parliament House of Lords (transcript)
- September 2023: Response to UK Parliament RFI: LLMs
- June 2023: Response to No 10: UK RFI: AI Regulatory Innovation White Paper |
https://huggingface.co/blog/cloudflare-workers-ai | Bringing serverless GPU inference to Hugging Face users | Philipp Schmid, Jeff Boudier, Rita Kozlov, Nikhil Kothari | April 2, 2024 | Today, we are thrilled to announce the launch of Deploy on Cloudflare Workers AI, a new integration on the Hugging Face Hub. Deploy on Cloudflare Workers AI makes using open models as a serverless API easy, powered by state-of-the-art GPUs deployed in Cloudflare edge data centers. Starting today, we are integrating some of the most popular open models on Hugging Face into Cloudflare Workers AI, powered by our production solutions, like Text Generation Inference. With Deploy on Cloudflare Workers AI, developers can build robust Generative AI applications without managing GPU infrastructure and servers, and at a very low operating cost: only pay for the compute you use, not for idle capacity.

Generative AI for Developers

This new experience expands upon the strategic partnership we announced last year to simplify the access and deployment of open Generative AI models. One of the main problems developers and organizations face is the scarcity of GPU availability and the fixed costs of deploying servers to start building. Deploy on Cloudflare Workers AI offers an easy, low-cost solution to these challenges, providing serverless access to popular Hugging Face models with a pay-per-request pricing model.

Let's take a look at a concrete example. Imagine you develop a RAG application that gets ~1000 requests per day, with an input of 1k tokens and an output of 100 tokens, using Meta Llama 2 7B. The LLM inference production costs would amount to about $1 a day.

"We're excited to bring this integration to life so quickly. Putting the power of Cloudflare's global network of serverless GPUs into the hands of developers, paired with the most popular open source models on Hugging Face, will open the doors to lots of exciting innovation by our community around the world," said John Graham-Cumming, CTO, Cloudflare.

How it works

Using Hugging Face models on Cloudflare Workers AI is super easy. Below, you will find step-by-step instructions on how to use Hermes 2 Pro on Mistral 7B, the newest model from Nous Research. You can find all available models in this Cloudflare Collection.

Note: You need access to a Cloudflare account and API token.

You can find the Deploy on Cloudflare option on all available model pages, including models like Llama, Gemma or Mistral. Open the "Deploy" menu, and select "Cloudflare Workers AI" - this will open an interface that includes instructions on how to use this model and send requests.

Note: If the model you want to use does not have a "Cloudflare Workers AI" option, it is currently not supported. We are working on extending the availability of models together with Cloudflare. You can reach out to us at api-enterprise@huggingface.co with your request.

The integration can currently be used via two options: using the Workers AI REST API or directly in Workers with the Cloudflare AI SDK. Select your preferred option and copy the code into your environment. When using the REST API, you need to make sure the ACCOUNT_ID and API_TOKEN variables are defined (a minimal request sketch follows at the end of this post). That's it! Now you can start sending requests to Hugging Face models hosted on Cloudflare Workers AI. Make sure to use the correct prompt & template expected by the model.

We're just getting started

We are excited to collaborate with Cloudflare to make AI more accessible to developers.
We will work with the Cloudflare team to make more models and experiences available to you! |
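To make the REST API option above concrete, here is a minimal, hedged sketch of such a request in Python. The endpoint pattern and the model identifier are assumptions based on Cloudflare's general `accounts/{account_id}/ai/run/{model}` scheme; copy the exact URL and model id from the "Cloudflare Workers AI" code shown in the model page's "Deploy" menu.

```python
import os
import requests

# Credentials are read from the environment; set these before running.
ACCOUNT_ID = os.environ["CLOUDFLARE_ACCOUNT_ID"]
API_TOKEN = os.environ["CLOUDFLARE_API_TOKEN"]

# Assumed model identifier -- verify against the snippet on the model page.
MODEL = "@hf/nousresearch/hermes-2-pro-mistral-7b"

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
headers = {"Authorization": f"Bearer {API_TOKEN}"}
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is serverless GPU inference?"},
    ]
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json())
```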
All the Hugging Face blog posts until May 12, 2024. Includes URLs, headlines, dates, authors, and texts.
- Downloads last month: 52
- Size of downloaded dataset files: 2.41 MB
- Size of the auto-converted Parquet files: 2.41 MB
- Number of rows: 381
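For completeness, a minimal usage sketch with 🤗 Datasets. The repository id below is a placeholder, and the column names are inferred from the card's description, so verify both against the actual dataset page.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual id on the Hub.
ds = load_dataset("<namespace>/hf-blog-posts", split="train")

print(ds.num_rows)       # the card above reports 381 rows
print(ds.column_names)   # expected to include URL, headline, authors, date, and article text fields
print(ds[0])
```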