title | link | article |
---|---|---|
Better RAG 2: Single-shot is not good enough | https://hf.co/blog/hrishioa/retrieval-augmented-generation-2-walking | Conclusion |
Better RAG 1: Advanced Basics | https://hf.co/blog/hrishioa/retrieval-augmented-generation-1-basics | An interactive demo |
MTEB Leaderboard : User guide and best practices | https://hf.co/blog/lyon-nlp-group/mteb-leaderboard-best-practices | Bibliography |
Revolutionizing Video Transcription: Unveiling Gemma-2b-it and Langchain in the Era of Transformers | https://hf.co/blog/Andyrasika/langchain-whisper | Conclusion |
Towards actively reasoning LLM systems | https://hf.co/blog/KnutJaegersberg/active-reasoning | References |
SemScore: Evaluating LLMs with Semantic Similarity | https://hf.co/blog/g-ronimo/semscore | Summary |
Open-Source SORA Has Arrived! Training Your Own SORA Model! | https://hf.co/blog/lyogavin/train-your-own-sora | Hardware Requirements |
Large Language Models in Quest for Adventure | https://hf.co/blog/crazyjeannot/llms-mapping-adventure | References |
Streamline Computer Vision Workflows with Hugging Face Transformers and FiftyOne | https://hf.co/blog/jamarks/fiftyone-transformers-integration | Resources |
Deploying 🤗 Hub models in Vertex AI | https://hf.co/blog/alvarobartt/deploy-from-hub-to-vertex-ai | References |
Genie: Generative Interactive Environments | https://hf.co/blog/vladbogo/genie-generative-interactive-environments | Conclusion |
Molecule retrieval and editing using multimodal text-structure representations | https://hf.co/blog/nfsrulesFR/llms-for-molecule-editing | Going forward |
Breaking resolution curse of vision-language models | https://hf.co/blog/visheratin/vlm-resolution-curse | Acknowledgements |
🌌 Analysis of Spaces in Hugging Face | https://hf.co/blog/Weyaxi/huggingface-spaces-analysis | 🏛️ License |
Fast, High-Fidelity LLM Decoding with Regex Constraints | https://hf.co/blog/vivien/llm-decoding-with-regex-constraints | Conclusion |
Rephrasing the Web A Recipe for Compute and Data-Efficient Language Modeling | https://hf.co/blog/vladbogo/rephrasing-the-web | Conclusion |
Exploring a Public Domain dataset with Visual Topic Modeling | https://hf.co/blog/charlesdedampierre/exploring-public-domain-dataset | 3.Exploring bias in the data |
Navigating Complexity with Elegance: The P-FAF Approach to Fractal Word Embeddings | https://hf.co/blog/TuringsSolutions/navigating-complexity-with-elegance-the-p-faf-appr |
<p>
Abstract:
Traditional geometric fractals, known for their self-similar patterns at various scales, encounter exponential growth in computational complexity when adapted to data representation tasks. This paper elucidates the Probabilistic Fractal Activation Function (P-FAF) mechanism, a novel approach in natural language processing that leverages fractal mathematics to generate dynamic word embeddings. P-FAF mitigates the exploding calculation complexity typical of geometric fractal methods through probabilistic blending and dimensionality control, offering a scalable solution for capturing the multifaceted nature of language.</p>
<ol>
<li><p>Introduction:
Word vectorization techniques like word2vec and GloVe have revolutionized natural language processing (NLP) by providing a way to represent words as high-dimensional numeric vectors. However, these methods offer static, singular representations that fail to capture the dynamic and context-dependent nature of language. The Probabilistic Fractal Activation Function (P-FAF) introduces a flexible, multifaceted approach to word representation, inspired by the self-similar nature of fractals. Unlike traditional geometric fractals, P-FAF avoids exponential computational growth through a novel application of probabilistic methods and dimensionality controls.</p>
</li>
<li><p>Background:
Fractals are geometric figures, each part of which has the same statistical character as the whole. They are often exactly or statistically self-similar across scales. While fractals have been explored in various fields for modeling phenomena with many scales of size or time, their application in NLP has been limited due to the complexity of calculations required to generate and manipulate them.</p>
</li>
<li><p>P-FAF Formulation:
The core of P-FAF's innovation lies in its formulation. Formally, given an input word x, P-FAF defines the embedding f(x) as:</p>
</li>
</ol>
<p>f(x) = ∑(p_i * f_i(x^(1/d_i)))</p>
<p>Where p_i denotes the probability weight for the i-th fractal function f_i, and d_i refers to its fractional dimension. Intuitively, each f_i warps the word x into a particular fractal landscape, revealing different attributes at varying resolutions. The probabilities p_i then blend these fractalized embeddings to produce the final representation. </p>
<ol start="4">
<li>Avoiding Exploding Complexity:
The traditional challenge with geometric fractals, such as the Mandelbrot set or the Sierpinski triangle, is the exploding complexity arising from their recursive nature. P-FAF circumvents this issue through three key strategies:</li>
</ol>
<ul>
<li><p>Probabilistic Blending: By integrating multiple fractal embeddings probabilistically, P-FAF maintains computational efficiency. This approach ensures that the complexity of the embedding space grows linearly rather than exponentially with the number of fractal functions employed.</p></li>
<li><p>Dimensionality Control: The use of fractional dimensions (d_i) allows for fine-tuning the level of detail represented, enabling the model to focus computational resources on the most semantically rich aspects of the embedding space.</p></li>
<li><p>Optimized Fractal Selection: Employing optimization algorithms for selecting fractal functions and their parameters, P-FAF ensures that only the most effective fractal transformations for a given task are utilized, minimizing unnecessary computational expenditure.</p></li>
</ul>
<ol start="5">
<li><p>Empirical Validation:
Extensive evaluations demonstrate P-FAF's superior ability to encode nuanced linguistic properties. By integrating P-FAF into neural architectures for tasks such as sentiment analysis and metaphor detection, significant improvements in accuracy were observed, highlighting the method's practical efficacy and computational tractability.</p></li>
</ol>
<ol start="6">
<li>Conclusion:
P-FAF represents a significant leap forward in word vectorization, offering a dynamic and contextually aware approach to language representation that scales efficiently. By leveraging the natural fractality of language and employing probabilistic methods to control computational complexity, P-FAF paves the way for the next generation of NLP models that can deeply understand the intricacies of human language with unparalleled precision and efficiency.</li>
</ol>
<p>References:</p>
<p>Barnsley, M. F. (1988). Fractals Everywhere. Academic Press.
Mandelbrot, B. B. (1983). The Fractal Geometry of Nature. W. H. Freeman and Co.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.
Pennington, J., Socher, R., & Manning, C. D. (2014). GloVe: Global Vectors for Word Representation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP).</p>
|
Fine-tuning a large language model on Kaggle Notebooks (or even on your own computer) for solving real-world tasks | https://hf.co/blog/lmassaron/fine-tuning-llms-on-kaggle-notebooks | Conclusions |
Transformers and Quadrant: Revolutionizing Data Integration for NLP Tasks | https://hf.co/blog/Andyrasika/qdrant-transformers | Conclusion |
Reformatted Alignment | https://hf.co/blog/vladbogo/reformatted-alignment | Conclusion |
Rank-Stabilized LoRA: Unlocking the Potential of LoRA Fine-Tuning | https://hf.co/blog/damjan-k/rslora | Conclusion |
Guide : W-Okada, realtime voice cloning | https://hf.co/blog/Lenylvt/w-okada | Update : |
Detecting LLM-Generated Text with Binoculars | https://hf.co/blog/dmicz/binoculars-text-detection | Conclusion |
Beyond Traditional Fine-tuning: Exploring Advanced Techniques to Mitigate LLM Hallucinations | https://hf.co/blog/Imama/pr |
<p>
Large language models (LLMs) have revolutionized text generation, but their tendency to produce factually incorrect and nonsensical outputs, known as "hallucinations," remains a major concern. Yesterday I read an info-packed paper titled <b>"A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models"</b>, which covers all the prominent methods used for hallucination mitigation. So let's unpack what it has.</p>
<h3>Hallucination</h3>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63e243c2419922d5a6d7a8dd/liU9nFA37kaCnhQZVquZ4.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63e243c2419922d5a6d7a8dd/liU9nFA37kaCnhQZVquZ4.png"/></a></p>
<p>Hallucination in LLMs refers to the models' tendency to generate text that appears factual but is entirely fabricated or not grounded in reality, leading to decreased model accuracy, misleading insights, biased and contradictory outputs, unrealistic narratives, and more. So in simple terms, hallucination is when an LLM tries to fluke its way through when it doesn't know the answer (just kidding 😃😃).</p>
<h3>Techniques for hallucination mitigation</h3>
<p>Researchers have proposed a diverse range of techniques to mitigate hallucinations in LLMs. The survey divides hallucination mitigation techniques into two types: <u>Prompt Engineering and Developing Methods</u>.
Prompt Engineering is further divided into three parts: RAG, Self-Refinement through Feedback and Reasoning, and Prompt Tuning.</p>
<p><b>Retrieval-Augmented Generation (RAG) </b></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/63e243c2419922d5a6d7a8dd/PWkGf9zOHvtyugfuLKuXJ.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/63e243c2419922d5a6d7a8dd/PWkGf9zOHvtyugfuLKuXJ.png"/></a></p>
<p>RAG, short for Retrieval-Augmented Generation, is a technique that combines retrieval-based and generative methods to improve the performance of LLMs. The retrieval module searches an external source for relevant information, and the generation module uses the retrieved information to produce the LLM's response (a minimal retrieve-then-generate sketch follows the list below).
Many techniques fall under RAG. Some of them are:</p>
<ol>
<li><i>LLM Augmentor</i> - modifies internal parameters to adapt the LLM to specific tasks by adding small modules to the LLM architecture and then fine-tuning them for the target task.</li>
<li><i>FreshPrompt</i> - retrieves external information relevant to the user query from an up-to-date search engine and uses it to create the LLM response.</li>
<li><i>Knowledge Retrieval</i> - the LM draws on relevant knowledge from an external source, using keyword search and embedding-based retrieval to find the information needed to produce the response.</li>
<li><i>Decompose and Query</i> - this framework breaks the user query into smaller questions, and the LLM generates a relevant response for each sub-question.</li>
<li><i>High Entropy Word Spotting and Replacement</i> - improves the creativity and diversity of LLM outputs by identifying high-entropy words and replacing them using synonym search, random sampling or knowledge replacement.</li>
</ol>
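<p>As a rough sketch of the general retrieve-then-generate pattern (not any specific method from the survey), the two modules can be wired together as follows; the corpus, the embedding model, and the <code>llm_generate</code> helper are hypothetical stand-ins:</p>
<pre><code class="language-python">from sentence_transformers import SentenceTransformer, util

# Tiny in-memory "external source" for illustration.
corpus = [
    "The Eiffel Tower was completed in 1889.",
    "Mount Everest is the highest mountain above sea level.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = embedder.encode(corpus, convert_to_tensor=True)

def rag_answer(question: str, llm_generate) -> str:
    # Retrieval module: pick the most relevant passage for the query.
    q_emb = embedder.encode(question, convert_to_tensor=True)
    best = util.cos_sim(q_emb, corpus_emb).argmax().item()
    # Generation module: condition the LLM on the retrieved evidence.
    prompt = f"Context: {corpus[best]}\nQuestion: {question}\nAnswer:"
    return llm_generate(prompt)
</code></pre>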
<p><b>Self Refinement through Feedback and Reasoning</b></p>
<p>Self-refinement through Feedback and Reasoning is a novel approach for large language models (LLMs) to improve their outputs iteratively. It leverages feedback-based learning and reasoning abilities to achieve better factuality, consistency, and relevance in generated text.
There are many techniques used for Self-Refinement through Feedback and Reasoning, such as <u>ChatProtect, the Self-Reflection Method, Structured Comparative Reasoning, Chain of Verification (CoVe), Chain of Natural Language Inference (CoNLI), etc.</u></p>
<p><b>Prompt Tuning</b></p>
<p>It is the practice of tailoring prompts to guide LLMs to get desired outputs. It avoids the need for extensive retraining, making it a powerful and efficient tool.</p>
<p><b> Developing Methods </b></p>
<p>Many developing methods can also be effective for mitigating LLM hallucinations. Some of them are:</p>
<p>1. <u>Context-Aware Decoding (CAD)</u>: it combats LLM hallucinations by integrating semantic context vectors into the decoding process. These vectors capture the meaning of the entire context, not just a specific word (as in the attention mechanism). CAD is particularly effective at overriding a model's prior knowledge when it contradicts the provided context, leading to substantial improvements in tasks where knowledge conflict is possible.</p>
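<p>One common way to implement this idea (following the contrastive weighting described in the original CAD paper, which may differ slightly from the summary above) is to amplify the with-context distribution against the without-context one. A simplified sketch, assuming a Hugging Face-style causal LM and a hypothetical weight <code>alpha</code>:</p>
<pre><code class="language-python">import torch

@torch.no_grad()
def cad_next_token_logits(model, ids_with_context, ids_without_context, alpha=0.5):
    # Logits conditioned on the provided context plus the query.
    logits_ctx = model(ids_with_context).logits[:, -1, :]
    # Logits conditioned on the query alone (the model's prior knowledge).
    logits_plain = model(ids_without_context).logits[:, -1, :]
    # Amplify what the context adds relative to the prior: (1 + a) * ctx - a * plain.
    return (1 + alpha) * logits_ctx - alpha * logits_plain
</code></pre>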
<p>2. <u>Decoding by Contrasting Layers (DoLa)</u>: a simple decoding strategy designed to mitigate hallucinations in pre-trained LLMs without the need for external knowledge conditioning or additional finetuning. DoLa obtains the next-token distribution by contrasting the logit differences between later and earlier layers projected into the vocabulary space. It enhances the identification of factual knowledge and minimizes the generation of incorrect facts.</p>
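<p>A similarly simplified sketch of the layer contrast, again assuming a Hugging Face-style model; real DoLa selects the premature layer dynamically and applies a plausibility constraint, both omitted here:</p>
<pre><code class="language-python">import torch

@torch.no_grad()
def dola_next_token_scores(model, input_ids, early_layer=16):
    out = model(input_ids, output_hidden_states=True)
    final_logp = torch.log_softmax(out.logits[:, -1, :], dim=-1)
    # Project an earlier layer's hidden state into the vocabulary space via the LM head.
    early_hidden = out.hidden_states[early_layer][:, -1, :]
    early_logp = torch.log_softmax(model.get_output_embeddings()(early_hidden), dim=-1)
    # Contrast "mature" vs. "premature" predictions to surface factual knowledge.
    return final_logp - early_logp
</code></pre>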
<p>3. <u>Supervised fine-tuning (SFT)</u>: a technique to adapt a pre-trained LLM to a target task by fine-tuning the LLM's parameters on labeled data for that task. As only a subset of parameters is updated, SFT usually requires less computational power and training time than full fine-tuning.</p>
<p>The journey to harnessing the full potential of LLMs requires tackling the persistent issue of hallucinations. While traditional fine-tuning has its limitations, exciting new techniques like RAG, Self-Refinement, and Context-Aware Decoding offer promising solutions. As we delve deeper into these methods, questions arise:</p>
<p>Which techniques hold the most potential for specific research domains or tasks?
Can we combine these methods for even more robust hallucination mitigation?</p>
<p>These are just a few sparks to ignite the discussion. Share your thoughts, experiences, and questions in the comments below! Let's work together to build a future where LLMs are not just powerful, but also reliable and trustworthy partners in our endeavors.</p>
|
Humor Understanding Multi-task Optimization & Ranking | https://hf.co/blog/TuringsSolutions/humortest | PFAF+Al Bundy |
Probabilistic Fractal Activation Function (P-FAF) and Its Advantages Over Traditional Word Vectorization | https://hf.co/blog/TuringsSolutions/pfafresearch | JediPhi: |
🥐CroissantLLM: A Truly Bilingual French-English Language Model | https://hf.co/blog/manu/croissant-llm-blog |
<p>
We are thrilled to introduce CroissantLLM, a small but capable 1.3-billion-parameter language model trained on 3T tokens that is fully open and truly bilingual! The goal is to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. Our approach is rooted in transparency: along with the model and various checkpoints, we release new high-quality French datasets sourced from legal, administrative, cultural, business, scientific and translation data, as well as FrenchBench, a novel evaluation benchmark to assess LLM performance in French!</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/Xm2_Jlu5g2O6GtRcUa8HW.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/Xm2_Jlu5g2O6GtRcUa8HW.png"/></a></p>
<h1 id="the-data">The data</h1>
<p>Most recent models have been trained on dominantly English corpora, leading to performance drops in other languages and to English-centered cultural bias. With CroissantLLM, we aim to train a model in which English is not the dominant language and go for a 1:1 ratio of English and French data!</p>
<p>One of the challenges was to gather sufficient amounts of high-quality data in French. We collected, filtered and cleaned data from multiple varied sources, in order to target various domains (legal, administrative, cultural, scientific, etc.), or cover different text modalities (speech transcriptions, movie subtitles, encyclopedias, forums, webpages)… All data collected is explicitly listed in the technical report, falls under permissive licenses, and is shared with the rest of the project artefacts.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/CHXyHwZmreVWbwy4yP6d7.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/CHXyHwZmreVWbwy4yP6d7.png"/></a></p>
<p>In total, we collect more than 303 billion tokens of monolingual French data (1.3 terabytes), as well as 36 billion tokens of high-quality French-English translation data, and aggregate that with English and code data! We craft our final 3-trillion-token dataset such that we obtain equal amounts of French and English data after upsampling.</p>
<p>For reference, training an LLM on 3 trillion tokens is huge! It is larger than the number of tokens seen during training by the Llama2 models, and almost 10 times as much as what is done in the Bloom models, making CroissantLLM the model trained on the most French data to this day!</p>
<h1 id="the-model">The model</h1>
<p>CroissantLLM is a 1.3-billion-parameter model with a Llama model architecture. Selecting this model size stems from the realization that one of the largest bottlenecks to widespread model adoption is the difficulty of getting models to run quickly on consumer-grade hardware. In fact, looking at Hugging Face downloads, the most downloaded models are not the best performing (Llama2-70B, Mixtral 8x7B) but rather the smaller ones (Llama2-7B, Mistral 7B), which are easier and cheaper to serve and finetune.</p>
<p>With its 1.3B parameters, CroissantLLM runs extremely quickly on lower-end GPU servers, enabling high throughput and low latency, and can also run on CPUs or even mobile devices at decent speeds!</p>
<p>The tradeoff, obviously, is that CroissantLLM will not display the same generalist capabilities in reasoning, math and coding that larger models have, but it will be perfect for more specific industrial applications, translation, or even chat use cases in which the big guns are not always needed!</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/cuzEOKpZLDbHRipGFRaeX.gif" rel="nofollow"><img alt="image/gif" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/cuzEOKpZLDbHRipGFRaeX.gif"/></a></p>
<h1 id="the-benchmark">The benchmark</h1>
<p>To assess the model's performance beyond English, the team introduces FrenchBench, a novel benchmark encompassing various classification and generation tasks that measure LLM performance in French. FrenchBench Gen includes tasks like title generation, summarization, question generation, and question answering, relying on the high-quality French question-answering dataset FQuaD. The Multiple Choice section of FrenchBench focuses on reasoning, factual knowledge, and linguistic capabilities.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/Z15TPZKUuiFAwBD0qyQrG.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/Z15TPZKUuiFAwBD0qyQrG.png"/></a><em>French-Bench Gen Results (5-shot)</em></p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/mTfVctKyJKkzSyj_DcGAx.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/mTfVctKyJKkzSyj_DcGAx.png"/></a><em>French-Bench Multiple Choice Results (5-shot)</em></p>
<p>CroissantLLM is the best-performing model of its size in French, edging out models up to 3 times bigger (e.g., Bloom 3B) on most tasks.</p>
<p>We also assess the model on English benchmarks and match or surpass the best models of its size!</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/JW3S71PJF_V7u-6Aum6Cu.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/JW3S71PJF_V7u-6Aum6Cu.png"/></a><em>English Benchmarks (5-shot)</em></p>
<h1 id="the-applications">The applications</h1>
<p>So far, we have only talked about the base model! However, it is now understood that base models are only the foundation of most modern LLM systems, and that to extract the best performance it is important to run a second phase of training called supervised fine-tuning. We finetune CroissantLLM on chat data, including some ChatGPT interactions, and assess CroissantLLMChat's capabilities on various French and English tasks such as MT-Bench, translation, and French Trivia…</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/yGkp2nwD684Tgg2tS_IQe.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/yGkp2nwD684Tgg2tS_IQe.png"/></a><em>MT-Bench Performance in English and French</em></p>
<p>MT-Bench assesses the capabilities of LLMs across eight domains. CroissantLLMChat exhibits good performance on French-understanding tasks like Writing and Roleplay, surpassing models of the same size. It also shows good general knowledge in STEM and humanities.</p>
<p>One question this work attempts to tackle is whether training on bilingual data merely augments a model's language understanding and writing capabilities in another language, or whether it also equips the model with novel knowledge and different cultural biases. We evaluate French cultural knowledge on a Trivia task, consisting of questions about France-related topics asked in English. The results on French Trivia show that pre-training on very large corpora induces significantly higher knowledge capabilities.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/c3fJpbWmAO0hWhfPr5pEB.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/c3fJpbWmAO0hWhfPr5pEB.png"/></a><em>French Trivia Results</em></p>
<p>The benefits of training on French and English data at a 1:1 ratio, and on parallel data, can also be seen on translation tasks. In fact, CroissantLLM outperforms larger models like Llama and Mistral 7B in few-shot settings, and is on par with NLLB 1.3B, the state-of-the-art specialized translation model of the same size, while retaining its generalist chat capabilities.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/WvUODk-DJDZRzbkJCcizm.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/WvUODk-DJDZRzbkJCcizm.png"/></a><em>Translation Results</em></p>
<h1 id="the-transparency">The transparency</h1>
<p>State-of-the-art models, both proprietary and open-weight, are often designed and trained by heavily investor-backed companies that aim to retain a moat by keeping their training data mix and strategy secret, hindering the rest of the field's ability to fully study and understand these models.</p>
<p>Additionally, there are ongoing debates about who actually owns the data used to train these language models, with legal implications becoming more prominent. Recent political discussions, such as the EU AI Act and US Senate hearings, highlight the growing need for transparency in AI development to ensure legal compliance and build trust with users.</p>
<p>The CroissantLLM initiative was designed from the start with transparency in mind. We validate 81% of the transparency criteria of the <a href="https://crfm.stanford.edu/fmti/" rel="nofollow">FMTI</a> framework, far beyond the scores of even most open initiatives, by releasing the data, the models, the training procedure and all the code used to curate the data and train the model.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/bYvT4D8kmgtmE7gWJRWRS.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/bYvT4D8kmgtmE7gWJRWRS.png"/></a></p>
<h1 id="the-science">The science</h1>
<p>Beyond being a performant model, CroissantLLM and the associated artifacts also aim to support further research on multilingual language models, the impact of pretraining data on internal knowledge, and the dynamics of models trained well past the Chinchilla-optimal threshold. This work will lead to further publications on model memorization and the split capacity of bilingual language models.</p>
<h1 id="links">Links</h1>
<p>The models, datasets, training code and evaluation benchmarks are fully open-sourced; a minimal usage sketch follows the links below.</p>
<ul>
<li><a href="https://huggingface.co/croissantllm/CroissantLLMBase">CroissantLLM Base</a> and <a href="https://huggingface.co/croissantllm/CroissantLLMChat-v0.1">CroissantLLMChat</a></li>
<li><a href="https://arxiv.org/abs/2402.00786" rel="nofollow">Technical Report</a></li>
</ul>
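<p>As a quick-start sketch (not taken from the technical report), the base model can be loaded with the <code>transformers</code> library like any causal LM; the prompt and generation settings below are arbitrary:</p>
<pre><code class="language-python">from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("La capitale de la France est", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
</code></pre>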
<p>CroissantLLM also runs on lower-end mobile devices, and we will release the APK soon!
<a href="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/71ERp-OOaxsnP38s-y0Va.gif" rel="nofollow"><img alt="image/gif" src="https://cdn-uploads.huggingface.co/production/uploads/60f2e021adf471cbdf8bb660/71ERp-OOaxsnP38s-y0Va.gif"/></a></p>
<h1 id="acknowledgments">Acknowledgments</h1>
<p>This work is a collaboration of academic and industrial partners. On the academic side, core authors are affiliated with CentraleSupélec (Université Paris Saclay) and Instituto Superior Técnico de Lisboa, and other contributors are linked to Sorbonne Université and Imperial College London. On the industrial side, core authors receive funding from respectively Illuin Technology (Paris), Unbabel (Lisboa), Equall (New York, Lisboa, Paris). Training compute is mainly obtained on the Jean Zay supercomputer operated by GENCI IDRIS through compute grant 2023-AD011014668R1.</p>
|
Quantization of Transformer Models with Neural Compressor | https://hf.co/blog/Andyrasika/hf-nc-quantization | Conclusion |
Introduction to LLE | https://hf.co/blog/lbourdois/lle | Conclusion and outlook |
Serverless Image Similarity with Upstash Vector and Huggingface Models, Datasets and Spaces | https://hf.co/blog/omerXfaruq/serverless-image-similarity-with-upstash-vector | Conclusion |
Phinetuning 2.0 | https://hf.co/blog/g-ronimo/phinetuning | Sample conversations with fine-tuned Phi-2 |
Building autograd engine tinytorch 03 | https://hf.co/blog/joey00072/building-autograd-engine-tinytorch-03 | Clean up & refactor |
Building autograd engine tinytorch 02 | https://hf.co/blog/joey00072/building-autograd-engine-tinytorch-02 | Connecting Graph |
💻Create a Web Interface for your LLM in Python | https://hf.co/blog/Alex1337/create-a-web-interface-for-your-llm-in-python |
<p>
In this tutorial we will create a simple chatbot web interface and deploy it using an open-source Python library called <a href="https://github.com/Avaiga/taipy" rel="nofollow">Taipy</a>.</p>
<p align="center">
<img alt="Render of the app" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/Ca237RlHtAJ3XbpTNIzyJ.png" width="100%"/>
</p>
<p>Here we will use HuggingFace's API with google/flan-t5-xxl. This tutorial can easily
be adapted to other LLMs.</p>
<h1 id="step-1-install-requirements">Step 1: Install Requirements</h1>
<p>Create a <code>requirements.txt</code> file with the following content:</p>
<pre><code class="language-bash">taipy==3.0.0
</code></pre>
<p>Install the requirements using pip in a terminal:</p>
<pre><code class="language-bash">pip install -r requirements.txt
</code></pre>
<h1 id="step-2-imports">Step 2: Imports</h1>
<p>Create a <code>main.py</code> file with the following imports:</p>
<pre><code class="language-python">import requests
from taipy.gui import Gui, State, notify
</code></pre>
<h1 id="step-3-initialize-variables">Step 3: Initialize variables</h1>
<p>Initialize the following variables in the main.py file:</p>
<pre><code class="language-python">context = "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello, who are you?\nAI: I am an AI created by Google. How can I help you today? "
conversation = {
    "Conversation": ["Who are you?", "Hi! I am FLAN-T5 XXL. How can I help you today?"]
}
current_user_message = ""
</code></pre>
<ul>
<li><code>context</code> is the initial context for the conversation, the LLM will use this to understand what behaviour is expected from it.</li>
<li><code>conversation</code> is a dictionary that will store the conversation history to be displayed</li>
<li><code>current_user_message</code> is the current message that the user is typing</li>
</ul>
<h1 id="step-4-create-a-function-to-generate-responses">Step 4: Create a function to generate responses</h1>
<p><strong>This step is the one that needs to be adapted if you want to
use a different LLM.</strong></p>
<p>Initialize the HuggingFace variables with your Access Token. You can find
your Access Token <a href="https://huggingface.co/settings/tokens">here</a>.</p>
<pre><code class="language-python">API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xxl"
headers = {"Authorization": "Bearer [YOUR ACCESS TOKEN]"}
</code></pre>
<p>Create a function that takes as input a string <code>prompt</code> which
is the user message and returns a string which is the response from the LLM.</p>
<pre><code class="language-python">def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()


def request(state: State, prompt: str) -> str:
    """
    Send a prompt to the HuggingFace API and return the response.

    Args:
        - state: The current state of the app.
        - prompt: The prompt to send to the API.

    Returns:
        The response from the API.
    """
    output = query(
        {
            "inputs": prompt,
        }
    )
    print(output)
    return output[0]["generated_text"]
</code></pre>
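<p>Note that while the hosted model is still loading, the Inference API may return an error payload (a dict) instead of a list of generations. A slightly more defensive variant of <code>request</code>, reusing the same <code>query</code> helper, could look like this (a sketch, not part of the original tutorial):</p>
<pre><code class="language-python">def request(state: State, prompt: str) -> str:
    output = query({"inputs": prompt})
    # The API returns a dict (typically with an "error" key) while the model warms up.
    if isinstance(output, dict):
        notify(state, "error", output.get("error", "The model is not ready yet, please retry."))
        return ""
    return output[0]["generated_text"]
</code></pre>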
<h1 id="step-5-create-a-function-to-add-the-new-messages-to-the-conversation">Step 5: Create a function to add the new messages to the conversation</h1>
<p>Create a function that gets triggered when the user sends a
message. This function will add the user's message to the context,
send it to the API, get the response, add the response to the
context and to the displayed conversation.</p>
<pre><code class="language-python">def send_message(state: State) -> None:
    """
    Send the user's message to the API and update the conversation.

    Args:
        - state: The current state of the app.
    """
    # Add the user's message to the context
    state.context += f"Human: \n {state.current_user_message}\n\n AI:"
    # Send the user's message to the API and get the response
    answer = request(state, state.context).replace("\n", "")
    # Add the response to the context for future messages
    state.context += answer
    # Update the conversation
    conv = state.conversation._dict.copy()
    conv["Conversation"] += [state.current_user_message, answer]
    state.conversation = conv
    # Clear the input field
    state.current_user_message = ""
</code></pre>
<h1 id="step-6-create-the-user-interface">Step 6: Create the User Interface</h1>
<p>In Taipy, one way to define pages is to use Markdown strings. Here we use a
<a href="https://docs.taipy.io/en/latest/manuals/gui/viselements/table/" rel="nofollow">table</a> to display the
<code>conversation</code> dictionary and an
<a href="https://docs.taipy.io/en/latest/manuals/gui/viselements/input/" rel="nofollow">input</a> so that the
user can type their message. When the user presses enter,
the <code>send_message</code> function is triggered.</p>
<pre><code class="language-python">page = """
<|{conversation}|table|show_all|width=100%|>
<|{current_user_message}|input|label=Write your message here...|on_action=send_message|class_name=fullwidth|>
"""
</code></pre>
<h1 id="step-7-run-the-application">Step 7: Run the application</h1>
<p>Finally we run the application:</p>
<pre><code class="language-python">if __name__ == "__main__":
    Gui(page).run(dark_mode=True, title="Taipy Chat")
</code></pre>
<p>And here is the result:</p>
<p align="center">
<img alt="Render of the app" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/hDdPAMkhzaHLr8R0CvdV6.png" width="80%"/>
</p>
<h1 id="step-8-styling">Step 8: Styling</h1>
<p>The app's style is Taipy's default stylekit. We are going to
make some changes so that it looks more like a chat app.</p>
<p>First in a <code>main.css</code> file, create styles to display user and
AI messages differently:</p>
<pre><code class="language-css">.gpt_message td {
    margin-left: 30px;
    margin-bottom: 20px;
    margin-top: 20px;
    position: relative;
    display: inline-block;
    padding: 20px;
    background-color: #ff462b;
    border-radius: 20px;
    max-width: 80%;
    box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
    font-size: large;
}

.user_message td {
    margin-right: 30px;
    margin-bottom: 20px;
    margin-top: 20px;
    position: relative;
    display: inline-block;
    padding: 20px;
    background-color: #140a1e;
    border-radius: 20px;
    max-width: 80%;
    float: right;
    box-shadow: 0 4px 8px 0 rgba(0, 0, 0, 0.2), 0 6px 20px 0 rgba(0, 0, 0, 0.19);
    font-size: large;
}
</code></pre>
<p>We now need to tell Taipy to apply these styles to the rows in
the table. We'll first create a function that will return the
correct class name for each row:</p>
<pre><code class="language-python">def style_conv(state: State, idx: int, row: int) -> str:
    """
    Apply a style to the conversation table depending on the message's author.

    Args:
        - state: The current state of the app.
        - idx: The index of the message in the table.
        - row: The row of the message in the table.

    Returns:
        The style to apply to the message.
    """
    if idx is None:
        return None
    elif idx % 2 == 0:
        return "user_message"
    else:
        return "gpt_message"
</code></pre>
<p>We then apply this function to the table by adding the <code>style</code> property:</p>
<pre><code class="language-python"><|{conversation}|table|show_all|style=style_conv|>
</code></pre>
<p>And voilà:</p>
<p align="center">
<img alt="The styled application" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/UBcpas1GjapfU5PCy7xk8.png" width="80%"/>
</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#step-9-more-features" id="step-9-more-features" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Step 9: More features
</span>
</h1>
<p>I have added notifications, a sidebar with a button to clear the conversation, and a history of previous conversations. I won't go into the details of how to do this here, but you can find the full code in the <a href="https://github.com/Avaiga/demo-llm-chat" rel="nofollow">GitHub repository</a>.</p>
<h1 class="relative group flex items-center">
<a class="block pr-1.5 text-lg md:absolute md:p-1.5 md:opacity-0 md:group-hover:opacity-100 md:right-full" href="#step-10-deploying-the-app-to-taipy-cloud" id="step-10-deploying-the-app-to-taipy-cloud" rel="nofollow">
<span class="header-link"><svg aria-hidden="true" class="text-gray-500 hover:text-black dark:hover:text-gray-200 w-4" height="1em" preserveaspectratio="xMidYMid meet" role="img" viewbox="0 0 256 256" width="1em" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><path d="M167.594 88.393a8.001 8.001 0 0 1 0 11.314l-67.882 67.882a8 8 0 1 1-11.314-11.315l67.882-67.881a8.003 8.003 0 0 1 11.314 0zm-28.287 84.86l-28.284 28.284a40 40 0 0 1-56.567-56.567l28.284-28.284a8 8 0 0 0-11.315-11.315l-28.284 28.284a56 56 0 0 0 79.196 79.197l28.285-28.285a8 8 0 1 0-11.315-11.314zM212.852 43.14a56.002 56.002 0 0 0-79.196 0l-28.284 28.284a8 8 0 1 0 11.314 11.314l28.284-28.284a40 40 0 0 1 56.568 56.567l-28.285 28.285a8 8 0 0 0 11.315 11.314l28.284-28.284a56.065 56.065 0 0 0 0-79.196z" fill="currentColor"></path></svg></span>
</a>
<span>
Step 10: Deploying the app to Taipy Cloud
</span>
</h1>
<p>We are now going to deploy the app to Taipy Cloud so it is accessible to anyone with a link.</p>
<p>First, we need to store the API key in an environment variable.
Replace the line that defines <code>headers</code> in Step 4 with:</p>
<pre><code class="language-python"><span class="hljs-keyword">import</span> os
headers = {<span class="hljs-string">"Authorization"</span>: <span class="hljs-string">f"Bearer <span class="hljs-subst">{os.environ[<span class="hljs-string">'HUGGINGFACE_API_KEY'</span>]}</span>"</span>}
</code></pre>
<p>Now, instead of having our API key in the code, the app will look
for it in the environment variables.</p>
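<p>If you run the app locally before deploying, make sure the variable is actually set in your environment. As a small optional safeguard (my own addition, not part of the original tutorial), you can fail fast with a clear message when it is missing:</p>
<pre><code class="language-python">import os

api_key = os.environ.get("HUGGINGFACE_API_KEY")
if api_key is None:
    # Stop early with a clear message instead of failing later on an API call
    raise RuntimeError(
        "HUGGINGFACE_API_KEY is not set. Export it in your shell, or add it "
        "in Taipy Cloud's Environment Variables tab before running the app."
    )
headers = {"Authorization": f"Bearer {api_key}"}
</code></pre>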
<p>We can now deploy the app to Taipy Cloud:</p>
<ol>
<li>Connect to <a href="https://cloud.taipy.io/" rel="nofollow">Taipy Cloud</a> and sign in</li>
<li>Click on "Add Machine" and fill in the fields</li>
<li>Select the created machine and click on "Add app"</li>
<li>Zip the <code>main.py</code>, <code>main.css</code> and <code>requirements.txt</code> files and upload the zip file to the "App files" field. Fill in the other fields</li>
<li>In the "Environment Variables" tab, create a new environment variable called <code>HUGGINGFACE_API_KEY</code> and paste your API key as the value like in the image below</li>
<li>Press "Deploy app"</li>
</ol>
<p align="center">
<img alt="Environment Variables Tab" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/hvAVhgZ1UAGmWQ30OBnS1.png" width="80%"/>
</p>
<p>After a while, your app should be running and will be accessible
from the displayed link!</p>
<p align="center">
<img alt="Taipy Cloud Interface" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/ESgvjsvM84c_QZSiaKtjC.png" width="80%"/>
</p>
<p align="center">
<img alt="The final application" src="https://cdn-uploads.huggingface.co/production/uploads/63909280d2cf01fdfe33dc51/Y6TIRh-XcXnI-AO3soujF.png" width="80%"/>
</p>
|
Robust image watermarking with Stable Signature + IMATAG's BZH | https://hf.co/blog/imatag-vch/stable-signature-bzh | Conclusion <a name="conclusion" rel="nofollow"></a> |
Multilabel Classification using Mistral-7B on a single GPU with quantization and LoRA | https://hf.co/blog/sirluk/multilabel-llm | Training |
Building autograd engine tinytorch 01 | https://hf.co/blog/joey00072/building-autograd-engine-tinytorch-01 | Add & MUL |
AI Lineage Explorer: A Step Towards AI Integrity. | https://hf.co/blog/backnotprop/integrity-explorer | But first, we prioritized user experience. |
Unleashing the Power of Unsloth and QLora:Redefining Language Model Fine-Tuning | https://hf.co/blog/Andyrasika/finetune-unsloth-qlora | Conclusion |
Breaking Barriers: The Critical Role of Art and Design in Advancing AI Capabilities | https://hf.co/blog/fffiloni/the-critical-role-of-art-and-design-in-advancing-a | Stay curious and keep experimenting |
Implementing Fractional GPUs in Kubernetes with Aliyun Scheduler | https://hf.co/blog/NileshInfer/implementing-fractional-gpus-in-kubernetes | Table of Contents |
Extending the Massive Text Embedding Benchmark to French: the datasets | https://hf.co/blog/lyon-nlp-group/french-mteb-datasets | Bibliography |
Unleashing the Power of Logprobs in Language Models: A Practical Guide | https://hf.co/blog/Andyrasika/logprobs-transformers | Conclusion |
Conditional Probability | https://hf.co/blog/ariG23498/conditional-probability | Acknowledgement |
Merge Large Language Models with mergekit | https://hf.co/blog/mlabonne/merge-models | Conclusion |
Temporal Scene Generation w/ Stable Diffusion | https://hf.co/blog/Bilal326/stable-diffusion-project | Natalie - LoRA training |
Unveiling TinyLlama: An Inspiring Dive into a Revolutionary Small-Scale Language Model | https://hf.co/blog/Andyrasika/tinyllama | Resources: |
Multi-Label Classification Model From Scratch: Step-by-Step Tutorial | https://hf.co/blog/Valerii-Knowledgator/multi-label-classification | Dataset |
Multimodal IDEFICS: Unveiling the Transparency & Power of Open Visual Language Models | https://hf.co/blog/Andyrasika/idefics-multimodal | Resources: |
What is Probability? | https://hf.co/blog/ariG23498/what-is-probability | Acknowledgement |
4D masks support in Transformers | https://hf.co/blog/poedator/4d-masks | Conclusion |
Understanding Mixtral-8x7b | https://hf.co/blog/vtabbott/mixtral |
<p>
<em>This blog post is adapted from an <a href="https://x.com/vtabbott_/status/1741292811193065557?s=20" rel="nofollow">X thread</a> I posted. It's garnered significant interest, so I decided to post it here as well!</em></p>
<p><a href="https://mistral.ai/news/mixtral-of-experts/" rel="nofollow">Mixtral-8x7b</a> by MistralAI is an LLM that outperforms all but OpenAI and Anthropic's most powerful models. And, it is <a href="https://github.com/mistralai/mistral-src/tree/moe" rel="nofollow">open-source</a>. In this blog post, I will explain its architecture design using my <a href="https://openreview.net/forum?id=RyZB4qXEgt" rel="nofollow">Neural Circuit Diagrams</a>. Let's dive in and see how cutting-edge transformers work! </p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/5yRsqriAZH2cp2fUseR7g.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/5yRsqriAZH2cp2fUseR7g.png"/></a><em>From LMSys' <a href="https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard">Chatbot Arena</a>. Mixtral-8x7b is very, very good. You can try it in <a href="https://chat.lmsys.org/?arena" rel="nofollow">the arena</a> for yourself!</em></p>
<p>The overall structure of the architecture is shockingly simple. It is a decoder-only transformer. The model input is a series of tokens, which are embedded into vectors, and are then processed via decoder layers. The output is the probability of every location being occupied by some word, allowing for text infill and prediction.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/eDxgIVPpPYsLp7y7KQMmd.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/eDxgIVPpPYsLp7y7KQMmd.png"/></a><em>The overall model converts tokens to vectors, processes them, and converts them back to word probabilities.</em></p>
<p>Every decoder layer has two key sections: an attention mechanism, which incorporates contextual information; and a multi-layer perceptron, which individually processes every word vector.</p>
<p>These are encapsulated in residual connections, which allows for <a href="https://twitter.com/vtabbott_/status/1733457509632070027?s=20" rel="nofollow">training at depth</a>. A combination of contextual and individual processing allows for sophisticated patterns to be discovered.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/ukuM-mNOoDAjPfoFGJ_yf.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/ukuM-mNOoDAjPfoFGJ_yf.png"/></a><em>The decoder layers are akin to the original transformer's, but exclusively use self-attention.</em></p>
<p>The attention mechanism used is similar to the original transformer's, which I cover in detail in my paper and briefly in a <a href="https://www.youtube.com/watch?v=ghlIs8bVXU4" rel="nofollow">YouTube video</a>. I list additional key features in the diagram, also covered in the <a href="https://github.com/mistralai/mistral-src/tree/moe" rel="nofollow">original github</a> and <a href="https://huggingface.co/docs/transformers/model_doc/mixtral">Hugging Face docs</a>.</p>
<p>A key feature not explicitly shown in the diagram below is <a href="https://hazyresearch.stanford.edu/blog/2023-01-12-flashattention-long-sequences" rel="nofollow">FlashAttention</a> by <a href="https://hazyresearch.stanford.edu/" rel="nofollow">Hazy Research</a>, which accelerates attention by decomposing it into blocks sized to the GPU's fast on-chip memory, enabling high-speed memory access. I've been making progress using <em>Neural Circuit Diagrams</em> to derive such techniques: they naturally display which variables live in memory, along with linearity and broadcasting, offering the formal tools needed to accelerate algorithms.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/1UuvfoZqsw9nB569N9kRB.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/1UuvfoZqsw9nB569N9kRB.png"/></a><em>Attention mechanisms have gradually evolved since popularized by 2017's <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">Attention is All You Need</a>.</em></p>
<p>Finally, we get the key feature of Mixtral: <strong>Sparse Mixture of Experts</strong> (SMoE). MLP layers are immense consumers of computational resources. SMoEs have multiple layers ("experts") available. For every input, a weighted sum is taken over the outputs of the most relevant experts. SMoE layers can therefore learn sophisticated patterns while having relatively inexpensive compute cost.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/-scdQY3VUk19RsiULSo-7.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/-scdQY3VUk19RsiULSo-7.png"/></a><em>A gating mechanism decides which layers to execute, leading to a computationally efficient algorithm. See also <a href="https://arxiv.org/pdf/2101.03961.pdf" rel="nofollow">Switch Transformers</a> and <a href="https://arxiv.org/pdf/2211.15841.pdf" rel="nofollow">MegaBlocks</a>.</em></p>
<p><strong>Conclusion.</strong> Mixtral is an immense achievement for the open-source AI community. The model is surprisingly simple. Compared to the <a href="https://arxiv.org/abs/1706.03762" rel="nofollow">original transformer</a> <a href="/blog/vtabbott/vtabbott.io/ncd-poster">architecture</a>, encoders have been removed. Attention mechanisms have accumulated seven years of gradual innovation. The biggest change is the presence of SMoEs instead of plain MLPs. Mixtral has proven that open-source designs and SMoEs are on the frontier of ML development, and I suspect both will attract far more attention as a result.</p>
<p><a href="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/OsCpW1VryCYHBBFUuqN2S.png" rel="nofollow"><img alt="image/png" src="https://cdn-uploads.huggingface.co/production/uploads/65921b0435c41262d6b68b34/OsCpW1VryCYHBBFUuqN2S.png"/></a><em>The overall attention architecture, expressed using Neural Circuit Diagrams.</em></p>
|
Streamlining Data Management with Hugging Face and DVC: A Seamless Integration | https://hf.co/blog/Andyrasika/hf-dvc | Conclusion |
TchAIkovsky – Piano MIDI Generation with Transformers | https://hf.co/blog/afmck/tchaikovsky | Conclusion |
How Your Ordinary 8GB MacBook’s Untapped AI Power Can Run 70B LLM Models That Will Blow Your Mind! | https://hf.co/blog/lyogavin/airllm-mac | AirLLM Mac |
Leveraging Transformers and PyTorch for Multiple Choice Question Tasks | https://hf.co/blog/Andyrasika/mcq-pytorch-transformers | Conclusion |
Build an AI Chatbot to Run Code and Tweak plots | https://hf.co/blog/sophiamyang/tweak-mpl-chat | Step 4: Define layout |
Combating Evaluation Data Contamination in LLMs: Strategies for High-Quality Finetuning and Model Merging | https://hf.co/blog/rishiraj/merge-models-without-contamination | Conclusion: |
Counting 'n' objects | https://hf.co/blog/ariG23498/count-n-objects | Acknowledgement |
Kubernetes infrastructure for HF models and chat with Cluster.dev | https://hf.co/blog/voatsap/tgi-kubernetes-cluster-dev | Conclusion |
How to build an interactive HF Space to visualize an Image Dataset | https://hf.co/blog/MarkusStoll/interactive-hf-space-to-visualize-image-datasets | References |
Uniting Forces: Integrating Hugging Face with Langchain for Enhanced Natural Language Processing | https://hf.co/blog/Andyrasika/agent-helper-langchain-hf | Conclusion |
Drag GAN - Interactive Point-based Manipulation on the Generative Image Manifold | https://hf.co/blog/hwaseem04/drag-gan | Conclusion |
Running Any HuggingFace Model on SageMaker Endpoint: Walk-Through with Cross Encoder Model Example | https://hf.co/blog/kchoe/deploy-any-huggingface-model-to-sagemaker | <a name="HSageMaker-with-model" rel="nofollow"></a> 6. (Optional) Revise package zip file to include model binary |
Predicting the Effects of Mutations on Protein Function with ESM-2 | https://hf.co/blog/AmelieSchreiber/mutation-scoring | Conclusion |
Deploying Your FastAPI Applications on Huggingface Via Docker | https://hf.co/blog/HemanthSai7/deploy-applications-on-huggingface-spaces | Quick Links |
What is a Transformer? | https://hf.co/blog/andmholm/what-is-a-transformer | Conclusion |
📚 Training Data Transparency in AI: Tools, Trends, and Policy Recommendations 🗳️ | https://hf.co/blog/yjernite/data-transparency | Trends in Data Transparency |
🏷️ Build AI Feedback (AIF) datasets for LLM alignment with ⚗️ distilabel | https://hf.co/blog/alvarobartt/helpsteer-with-distilabel | References |
Fine-Tuning LLMs: Supervised Fine-Tuning and Reward Modelling | https://hf.co/blog/rishiraj/finetune-llms | Conclusion |
Easy JAX training loops with Flax and Optax | https://hf.co/blog/afmck/flax-tutorial | Acknowledgements and Extra Resources |
On Learning JAX – A Framework for High Performance Machine Learning | https://hf.co/blog/afmck/jax-tutorial | Acknowledgements and Further Resources |
Sentence Mining with OpenAI's Whisper | https://hf.co/blog/afmck/whisper-sentence-mining | Acknowledgements and Extra Resources |
Illustrated LLM OS: An Implementational Perspective | https://hf.co/blog/shivance/illustrated-llm-os | Acknowledgements |
💨 Introducing Notus: a DPO fine-tune of Zephyr with a focus on high-quality data | https://hf.co/blog/alvarobartt/notus-7b-v1 | References |
Faster Persistent Homology Alignment and Protein Complex Clustering with ESM-2 and Persistence Landscapes | https://hf.co/blog/AmelieSchreiber/faster-pha | Conclusion |
Evaluating Large Language Models on Gender-Occupational Stereotypes Using the Wino Bias Test | https://hf.co/blog/Rakshit122/gender-occupational-stereotypes | References |
Unbelievable! Run 70B LLM Inference on a Single 4GB GPU with This NEW Technique | https://hf.co/blog/lyogavin/airllm |
<p>
Large language models require huge amounts of GPU memory. Is it possible to run inference on a single GPU? If so, what is the minimum GPU memory required?</p>
<p><a href="https://cdn-images-1.medium.com/max/3280/1*H7xkrF-6ryxIcUPNwnQ4Xg.png" rel="nofollow"><img alt="" src="https://cdn-images-1.medium.com/max/3280/1*H7xkrF-6ryxIcUPNwnQ4Xg.png"/></a></p>
<p>A 70B large language model has a parameter size of roughly 130GB in fp16. <strong>Just loading the model into GPU memory already requires two 80GB A100 GPUs.</strong></p>
<p>During inference, the entire input sequence also needs to be loaded into memory for complex “attention” calculations. The memory requirement of this attention mechanism scales quadratically with the input length. On top of the 130GB model size, a lot more memory is needed.</p>
<p>So what techniques can save so much memory and enable inference on a single 4GB GPU?</p>
<p>Note that the memory optimization techniques described here <strong>do not require any model compression, such as quantization, distillation, or pruning, that would sacrifice model performance.</strong></p>
<p>Today <strong>we will explain the key techniques for extreme memory optimization of large models.</strong></p>
<p>At the end of the article, we also share the open-source library that achieves this in a few lines of code!</p>
<p><strong>01</strong></p>
<p><strong>Layer-wise Inference</strong></p>
<p>The most critical technique is layer-wise inference. This is essentially the basic <strong>divide and conquer approach</strong> in computer science.</p>
<p>Let’s first look at the architecture of large language models. Today’s large language models all adopt the Multi-head self-attention structure proposed in Google’s paper “Attention is all you need”. This is what people later call the Transformer structure.</p>
<p><a href="https://cdn-images-1.medium.com/max/2000/0*wg1TK6QDogxId8Sv" rel="nofollow"><img alt="" src="https://cdn-images-1.medium.com/max/2000/0*wg1TK6QDogxId8Sv"/></a></p>
<p>The large language model first has an embedding projection layer. After that there are 80 completely identical transformer layers. Finally there is a normalization and fully connected layer to predict the token ID probabilities.</p>
<p>During inference, layers are executed sequentially. The output of the previous layer is the input to the next. Only one layer executes at a time.</p>
<p>Therefore, it is completely unnecessary to keep all layers in GPU memory. <strong>We can load whichever layer is needed from disk when executing that layer, do all the calculations, and then completely free the memory after.</strong></p>
<p>This way, the GPU memory required per layer is only about the parameter size of one transformer layer, 1/80 of the full model, around 1.6GB.</p>
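<p>As a rough sketch of the idea (not AirLLM's actual implementation; the per-layer shard paths and the <code>build_empty_layer</code> helper are hypothetical), the loop looks like this:</p>
<pre><code class="language-python">import gc
import torch
from safetensors.torch import load_file

def layerwise_forward(hidden_states, layer_shard_paths, build_empty_layer):
    """Run the decoder stack one layer at a time, keeping only ~1.6GB in VRAM."""
    for path in layer_shard_paths:                 # one weight shard per transformer layer
        layer = build_empty_layer()                # construct the layer module (no weights yet)
        layer.load_state_dict(load_file(path))     # read this layer's weights from disk
        layer.to("cuda")
        with torch.no_grad():
            hidden_states = layer(hidden_states)   # output becomes input of the next layer
        del layer                                  # free this layer's memory before the next one
        gc.collect()
        torch.cuda.empty_cache()
    return hidden_states
</code></pre>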
<p>In addition, some output caches are also stored in GPU memory, the largest being the KV cache to avoid repeated computations.</p>
<p>A quick calculation for the 70B model (80 layers, 8 key/value heads, head dimension 128, fp16 values) gives a KV cache size of about:</p>
<p>2 * input_length * num_layers * num_kv_heads * head_dim * bytes_per_value</p>
<p>With an input length of 100, this cache is 2 * 100 * 80 * 8 * 128 * 2 ≈ 33MB of GPU memory.</p>
<p><strong>According to our monitoring, the entire inference process uses less than 4GB GPU memory!</strong></p>
<p><strong>02</strong></p>
<p><strong>Single Layer Optimization — Flash Attention</strong></p>
<p>Flash attention is perhaps one of the most important and critical optimizations in the development of large language models today.</p>
<p>All the various large language models use essentially the same underlying code, with flash attention being the biggest improvement.</p>
<p>The idea behind flash attention is not entirely novel, though; we have to mention another paper, “Self-attention Does Not Need O(n²) Memory”.</p>
<p>Naive self-attention requires O(n²) memory (n being the sequence length).</p>
<p>That paper shows that we don’t actually need to keep the O(n²) intermediate results. We can compute them sequentially, continuously updating one running intermediate result and discarding everything else, which reduces the memory complexity to O(log n).</p>
<p>Flash attention is similar in essence, with a slightly higher memory complexity of O(n), but <strong>flash attention deeply optimizes CUDA memory access to achieve multi-fold speedups for both inference and training.</strong></p>
<p><a href="https://cdn-images-1.medium.com/max/2000/0*Ah_AxED31aIT2cFz" rel="nofollow"><img alt="" src="https://cdn-images-1.medium.com/max/2000/0*Ah_AxED31aIT2cFz"/></a></p>
<p>As the figure shows, originally self attention computes and stores O(n²) intermediate results. Flash attention splits the computation into many small blocks, computing block by block and reducing memory to the size of one block.</p>
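<p>The core trick can be written in plain PyTorch: process the keys and values block by block, and keep only a running maximum, a running normalizer, and a running weighted sum, so the full n×n score matrix is never materialized. This is just a sketch of the math; the real speedups come from FlashAttention's fused CUDA kernels:</p>
<pre><code class="language-python">import torch

def blockwise_attention(q, k, v, block_size=128):
    """q: (n, d), k and v: (m, d). Streaming (online) softmax over key/value blocks."""
    n, d = q.shape
    scale = d ** -0.5
    out = torch.zeros_like(q)
    row_max = torch.full((n, 1), float("-inf"))
    row_sum = torch.zeros(n, 1)
    for start in range(0, k.shape[0], block_size):
        kb = k[start:start + block_size]                 # one block of keys
        vb = v[start:start + block_size]                 # matching block of values
        scores = (q @ kb.T) * scale                      # (n, block_size): only one block in memory
        new_max = torch.maximum(row_max, scores.max(dim=-1, keepdim=True).values)
        correction = torch.exp(row_max - new_max)        # rescale what was accumulated so far
        p = torch.exp(scores - new_max)
        out = out * correction + p @ vb
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        row_max = new_max
    return out / row_sum
</code></pre>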
<p><strong>03</strong></p>
<p><strong>Model File Sharding</strong></p>
<p>The original model file is usually sharded into multiple chunks, typically 10GB each.</p>
<p>Our execution proceeds layer by layer. Each layer is only 1.6GB. If we load based on the original 10GB shards, every layer execution will require reloading the entire 10GB file while only using 1.6GB of it.</p>
<p>This wastes a lot of time and memory on loading and disk reads. Disk read speed is actually the slowest bottleneck in the whole inference process, so we want to minimize it as much as possible.</p>
<p>Therefore, we first <strong>pre-process the original HuggingFace model file and shard it by layers</strong>.</p>
<p>For storage we use safetensor technology (<a href="https://github.com/huggingface/safetensors" rel="nofollow">https://github.com/huggingface/safetensors</a>).</p>
<p><strong>Safetensor ensures the storage format and in-memory format match closely, and uses memory mapping for loading to maximize speed.</strong></p>
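<p>A hedged sketch of that pre-processing step is shown below; it splits a state dict into one safetensors file per layer. The key prefix and file naming are assumptions for illustration, not AirLLM's exact layout:</p>
<pre><code class="language-python">from collections import defaultdict
from safetensors.torch import save_file

def shard_by_layer(state_dict, out_dir, prefix="model.layers."):
    """Write one safetensors file per transformer layer, plus one for everything else."""
    groups = defaultdict(dict)
    for name, tensor in state_dict.items():
        # safetensors requires contiguous tensors
        tensor = tensor.contiguous()
        if name.startswith(prefix):
            layer_idx = name[len(prefix):].split(".")[0]   # "model.layers.17.self_attn..." -> "17"
            groups[f"layer_{layer_idx}"][name] = tensor
        else:
            groups["other"][name] = tensor                 # embeddings, final norm, lm_head
    for group_name, tensors in groups.items():
        save_file(tensors, f"{out_dir}/{group_name}.safetensors")
</code></pre>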
<p><strong>04</strong></p>
<p><strong>Meta Device</strong></p>
<p>In implementation we use the meta device feature provided by HuggingFace Accelerate (<a href="https://huggingface.co/docs/accelerate/usage_guides/big_modeling">https://huggingface.co/docs/accelerate/usage\_guides/big\_modeling</a>).</p>
<p>Meta device is a <strong>virtual device</strong> designed specifically for running ultra large models. <strong>When you load a model via meta device, the model data is not actually read in, only the code is loaded. Memory usage is 0.</strong></p>
<p>You can dynamically transfer parts of the model from meta device to a real device like CPU or GPU during execution. Only then is it actually loaded into memory.</p>
<p>Using init_empty_weights() allows model loading via meta device.</p>
<pre><code>from accelerate import init_empty_weights
with init_empty_weights():
my_model = ModelClass(...)
</code></pre>
<p><strong>05</strong></p>
<p><strong>Open Source Library</strong></p>
<p>We have open-sourced all the code as AirLLM, which lets you achieve this with a few lines of code.</p>
<p><strong>It can be found in the Anima GitHub repository: <a href="https://github.com/lyogavin/Anima/tree/main/air_llm" rel="nofollow">https://github.com/lyogavin/Anima/tree/main/air_llm</a></strong>.</p>
<p>Usage is very simple. First install the package:</p>
<pre><code>pip install airllm
</code></pre>
<p>Then layer-wise inference can be performed just like with a normal Transformer model:</p>
<pre><code>from airllm import AirLLMLlama2
MAX_LENGTH = 128
# could use hugging face model repo id:
model = AirLLMLlama2("garage-bAInd/Platypus2-70B-instruct")
# or use model's local path...
#model = AirLLMLlama2("/home/ubuntu/.cache/huggingface/hub/models--garage-bAInd--Platypus2-70B-instruct/snapshots/b585e74bcaae02e52665d9ac6d23f4d0dbc81a0f")
input_text = [
    'What is the capital of United States?',
]

input_tokens = model.tokenizer(input_text,
    return_tensors="pt",
    return_attention_mask=False,
    truncation=True,
    max_length=MAX_LENGTH,
    padding=True)

generation_output = model.generate(
    input_tokens['input_ids'].cuda(),
    max_new_tokens=20,
    use_cache=True,
    return_dict_in_generate=True)

output = model.tokenizer.decode(generation_output.sequences[0])
print(output)
</code></pre>
<p>We have tested this code on a 16GB Nvidia T4 GPU. The entire inference process <strong>uses less than 4GB GPU memory</strong>.</p>
<p>Note that lower-end GPUs like the T4 will be quite slow for inference, so this is not very suitable for interactive scenarios like chatbots. It is better suited to offline data processing such as RAG, PDF analysis, etc.</p>
<p>Currently only Llama 2 based models are supported. <strong>Leave a comment if you need support for other models!</strong></p>
<p><strong>06</strong></p>
<p><strong>Can 70B Training Fit on a Single GPU?</strong></p>
<p>While inference can be optimized with layering, can training work similarly on a single GPU?</p>
<p>Inference only needs the output of the previous layer when executing the next transformer layer, so layered execution with limited data is possible.</p>
<p><strong>Training requires more data. The training process first computes the forward propagation to get the output of every layer and tensor. Then does backpropagation to compute the gradient of every tensor.</strong></p>
<p><strong>Gradient calculation needs to save the results of previous forward layers, so layered execution does not reduce memory.</strong></p>
<p>There are some other techniques like gradient checkpointing that can achieve similar effects.</p>
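<p>For reference, gradient checkpointing is nearly a one-liner in PyTorch: instead of storing every layer's activations for the backward pass, they are recomputed when needed, trading compute for memory. A minimal sketch:</p>
<pre><code class="language-python">import torch
from torch.utils.checkpoint import checkpoint

def forward_with_checkpointing(layers, x):
    """Recompute each layer's activations in the backward pass instead of storing them all."""
    for layer in layers:
        x = checkpoint(layer, x, use_reentrant=False)
    return x
</code></pre>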
<p><strong>If you are interested in how gradient checkpointing can significantly reduce training memory requirements, leave a comment!</strong></p>
<p><strong>07</strong></p>
<p>Our code borrows a lot from <a href="https://www.kaggle.com/simjeg" rel="nofollow">SIMJEG</a>’s implementation on Kaggle: <a href="https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag/notebook" rel="nofollow">https://www.kaggle.com/code/simjeg/platypus2-70b-with-wikipedia-rag/notebook</a>. Shout out to the awesome Kaggle community for their contributions!</p>
<p><strong>We will continue open sourcing the latest and most effective new methods and advances in AI, contributing to the open source community. Please follow us.</strong></p>
|
Clustering Protein Complexes using Persistent Homology and Finetuning ESM-2 for PPI Network Prediction | https://hf.co/blog/AmelieSchreiber/esm-ppi | Conclusion |
Streamlining ML Workflows: Integrating MLFlow Tracking with LangTest for Enhanced Model Evaluations | https://hf.co/blog/arshaan-nazir/streamlining-ml-workflows-langtest | In Conclusion |
Automatic Hallucination detection with SelfCheckGPT NLI | https://hf.co/blog/dhuynh95/automatic-hallucination-detection | Conclusion |
Extracting Insights from Model Cards Using Open Large Language Models | https://hf.co/blog/davanstrien/model-card-concepts | Conclusion |
ESM-2 for Generating and Optimizing Peptide Binders for Target Proteins | https://hf.co/blog/AmelieSchreiber/esm-interact | Conclusions |
Does Sketching Work? | https://hf.co/blog/ethanepperly/does-sketching-work | Footnotes |
Understanding Zephyr | https://hf.co/blog/Isamu136/understanding-zephyr | Results |
Are your NLP models deteriorating post-deployment? Let’s use unlabelled data to find out | https://hf.co/blog/santiviquez/performance-estimation-nlp-nannyml | References <a name="references" rel="nofollow"></a> |
Persistent Homology Alignment (PHA): Replacing Multiple Sequence Alignments using ESM-2 and Persistent Homology | https://hf.co/blog/AmelieSchreiber/plm-persistent-homology-msa-replacement | Conclusion |
In Silico Directed Evolution of Protein Sequences with ESM-2 and EvoProtGrad | https://hf.co/blog/AmelieSchreiber/directed-evolution-with-esm2 | Advantages of In Silico Methods |
QLoRA for ESM-2 and Post Translational Modification Site Prediction | https://hf.co/blog/AmelieSchreiber/esm2-ptm | Conclusion |
Automating Responsible AI: Integrating Hugging Face and LangTest for More Robust Models | https://hf.co/blog/alytarik/langtest-hf-integration | Evaluating the Enhanced Model’s Performance |
Hugging Face accelerates distribution of models and datasets based on Dragonfly | https://hf.co/blog/gaius-qi/hugging-face-distribution-based-on-dragonfly | Hugging Face |
Introducing the Giskard Bot: Enhancing LLM Testing & Debugging on Hugging Face | https://hf.co/blog/JMJM/giskard-llm-testing-and-debugging-hf | Conclusion: Charting the Future of Giskard Bot on Hugging Face |
Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance | https://hf.co/blog/chakravarthik27/boost-nlp-models-with-automated-data-augmentation | Conclusion |
Goodbye Python, Hello Rust: Building a RAG CLI Application with Orca | https://hf.co/blog/santiagomed/building-a-rag-cli-application-application | Conclusion |
StarCoder Memorization Experiment Highlights Privacy Risks of Fine-Tuning On Code | https://hf.co/blog/dhuynh95/starcoder-memorization-experiment | Conclusion |
Scaling Self Supervised Learning for Histology: introducing Phikon | https://hf.co/blog/EazyAl/phikon | Conclusion |
Unmasking Language Model Sensitivity in Negation and Toxicity Evaluations | https://hf.co/blog/Prikshit7766/llms-sensitivity-testing | References |
Detecting and Evaluating Sycophancy Bias: An Analysis of LLM and AI Solutions | https://hf.co/blog/Rakshit122/sycophantic-ai | References |
After 500+ LoRAs made, here is the secret | https://hf.co/blog/FPHam/lora-secrets-1 |
<p>
(a reprint of my article posted on reddit)</p>
<p>Well, you wanted it, here it is:</p>
<p>The quality of the dataset is 95% of everything. The remaining 5% is about not ruining it with bad parameters.</p>
<p>Yeah, I know, GASP! No seriously, folks are searching for secret parameters or secret sauce - but this is the whole deal.</p>
<p>And I mean a crystal-clean dataset. Yes, I know: thousands of items (maybe tens of thousands), generated or scraped from the internet - who has time to look at it? I see it in "pro" datasets. Look at some random items and soon you will spot garbage - because it was obviously generated or scraped and never really checked. What's a few rotten eggs, right? Well, they will spoil the whole bunch, as grandma Pam said.</p>
<p>Once I started manually checking the dataset and removing or fixing the garbage, the quality jumped 10-fold. Yes, it takes a huge amount of time - but no amount of parameters or tricks will fix this, sorry.</p>
<p>The training parameters are there not to ruin it - not to make it better - so you don't have to chase the perfect LR of 2.5647e-4; it doesn't exist. You kind of aim in the right direction, and if the dataset is great, most of the time you'll get there.</p>
<p>Some more notes:</p>
<p>13b can only go THAT far. There is no way you can create a 100% solid finetune on 13b. You will get close - but like with a child, sometimes it will spill a cup of milk in your lap. 33b is the way. Sadly, training 33b on home hardware with 24GB is basically useless, because you really have to tone down the parameters - which, as I said before, basically ruins it. You want at least 48GB for 33b so you can crank it up.</p>
<p>IMHO gradient accumulation will LOWER the quality if you can do more than a few batches. There may be a sweet spot somewhere, but IDK. Sure, batch 1 and GA 32 will be better than batch 1 and GA 1, but that's not the point - that's a band-aid. (Edit: it could prevent overfitting, though, and hence help with generalization. It depends on what the goal is and how diverse the dataset is.)</p>
<p>The size of the dataset matters when you are finetuning on a base model, but matters less when finetuning on an already well-finetuned model - in fact, sometimes less is better in that case, or you may ruin a good previous finetune.</p>
<p>Alpha = 2x rank seems like something that came from the old times when people had potato VRAM at most and wanted to get there fast. I really don't feel like it makes much sense - it just multiplies the weights and that's it. Making things louder also makes the noise louder.</p>
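<p>For context, this is where alpha enters the math: the low-rank update is scaled by alpha / rank before being added to the frozen layer's output, so alpha = 2x rank just means a fixed scaling factor of 2. A minimal sketch (not any particular library's implementation):</p>
<pre><code class="language-python">import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank=64, alpha=128):
        super().__init__()
        self.base = base.requires_grad_(False)          # frozen pretrained weight
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)                   # adapter starts as a no-op
        self.scaling = alpha / rank                     # alpha = 2x rank -> scaling of 2

    def forward(self, x):
        return self.base(x) + self.scaling * self.B(self.A(x))
</code></pre>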
<p>My favorite scheduler is warmup, hold for 1 epoch, then cosine decay over the remaining epochs.</p>
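<p>A sketch of that schedule with a plain PyTorch <code>LambdaLR</code> (step counts are up to you; this is just one way to express warmup, hold, then cosine decay):</p>
<pre><code class="language-python">import math
from torch.optim.lr_scheduler import LambdaLR

def warmup_hold_cosine(optimizer, warmup_steps, hold_steps, decay_steps):
    """Linear warmup, flat hold (e.g. the first epoch), then cosine decay towards zero."""
    def lr_lambda(step):
        if step < warmup_steps:
            return step / max(1, warmup_steps)              # linear warmup
        if step < warmup_steps + hold_steps:
            return 1.0                                      # hold at the base learning rate
        progress = (step - warmup_steps - hold_steps) / max(1, decay_steps)
        return 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))  # cosine decay
    return LambdaLR(optimizer, lr_lambda)
</code></pre>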
<p>Rank is literally how many trainable parameters you get - you don't have to look for some other meaning (style vs. knowledge). It's like an image taken at 1 Mpixel vs. 16 Mpixel. You always get the whole image, but at 1 Mpixel the details are very mushy - you can still see the big subject, but you'd better not expect the fine details to be there. The problem, of course, is: do you have enough diverse training data to fill those parameters with? If not, you'd be creating a very specific model that would have a hard time generalizing. Lowering the rank will help with generalization, but the mundane details will also be lost.</p>
<p>Anything else?</p>
<p>Oh, OK, I was talking about LoRA for LLMs, but it surely applies to SD as well. In fact, it's all the same thing (hence PEFT can be used for both, and the same rules apply).</p>
|