{ "cells": [ { "cell_type": "markdown", "id": "5d9aca72-957a-4ee2-862f-e011b9cd3a62", "metadata": {}, "source": [ "# Introduction\n", "## Goal\n", "I want [jais-13B](https://huggingface.co/core42/jais-13b-chat) deployed with an API quickly and easily. I'm also scared of mice so ideally I can just use my keyboard. \n", "\n", "## Approach\n", "There are lots of options out there that are \"1-click\" which is really cool! I would like to do even better and make a \"0-click\". This is great for those that are musophobic (scared of mice) or want scripts that can run without human intervention.\n", "\n", "We will be using [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) as our serving toolkit as it is robust and configurable. For our hardware we will be using [Inference Endpoints](https://huggingface.co/inference-endpoints) as it makes the deployment procedure really easy! We will be using the API to reach our aforementioned \"0-click\" goal." ] }, { "cell_type": "markdown", "id": "2086a136-6710-45af-b2b1-7224b5cbbca7", "metadata": {}, "source": [ "# Pre-requisites\n", "Deploying LLMs is a tough process. There are a number of challenges! \n", "- These models are huge\n", " - Slow to load \n", " - Won't fit on convenient HW\n", "- Generative transformers require iterative decoding\n", "- Many of the optimizations are not consolidated\n", "\n", "TGI solves many of these, and while I don't want to dedicate this blog to TGI there are a few concepts we need to cover to properly understand how to configure our deployment.\n", "\n", "\n", "## Prefilling Phase\n", "> In the prefill phase, the LLM processes the input tokens to compute the intermediate states (keys and values), which are used to generate the “first” new token. Each new token depends on all the previous tokens, but because the full extent of the input is known, at a high level this is a matrix-matrix operation that’s highly parallelized. It effectively saturates GPU utilization.\n", "\n", "~[Nvidia Blog](https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/)\n", "\n", "Prefilling is relatively fast.\n", "\n", "## Decoding Phase\n", "> In the decode phase, the LLM generates output tokens autoregressively one at a time, until a stopping criteria is met. Each sequential output token needs to know all the previous iterations’ output states (keys and values). This is like a matrix-vector operation that underutilizes the GPU compute ability compared to the prefill phase. The speed at which the data (weights, keys, values, activations) is transferred to the GPU from memory dominates the latency, not how fast the computation actually happens. In other words, this is a memory-bound operation.\n", "\n", "~[Nvidia Blog](https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/)\n", "\n", "Decoding is relatively slow.\n", "\n", "## Example\n", "Lets take an example of sentiment analysis:\n", "\n", "Below we have input tokens that the LLM will pre-fill. Note that we know what the next token is during the pre-filling phase. We can use this to our advantage.\n", "```text\n", "### Instruction: What is the sentiment of the input?\n", "### Examples\n", "I wish the screen was bigger - Negative\n", "I hate the battery - Negative\n", "I love the default appliations - Positive\n", "### Input\n", "I am happy with this purchase - \n", "### Response\n", "```\n", "\n", "Below we have output tokens generated during decoding phase. 
] }, { "cell_type": "markdown", "id": "d2534669-003d-490c-9d7a-32607fa5f404", "metadata": {}, "source": [ "# Setup" ] }, { "cell_type": "markdown", "id": "3c830114-dd88-45a9-81b9-78b0e3da7384", "metadata": {}, "source": [ "## Requirements" ] }, { "cell_type": "code", "execution_count": 1, "id": "35386f72-32cb-49fa-a108-3aa504e20429", "metadata": { "tags": [] }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n", "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n", "Note: you may need to restart the kernel to use updated packages.\n" ] } ], "source": [ "%pip install -q \"huggingface-hub>=0.20\" ipywidgets" ] }, { "cell_type": "markdown", "id": "b6f72042-173d-4a72-ade1-9304b43b528d", "metadata": {}, "source": [ "## Imports" ] }, { "cell_type": "code", "execution_count": 2, "id": "99f60998-0490-46c6-a8e6-04845ddda7be", "metadata": { "tags": [] }, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "/Users/derekthomas/projects/spaces/jais-tgi-benchmark/venv/lib/python3.9/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020\n", " warnings.warn(\n" ] } ], "source": [ "from getpass import getpass\n", "\n", "from huggingface_hub import create_inference_endpoint, login, whoami" ] }, { "cell_type": "markdown", "id": "5eece903-64ce-435d-a2fd-096c0ff650bf", "metadata": {}, "source": [ "## Config\n", "Set this to the name you want for your endpoint. It is how the deployment will be identified in the Inference Endpoints UI." ] }, { "cell_type": "code", "execution_count": 3, "id": "dcd7daed-6aca-4fe7-85ce-534bdcd8bc87", "metadata": { "tags": [] }, "outputs": [], "source": [ "ENDPOINT_NAME = \"jais13b-demo\"" ] }, { "cell_type": "code", "execution_count": 4, "id": "0ca1140c-3fcc-4b99-9210-6da1505a27b7", "metadata": { "tags": [] }, "outputs": [ { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "3c7ff285544d4ea9a1cc985cf981993c", "version_major": 2, "version_minor": 0 }, "text/plain": [ "VBox(children=(HTML(value='