derek-thomas HF staff committed on
Commit
8695c51
1 Parent(s): 04d2dbd

Mostly finished

notebooks/jais_tgi_inference_endpoints.ipynb ADDED
@@ -0,0 +1,555 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "id": "5d9aca72-957a-4ee2-862f-e011b9cd3a62",
6
+ "metadata": {},
7
+ "source": [
8
+ "# Introduction\n",
9
+ "## Goal\n",
10
+ "I want [jais-13B](https://huggingface.co/core42/jais-13b-chat) deployed with an API quickly and easily. I'm also scared of mice so ideally I can just use my keyboard. \n",
11
+ "\n",
12
+ "## Approach\n",
13
+ "There are lots of options out there that are \"1-click\" which is really cool! I would like to do even better and make a \"0-click\". This is great for those that are musophobic (scared of mice) or want scripts that can run without human intervention.\n",
14
+ "\n",
15
+ "We will be using [Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) as our serving toolkit as it is robust and configurable. For our hardware we will be using [Inference Endpoints](https://huggingface.co/inference-endpoints) as it makes the deployment procedure really easy! We will be using the API to reach our aforementioned \"0-click\" goal."
16
+ ]
17
+ },
18
+ {
19
+ "cell_type": "markdown",
20
+ "id": "2086a136-6710-45af-b2b1-7224b5cbbca7",
21
+ "metadata": {},
22
+ "source": [
23
+ "# Pre-requisites\n",
24
+ "Deploying LLMs is a tough process. There are a number of challenges! \n",
25
+ "- These models are huge\n",
26
+ " - Slow to load \n",
27
+ " - Won't fit on convenient HW\n",
28
+ "- Generative transformers require iterative decoding\n",
29
+ "- Many of the optimizations are not consolidated\n",
30
+ "\n",
31
+ "TGI solves many of these, and while I don't want to dedicate this blog to TGI there are a few concepts we need to cover to properly understand how to configure our deployment.\n",
32
+ "\n",
33
+ "\n",
34
+ "## Prefilling Phase\n",
35
+ "> In the prefill phase, the LLM processes the input tokens to compute the intermediate states (keys and values), which are used to generate the “first” new token. Each new token depends on all the previous tokens, but because the full extent of the input is known, at a high level this is a matrix-matrix operation that’s highly parallelized. It effectively saturates GPU utilization.\n",
36
+ "\n",
37
+ "~[Nvidia Blog](https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/)\n",
38
+ "\n",
39
+ "Prefilling is relatively fast.\n",
40
+ "\n",
41
+ "## Decoding Phase\n",
42
+ "> In the decode phase, the LLM generates output tokens autoregressively one at a time, until a stopping criteria is met. Each sequential output token needs to know all the previous iterations’ output states (keys and values). This is like a matrix-vector operation that underutilizes the GPU compute ability compared to the prefill phase. The speed at which the data (weights, keys, values, activations) is transferred to the GPU from memory dominates the latency, not how fast the computation actually happens. In other words, this is a memory-bound operation.\n",
43
+ "\n",
44
+ "~[Nvidia Blog](https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/)\n",
45
+ "\n",
46
+ "Decoding is relatively slow.\n",
47
+ "\n",
48
+ "## Example\n",
49
+ "Lets take an example of sentiment analysis:\n",
50
+ "\n",
51
+ "Below we have input tokens that the LLM will pre-fill. Note that we know what the next token is during the pre-filling phase. We can use this to our advantage.\n",
52
+ "```text\n",
53
+ "### Instruction: What is the sentiment of the input?\n",
54
+ "### Examples\n",
55
+ "I wish the screen was bigger - Negative\n",
56
+ "I hate the battery - Negative\n",
57
+ "I love the default appliations - Positive\n",
58
+ "### Input\n",
59
+ "I am happy with this purchase - \n",
60
+ "### Response\n",
61
+ "```\n",
62
+ "\n",
63
+ "Below we have output tokens generated during decoding phase. Despite being few in this example we dont know what the next token will be until we have generated it.\n",
64
+ "\n",
65
+ "```text\n",
66
+ "Positive\n",
67
+ "```"
68
+ ]
69
+ },
70
+ {
71
+ "cell_type": "markdown",
72
+ "id": "d2534669-003d-490c-9d7a-32607fa5f404",
73
+ "metadata": {},
74
+ "source": [
75
+ "# Setup"
76
+ ]
77
+ },
78
+ {
79
+ "cell_type": "markdown",
80
+ "id": "3c830114-dd88-45a9-81b9-78b0e3da7384",
81
+ "metadata": {},
82
+ "source": [
83
+ "## Requirements"
84
+ ]
85
+ },
86
+ {
87
+ "cell_type": "code",
88
+ "execution_count": 1,
89
+ "id": "35386f72-32cb-49fa-a108-3aa504e20429",
90
+ "metadata": {
91
+ "tags": []
92
+ },
93
+ "outputs": [
94
+ {
95
+ "name": "stdout",
96
+ "output_type": "stream",
97
+ "text": [
98
+ "\n",
99
+ "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m A new release of pip is available: \u001b[0m\u001b[31;49m23.2.1\u001b[0m\u001b[39;49m -> \u001b[0m\u001b[32;49m23.3.2\u001b[0m\n",
100
+ "\u001b[1m[\u001b[0m\u001b[34;49mnotice\u001b[0m\u001b[1;39;49m]\u001b[0m\u001b[39;49m To update, run: \u001b[0m\u001b[32;49mpip install --upgrade pip\u001b[0m\n",
101
+ "Note: you may need to restart the kernel to use updated packages.\n"
102
+ ]
103
+ }
104
+ ],
105
+ "source": [
106
+ "%pip install -q \"huggingface-hub>=0.20\" ipywidgets"
107
+ ]
108
+ },
109
+ {
110
+ "cell_type": "markdown",
111
+ "id": "b6f72042-173d-4a72-ade1-9304b43b528d",
112
+ "metadata": {},
113
+ "source": [
114
+ "## Imports"
115
+ ]
116
+ },
117
+ {
118
+ "cell_type": "code",
119
+ "execution_count": 2,
120
+ "id": "99f60998-0490-46c6-a8e6-04845ddda7be",
121
+ "metadata": {
122
+ "tags": []
123
+ },
124
+ "outputs": [
125
+ {
126
+ "name": "stderr",
127
+ "output_type": "stream",
128
+ "text": [
129
+ "/Users/derekthomas/projects/spaces/jais-tgi-benchmark/venv/lib/python3.9/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020\n",
130
+ " warnings.warn(\n"
131
+ ]
132
+ }
133
+ ],
134
+ "source": [
135
+ "from huggingface_hub import login, whoami, create_inference_endpoint\n",
136
+ "from getpass import getpass"
137
+ ]
138
+ },
139
+ {
140
+ "cell_type": "markdown",
141
+ "id": "5eece903-64ce-435d-a2fd-096c0ff650bf",
142
+ "metadata": {},
143
+ "source": [
144
+ "## Config\n",
145
+ "You need to fill this in with your desired repos. Note I used 5 for the `MAX_WORKERS` since `jina-embeddings-v2` are quite memory hungry. "
146
+ ]
147
+ },
148
+ {
149
+ "cell_type": "code",
150
+ "execution_count": 3,
151
+ "id": "dcd7daed-6aca-4fe7-85ce-534bdcd8bc87",
152
+ "metadata": {
153
+ "tags": []
154
+ },
155
+ "outputs": [],
156
+ "source": [
157
+ "ENDPOINT_NAME = \"jais13b-demo\""
158
+ ]
159
+ },
160
+ {
161
+ "cell_type": "code",
162
+ "execution_count": 4,
163
+ "id": "0ca1140c-3fcc-4b99-9210-6da1505a27b7",
164
+ "metadata": {
165
+ "tags": []
166
+ },
167
+ "outputs": [
168
+ {
169
+ "data": {
170
+ "application/vnd.jupyter.widget-view+json": {
171
+ "model_id": "3c7ff285544d4ea9a1cc985cf981993c",
172
+ "version_major": 2,
173
+ "version_minor": 0
174
+ },
175
+ "text/plain": [
176
+ "VBox(children=(HTML(value='<center> <img\\nsrc=https://huggingface.co/front/assets/huggingface_logo-noborder.sv…"
177
+ ]
178
+ },
179
+ "metadata": {},
180
+ "output_type": "display_data"
181
+ }
182
+ ],
183
+ "source": [
184
+ "login()"
185
+ ]
186
+ },
187
+ {
188
+ "cell_type": "markdown",
189
+ "id": "5f4ba0a8-0a6c-4705-a73b-7be09b889610",
190
+ "metadata": {},
191
+ "source": [
192
+ "Some users might have payment registered in an organization. This allows you to connect to an organization (that you are a member of) with a payment method.\n",
193
+ "\n",
194
+ "Leave it blank is you want to use your username."
195
+ ]
196
+ },
197
+ {
198
+ "cell_type": "code",
199
+ "execution_count": 5,
200
+ "id": "88cdbd73-5923-4ae9-9940-b6be935f70fa",
201
+ "metadata": {
202
+ "tags": []
203
+ },
204
+ "outputs": [
205
+ {
206
+ "name": "stdin",
207
+ "output_type": "stream",
208
+ "text": [
209
+ "What is your Hugging Face 🤗 username or organization? (with an added payment method) ········\n"
210
+ ]
211
+ }
212
+ ],
213
+ "source": [
214
+ "who = whoami()\n",
215
+ "organization = getpass(prompt=\"What is your Hugging Face 🤗 username or organization? (with an added payment method)\")\n",
216
+ "\n",
217
+ "namespace = organization or who['name']"
218
+ ]
219
+ },
220
+ {
221
+ "cell_type": "markdown",
222
+ "id": "93096cbc-81c6-4137-a283-6afb0f48fbb9",
223
+ "metadata": {},
224
+ "source": [
225
+ "# Inference Endpoints\n",
226
+ "## Create Inference Endpoint\n",
227
+ "We are going to use the [API](https://huggingface.co/docs/inference-endpoints/api_reference) to create an [Inference Endpoint](https://huggingface.co/inference-endpoints). This should provide a few main benefits:\n",
228
+ "- It's convenient (No clicking)\n",
229
+ "- It's repeatable (We have the code to run it easily)\n",
230
+ "- It's cheaper (No time spent waiting for it to load, and automatically shut it down)"
231
+ ]
232
+ },
233
+ {
234
+ "cell_type": "markdown",
235
+ "id": "1cf8334d-6500-412e-9d6d-58990c42c110",
236
+ "metadata": {},
237
+ "source": [
238
+ "Here is a convenient table of instance details you can use when selecting a GPU. Once you have chosen a GPU in Inference Endpoints, you can use the corresponding `instanceType` and `instanceSize`.\n",
239
+ "| hw_desc | instanceType | instanceSize | vRAM |\n",
240
+ "|---------------------|----------------|--------------|-------|\n",
241
+ "| 1x Nvidia Tesla T4 | g4dn.xlarge | small | 16GB |\n",
242
+ "| 4x Nvidia Tesla T4 | g4dn.12xlarge | large | 64GB |\n",
243
+ "| 1x Nvidia A10G | g5.2xlarge | medium | 24GB |\n",
244
+ "| 4x Nvidia A10G | g5.12xlarge | xxlarge | 96GB |\n",
245
+ "| 1x Nvidia A100 | p4de | xlarge | 80GB |\n",
246
+ "| 2x Nvidia A100 | p4de | 2xlarge | 160GB |\n",
247
+ "\n",
248
+ "Note: To use a node (multiple GPUs) you will need to use a sharded version of jais. I'm not sure if there is currently a version like this on the hub. "
249
+ ]
250
+ },
251
+ {
252
+ "cell_type": "code",
253
+ "execution_count": 6,
254
+ "id": "89c7cc21-3dfe-40e6-80ff-1dcc8558859e",
255
+ "metadata": {
256
+ "tags": []
257
+ },
258
+ "outputs": [],
259
+ "source": [
260
+ "hw_dict = dict(\n",
261
+ " accelerator=\"gpu\",\n",
262
+ " vendor=\"aws\",\n",
263
+ " region=\"us-east-1\",\n",
264
+ " type=\"protected\",\n",
265
+ " instance_type=\"p4de\",\n",
266
+ " instance_size=\"xlarge\",\n",
267
+ ")"
268
+ ]
269
+ },
270
+ {
271
+ "cell_type": "markdown",
272
+ "id": "bbc82ce5-d7fa-4167-adc1-b25e567f5559",
273
+ "metadata": {},
274
+ "source": [
275
+ "This is one of the most important parts of this tutorial to understand well. Its important that we choose the deployment settings that best represent our needs and our hardware. I'll just leave some high-level information here and we can go deeper in a future tutorial. It would be interesting to show the difference in how you would optimize your deployment between a chat application and RAG.\n",
276
+ "\n",
277
+ "`MAX_BATCH_PREFILL_TOKENS` | [docs](https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#maxbatchprefilltokens) |\n",
278
+ "> Limits the number of tokens for the prefill operation. Since this operation take the most memory and is compute bound, it is interesting to limit the number of requests that can be sent\n",
279
+ "\n",
280
+ "`MAX_INPUT_LENGTH` | [docs](https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#maxinputlength) |\n",
281
+ "> This is the maximum allowed input length (expressed in number of tokens) for users. The larger this value, the longer prompt users can send which can impact the overall memory required to handle the load. Please note that some models have a finite range of sequence they can handle\n",
282
+ "\n",
283
+ "I left this quite large as I want to give a lot of freedom to the user more than I want to trade performance. It's important in RAG applications to give more freedom here. But for few turn chat applications you can be more restrictive.\n",
284
+ "\n",
285
+ "`MAX_TOTAL_TOKENS` | [docs](https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#maxtotaltokens) | \n",
286
+ "> This is the most important value to set as it defines the \"memory budget\" of running clients requests. Clients will send input sequences and ask to generate `max_new_tokens` on top. with a value of `1512` users can send either a prompt of `1000` and ask for `512` new tokens, or send a prompt of `1` and ask for `1511` max_new_tokens. The larger this value, the larger amount each request will be in your RAM and the less effective batching can be.\n",
287
+ "\n",
288
+ "`TRUST_REMOTE_CODE` This is set to `true` as jais requires it.\n",
289
+ "\n",
290
+ "`QUANTIZE` | [docs](https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#quantize) |\n",
291
+ "> Whether you want the model to be quantized\n",
292
+ "\n",
293
+ "With jais, you really only have the bitsandbytes option. The tradeoff is that inference is a bit slower, but you can use much smaller GPUs (~3x smaller) without noticably losing performance. It's one of the better reads IMO and I recommend checking out the [paper](https://arxiv.org/abs/2208.07339)."
294
+ ]
295
+ },
296
+ {
297
+ "cell_type": "code",
298
+ "execution_count": 7,
299
+ "id": "f4267bce-8516-4f3a-b1cc-8ccd6c14a9c7",
300
+ "metadata": {
301
+ "tags": []
302
+ },
303
+ "outputs": [],
304
+ "source": [
305
+ "tgi_env = {\n",
306
+ " \"MAX_BATCH_PREFILL_TOKENS\": \"2048\",\n",
307
+ " \"MAX_INPUT_LENGTH\": \"2000\",\n",
308
+ " 'TRUST_REMOTE_CODE':'true',\n",
309
+ " \"QUANTIZE\": 'bitsandbytes', \n",
310
+ " \"MODEL_ID\": \"/repository\"\n",
311
+ "}"
312
+ ]
313
+ },
314
+ {
315
+ "cell_type": "markdown",
316
+ "id": "74fd83a0-fef0-4e47-8ff1-f4ba7aed131d",
317
+ "metadata": {},
318
+ "source": [
319
+ "A couple notes on my choices here:\n",
320
+ "- I used `derek-thomas/jais-13b-chat-hf` because that repo has SafeTensors merged which will lead to faster loading of the TGI container\n",
321
+ "- I'm using the latest TGI container as of the time of writing (1.3.4)\n",
322
+ "- `min_replica=0` allows [zero scaling](https://huggingface.co/docs/inference-endpoints/autoscaling#scaling-to-0) which is really useful for your wallet though think through if this makes sense for your use-case as there will be loading times\n",
323
+ "- `max_replica` allows you to handle high throughput. Make sure you read through the [docs](https://huggingface.co/docs/inference-endpoints/autoscaling#scaling-criteria) to understand how this scales"
324
+ ]
325
+ },
326
+ {
327
+ "cell_type": "code",
328
+ "execution_count": 8,
329
+ "id": "9e59de46-26b7-4bb9-bbad-8bba9931bde7",
330
+ "metadata": {
331
+ "tags": []
332
+ },
333
+ "outputs": [],
334
+ "source": [
335
+ "endpoint = create_inference_endpoint(\n",
336
+ " ENDPOINT_NAME,\n",
337
+ " repository=\"derek-thomas/jais-13b-chat-hf\", \n",
338
+ " framework=\"pytorch\",\n",
339
+ " task=\"text-generation\",\n",
340
+ " **hw_dict,\n",
341
+ " min_replica=0,\n",
342
+ " max_replica=1,\n",
343
+ " namespace=namespace,\n",
344
+ " custom_image={\n",
345
+ " \"health_route\": \"/health\",\n",
346
+ " \"env\": tgi_env,\n",
347
+ " \"url\": \"ghcr.io/huggingface/text-generation-inference:1.3.4\",\n",
348
+ " },\n",
349
+ ")"
350
+ ]
351
+ },
352
+ {
353
+ "cell_type": "markdown",
354
+ "id": "96d173b2-8980-4554-9039-c62843d3fc7d",
355
+ "metadata": {},
356
+ "source": [
357
+ "## Wait until its running"
358
+ ]
359
+ },
360
+ {
361
+ "cell_type": "code",
362
+ "execution_count": 9,
363
+ "id": "5f3a8bd2-753c-49a8-9452-899578beddc5",
364
+ "metadata": {
365
+ "tags": []
366
+ },
367
+ "outputs": [
368
+ {
369
+ "name": "stdout",
370
+ "output_type": "stream",
371
+ "text": [
372
+ "CPU times: user 188 ms, sys: 101 ms, total: 289 ms\n",
373
+ "Wall time: 2min 56s\n"
374
+ ]
375
+ },
376
+ {
377
+ "data": {
378
+ "text/plain": [
379
+ "InferenceEndpoint(name='jais13b-demo', namespace='HF-test-lab', repository='derek-thomas/jais-13b-chat-hf', status='running', url='https://kgcd24dil090jo6n.us-east-1.aws.endpoints.huggingface.cloud')"
380
+ ]
381
+ },
382
+ "execution_count": 9,
383
+ "metadata": {},
384
+ "output_type": "execute_result"
385
+ }
386
+ ],
387
+ "source": [
388
+ "%%time\n",
389
+ "endpoint.wait()"
390
+ ]
391
+ },
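+ {
+ "cell_type": "markdown",
+ "id": "reattach-endpoint-note",
+ "metadata": {},
+ "source": [
+ "As an aside, if your kernel restarts (or you come back to this later) you don't need to recreate the endpoint. Here is a minimal sketch, assuming the endpoint created above still exists under the same name and namespace:\n",
+ "```python\n",
+ "from huggingface_hub import get_inference_endpoint\n",
+ "\n",
+ "# Re-attach to the already-created endpoint instead of creating a new one\n",
+ "endpoint = get_inference_endpoint(ENDPOINT_NAME, namespace=namespace)\n",
+ "endpoint.wait()  # blocks until the endpoint reports a running status\n",
+ "```"
+ ]
+ },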
392
+ {
393
+ "cell_type": "code",
394
+ "execution_count": 10,
395
+ "id": "189b26f0-d404-4570-a1b9-e2a9d486c1f7",
396
+ "metadata": {
397
+ "tags": []
398
+ },
399
+ "outputs": [
400
+ {
401
+ "data": {
402
+ "text/plain": [
403
+ "'POSITIVE'"
404
+ ]
405
+ },
406
+ "execution_count": 10,
407
+ "metadata": {},
408
+ "output_type": "execute_result"
409
+ }
410
+ ],
411
+ "source": [
412
+ "endpoint.client.text_generation(\"\"\"\n",
413
+ "### Instruction: What is the sentiment of the input?\n",
414
+ "### Examples\n",
415
+ "I wish the screen was bigger - Negative\n",
416
+ "I hate the battery - Negative\n",
417
+ "I love the default appliations - Positive\n",
418
+ "### Input\n",
419
+ "I am happy with this purchase - \n",
420
+ "### Response\n",
421
+ "\"\"\",\n",
422
+ " do_sample=True,\n",
423
+ " repetition_penalty=1.2,\n",
424
+ " top_p=0.9,\n",
425
+ " temperature=0.3)"
426
+ ]
427
+ },
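+ {
+ "cell_type": "markdown",
+ "id": "prefill-decode-details-note",
+ "metadata": {},
+ "source": [
+ "To tie this back to the prefill/decode discussion above, TGI can return per-token generation details. Below is a minimal sketch (not executed here), assuming the endpoint is still running and a recent `huggingface_hub` that exposes the `details` and `decoder_input_details` arguments; the prompt is just a shortened stand-in:\n",
+ "```python\n",
+ "# Ask TGI for generation details so we can inspect prefill vs decoded tokens\n",
+ "out = endpoint.client.text_generation(\n",
+ "    \"I am happy with this purchase - \",\n",
+ "    max_new_tokens=5,\n",
+ "    details=True,                # include per-token details for the generated text\n",
+ "    decoder_input_details=True,  # also return the prefill (input) tokens\n",
+ ")\n",
+ "print(out.generated_text)\n",
+ "print(\"prefill tokens:\", len(out.details.prefill))  # known up front (prefill phase)\n",
+ "print(\"decoded tokens:\", len(out.details.tokens))   # generated one at a time (decode phase)\n",
+ "```"
+ ]
+ },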
428
+ {
429
+ "cell_type": "markdown",
430
+ "id": "bab97c7b-7bac-4bf5-9752-b528294dadc7",
431
+ "metadata": {},
432
+ "source": [
433
+ "## Pause Inference Endpoint\n",
434
+ "Now that we have finished, lets pause the endpoint so we don't incur any extra charges, this will also allow us to analyze the cost."
435
+ ]
436
+ },
437
+ {
438
+ "cell_type": "code",
439
+ "execution_count": 11,
440
+ "id": "540a0978-7670-4ce3-95c1-3823cc113b85",
441
+ "metadata": {
442
+ "tags": []
443
+ },
444
+ "outputs": [
445
+ {
446
+ "name": "stdout",
447
+ "output_type": "stream",
448
+ "text": [
449
+ "Endpoint Status: paused\n"
450
+ ]
451
+ }
452
+ ],
453
+ "source": [
454
+ "endpoint = endpoint.pause()\n",
455
+ "\n",
456
+ "print(f\"Endpoint Status: {endpoint.status}\")"
457
+ ]
458
+ },
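+ {
+ "cell_type": "markdown",
+ "id": "resume-endpoint-note",
+ "metadata": {},
+ "source": [
+ "If you want to pick this up again later, you can resume the paused endpoint rather than recreating it (billing starts again once it is running). A small sketch:\n",
+ "```python\n",
+ "# Resume the paused endpoint and wait for it to come back up\n",
+ "endpoint = endpoint.resume()\n",
+ "endpoint.wait()\n",
+ "```"
+ ]
+ },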
459
+ {
460
+ "cell_type": "markdown",
461
+ "id": "41abea64-379d-49de-8d9a-355c2f4ce1ac",
462
+ "metadata": {},
463
+ "source": [
464
+ "# Analyze Usage\n",
465
+ "1. Go to your `dashboard_url` printed below\n",
466
+ "1. Click on the Usage & Cost tab\n",
467
+ "1. See how much you have spent"
468
+ ]
469
+ },
470
+ {
471
+ "cell_type": "code",
472
+ "execution_count": 12,
473
+ "id": "16815445-3079-43da-b14e-b54176a07a62",
474
+ "metadata": {
475
+ "tags": []
476
+ },
477
+ "outputs": [
478
+ {
479
+ "name": "stdout",
480
+ "output_type": "stream",
481
+ "text": [
482
+ "https://ui.endpoints.huggingface.co/HF-test-lab/endpoints/jais13b-demo/analytics\n"
483
+ ]
484
+ }
485
+ ],
486
+ "source": [
487
+ "dashboard_url = f'https://ui.endpoints.huggingface.co/{namespace}/endpoints/{ENDPOINT_NAME}/analytics'\n",
488
+ "print(dashboard_url)"
489
+ ]
490
+ },
491
+ {
492
+ "cell_type": "markdown",
493
+ "id": "b953d5be-2494-4ff8-be42-9daf00c99c41",
494
+ "metadata": {},
495
+ "source": [
496
+ "# Delete Endpoint\n",
497
+ "We should see a `200` if everything went correctly."
498
+ ]
499
+ },
500
+ {
501
+ "cell_type": "code",
502
+ "execution_count": 13,
503
+ "id": "c310c0f3-6f12-4d5c-838b-3a4c1f2e54ad",
504
+ "metadata": {
505
+ "tags": []
506
+ },
507
+ "outputs": [
508
+ {
509
+ "name": "stdout",
510
+ "output_type": "stream",
511
+ "text": [
512
+ "Endpoint deleted successfully\n"
513
+ ]
514
+ }
515
+ ],
516
+ "source": [
517
+ "endpoint = endpoint.delete()\n",
518
+ "\n",
519
+ "if not endpoint:\n",
520
+ " print('Endpoint deleted successfully')\n",
521
+ "else:\n",
522
+ " print('Delete Endpoint in manually') "
523
+ ]
524
+ },
525
+ {
526
+ "cell_type": "code",
527
+ "execution_count": null,
528
+ "id": "611e1345-8d8c-46b1-a9f8-cff27eecb426",
529
+ "metadata": {},
530
+ "outputs": [],
531
+ "source": []
532
+ }
533
+ ],
534
+ "metadata": {
535
+ "kernelspec": {
536
+ "display_name": "Python 3 (ipykernel)",
537
+ "language": "python",
538
+ "name": "python3"
539
+ },
540
+ "language_info": {
541
+ "codemirror_mode": {
542
+ "name": "ipython",
543
+ "version": 3
544
+ },
545
+ "file_extension": ".py",
546
+ "mimetype": "text/x-python",
547
+ "name": "python",
548
+ "nbconvert_exporter": "python",
549
+ "pygments_lexer": "ipython3",
550
+ "version": "3.9.6"
551
+ }
552
+ },
553
+ "nbformat": 4,
554
+ "nbformat_minor": 5
555
+ }