Nymbo commited on
Commit
7b8fdad
1 Parent(s): 17469b4

Upload folder using huggingface_hub

Browse files
This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. 1 Getting Started/01_quickstart.md +119 -0
  2. 1 Getting Started/02_key-features.md +194 -0
  3. 2 Building Interfaces/00_the-interface-class.md +110 -0
  4. 2 Building Interfaces/01_more-on-examples.md +44 -0
  5. 2 Building Interfaces/02_flagging.md +46 -0
  6. 2 Building Interfaces/03_interface-state.md +33 -0
  7. 2 Building Interfaces/04_reactive-interfaces.md +29 -0
  8. 2 Building Interfaces/05_four-kinds-of-interfaces.md +45 -0
  9. 3 Additional Features/01_queuing.md +17 -0
  10. 3 Additional Features/02_streaming-outputs.md +21 -0
  11. 3 Additional Features/03_alerts.md +19 -0
  12. 3 Additional Features/04_styling.md +13 -0
  13. 3 Additional Features/05_progress-bars.md +9 -0
  14. 3 Additional Features/06_batch-functions.md +60 -0
  15. 3 Additional Features/07_resource-cleanup.md +49 -0
  16. 3 Additional Features/08_environment-variables.md +118 -0
  17. 3 Additional Features/09_sharing-your-app.md +482 -0
  18. 4 Building with Blocks/01_blocks-and-event-listeners.md +244 -0
  19. 4 Building with Blocks/02_controlling-layout.md +138 -0
  20. 4 Building with Blocks/03_state-in-blocks.md +31 -0
  21. 4 Building with Blocks/04_dynamic-apps-with-render-decorator.md +67 -0
  22. 4 Building with Blocks/05_custom-CSS-and-JS.md +123 -0
  23. 4 Building with Blocks/06_using-blocks-like-functions.md +91 -0
  24. 5 Chatbots/01_creating-a-chatbot-fast.md +366 -0
  25. 5 Chatbots/02_creating-a-custom-chatbot-with-blocks.md +114 -0
  26. 5 Chatbots/03_creating-a-discord-bot-from-a-gradio-app.md +138 -0
  27. 6 Custom Components/01_custom-components-in-five-minutes.md +125 -0
  28. 6 Custom Components/02_key-component-concepts.md +125 -0
  29. 6 Custom Components/03_configuration.md +101 -0
  30. 6 Custom Components/04_backend.md +228 -0
  31. 6 Custom Components/05_frontend.md +370 -0
  32. 6 Custom Components/06_frequently-asked-questions.md +75 -0
  33. 6 Custom Components/07_pdf-component-example.md +687 -0
  34. 6 Custom Components/08_multimodal-chatbot-part1.md +359 -0
  35. 6 Custom Components/09_documenting-custom-components.md +275 -0
  36. 7 Tabular Data Science and Plots/01_connecting-to-a-database.md +154 -0
  37. 7 Tabular Data Science and Plots/creating-a-dashboard-from-bigquery-data.md +123 -0
  38. 7 Tabular Data Science and Plots/creating-a-dashboard-from-supabase-data.md +122 -0
  39. 7 Tabular Data Science and Plots/creating-a-realtime-dashboard-from-google-sheets.md +143 -0
  40. 7 Tabular Data Science and Plots/plot-component-for-maps.md +111 -0
  41. 7 Tabular Data Science and Plots/styling-the-gradio-dataframe.md +168 -0
  42. 7 Tabular Data Science and Plots/using-gradio-for-tabular-workflows.md +104 -0
  43. 8 Gradio Clients and Lite/01_getting-started-with-the-python-client.md +352 -0
  44. 8 Gradio Clients and Lite/02_getting-started-with-the-js-client.md +328 -0
  45. 8 Gradio Clients and Lite/03_querying-gradio-apps-with-curl.md +304 -0
  46. 8 Gradio Clients and Lite/04_gradio-and-llm-agents.md +140 -0
  47. 8 Gradio Clients and Lite/05_gradio-lite.md +236 -0
  48. 8 Gradio Clients and Lite/06_gradio-lite-and-transformers-js.md +197 -0
  49. 8 Gradio Clients and Lite/07_fastapi-app-with-the-gradio-client.md +198 -0
  50. 9 Other Tutorials/01_using-hugging-face-integrations.md +135 -0
1 Getting Started/01_quickstart.md ADDED
@@ -0,0 +1,119 @@
# Quickstart

Gradio is an open-source Python package that allows you to quickly **build** a demo or web application for your machine learning model, API, or any arbitrary Python function. You can then **share** a link to your demo or web application in just a few seconds using Gradio's built-in sharing features. *No JavaScript, CSS, or web hosting experience needed!*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/lcm-screenshot-3.gif" style="padding-bottom: 10px">

It just takes a few lines of Python to create a demo like the one above, so let's get started 💫

## Installation

**Prerequisite**: Gradio requires [Python 3.8 or higher](https://www.python.org/downloads/)

We recommend installing Gradio using `pip`, which is included by default in Python. Run this in your terminal or command prompt:

```bash
pip install gradio
```

Tip: it is best to install Gradio in a virtual environment. Detailed installation instructions for all common operating systems <a href="https://www.gradio.app/main/guides/installing-gradio-in-a-virtual-environment">are provided here</a>.

## Building Your First Demo

You can run Gradio in your favorite code editor, Jupyter notebook, Google Colab, or anywhere else you write Python. Let's write your first Gradio app:

$code_hello_world_4
Tip: We shorten the imported name from <code>gradio</code> to <code>gr</code> for better readability of code. This is a widely adopted convention that you should follow so that anyone working with your code can easily understand it.

Now, run your code. If you've written the Python code in a file named, for example, `app.py`, then you would run `python app.py` from the terminal.

The demo below will open in a browser on [http://localhost:7860](http://localhost:7860) if running from a file. If you are running within a notebook, the demo will appear embedded within the notebook.

$demo_hello_world_4

Type your name in the textbox on the left, drag the slider, and then press the Submit button. You should see a friendly greeting on the right.

Tip: When developing locally, you can run your Gradio app in <strong>hot reload mode</strong>, which automatically reloads the Gradio app whenever you make changes to the file. To do this, simply type in <code>gradio</code> before the name of the file instead of <code>python</code>. In the example above, you would type: `gradio app.py` in your terminal. Learn more about hot reloading in the <a href="https://www.gradio.app/guides/developing-faster-with-reload-mode">Hot Reloading Guide</a>.

**Understanding the `Interface` Class**

You'll notice that in order to make your first demo, you created an instance of the `gr.Interface` class. The `Interface` class is designed to create demos for machine learning models which accept one or more inputs, and return one or more outputs.

The `Interface` class has three core arguments:

- `fn`: the function to wrap a user interface (UI) around
- `inputs`: the Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.
- `outputs`: the Gradio component(s) to use for the output. The number of components should match the number of return values from your function.

The `fn` argument is very flexible -- you can pass *any* Python function that you want to wrap with a UI. In the example above, we saw a relatively simple function, but the function could be anything from a music generator to a tax calculator to the prediction function of a pretrained machine learning model.

The `inputs` and `outputs` arguments take one or more Gradio components. As we'll see, Gradio includes more than [30 built-in components](https://www.gradio.app/docs/gradio/components) (such as the `gr.Textbox()`, `gr.Image()`, and `gr.HTML()` components) that are designed for machine learning applications.

Tip: For the `inputs` and `outputs` arguments, you can pass in the name of these components as a string (`"textbox"`) or an instance of the class (`gr.Textbox()`).

If your function accepts more than one argument, as is the case above, pass a list of input components to `inputs`, with each input component corresponding to one of the arguments of the function, in order. The same holds true if your function returns more than one value: simply pass in a list of components to `outputs`. This flexibility makes the `Interface` class a very powerful way to create demos.
We'll dive deeper into the `gr.Interface` class in our series on [building Interfaces](https://www.gradio.app/main/guides/the-interface-class).

## Sharing Your Demo

What good is a beautiful demo if you can't share it? Gradio lets you easily share a machine learning demo without having to worry about the hassle of hosting on a web server. Simply set `share=True` in `launch()`, and a publicly accessible URL will be created for your demo. Let's revisit our example demo, but change the last line as follows:

```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!"

demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox")

demo.launch(share=True)  # Share your demo with just 1 extra parameter 🚀
```

When you run this code, a public URL will be generated for your demo in a matter of seconds, something like:

👉 &nbsp; `https://a23dsf231adb.gradio.live`

Now, anyone around the world can try your Gradio demo from their browser, while the machine learning model and all computation continues to run locally on your computer.

To learn more about sharing your demo, read our dedicated guide on [sharing your Gradio application](https://www.gradio.app/guides/sharing-your-app).

## Core Gradio Classes

So far, we've been discussing the `Interface` class, which is a high-level class that lets you build demos quickly with Gradio. But what else does Gradio include?

### Chatbots with `gr.ChatInterface`

Gradio includes another high-level class, `gr.ChatInterface`, which is specifically designed to create Chatbot UIs. Similar to `Interface`, you supply a function and Gradio creates a fully working Chatbot UI. If you're interested in creating a chatbot, you can jump straight to [our dedicated guide on `gr.ChatInterface`](https://www.gradio.app/guides/creating-a-chatbot-fast).

### Custom Demos with `gr.Blocks`

Gradio also offers a low-level approach for designing web apps with more flexible layouts and data flows with the `gr.Blocks` class. Blocks allows you to do things like control where components appear on the page, handle complex data flows (e.g. outputs can serve as inputs to other functions), and update properties/visibility of components based on user interaction — still all in Python.

You can build very custom and complex applications using `gr.Blocks()`. For example, the popular image generation [Automatic1111 Web UI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) is built using Gradio Blocks. We dive deeper into `gr.Blocks` in our series on [building with Blocks](https://www.gradio.app/guides/blocks-and-event-listeners).

### The Gradio Python & JavaScript Ecosystem

That's the gist of the core `gradio` Python library, but Gradio is actually so much more! It's an entire ecosystem of Python and JavaScript libraries that let you build machine learning applications, or query them programmatically, in Python or JavaScript. Here are other related parts of the Gradio ecosystem:

* [Gradio Python Client](https://www.gradio.app/guides/getting-started-with-the-python-client) (`gradio_client`): query any Gradio app programmatically in Python.
* [Gradio JavaScript Client](https://www.gradio.app/guides/getting-started-with-the-js-client) (`@gradio/client`): query any Gradio app programmatically in JavaScript.
* [Gradio-Lite](https://www.gradio.app/guides/gradio-lite) (`@gradio/lite`): write Gradio apps in Python that run entirely in the browser (no server needed!), thanks to Pyodide.
* [Hugging Face Spaces](https://huggingface.co/spaces): the most popular place to host Gradio applications — for free!

## What's Next?

Keep learning about Gradio sequentially using the Gradio Guides, which include explanations as well as example code and embedded interactive demos. Next up: [let's dive deeper into the Interface class](https://www.gradio.app/guides/the-interface-class).

Or, if you already know the basics and are looking for something specific, you can search the more [technical API documentation](https://www.gradio.app/docs/).
1 Getting Started/02_key-features.md ADDED
@@ -0,0 +1,194 @@
# Key Features

Let's go through some of the key features of Gradio. This guide is intended to be a high-level overview of various things that you should be aware of as you build your demo. Where appropriate, we link to more detailed guides on specific topics.

1. [Components](#components)
2. [Queuing](#queuing)
3. [Streaming outputs](#streaming-outputs)
4. [Streaming inputs](#streaming-inputs)
5. [Alert modals](#alert-modals)
6. [Styling](#styling)
7. [Progress bars](#progress-bars)
8. [Batch functions](#batch-functions)

## Components

Gradio includes more than 30 pre-built components (as well as many user-built _custom components_) that can be used as inputs or outputs in your demo with a single line of code. These components correspond to common data types in machine learning and data science, e.g. the `gr.Image` component is designed to handle input or output images, the `gr.Label` component displays classification labels and probabilities, the `gr.Plot` component displays various kinds of plots, and so on.

Each component includes various constructor attributes that control the properties of the component. For example, you can control the number of lines in a `gr.Textbox` using the `lines` argument (which takes a positive integer) in its constructor. Or you can control the way that a user can provide an image in the `gr.Image` component using the `sources` parameter (which takes a list like `["webcam", "upload"]`).
**Static and Interactive Components**

Every component has a _static_ version that is designed to *display* data, and most components also have an _interactive_ version designed to let users input or modify the data. Typically, you don't need to think about this distinction, because when you build a Gradio demo, Gradio automatically figures out whether the component should be static or interactive based on whether it is being used as an input or output. However, you can set this manually using the `interactive` argument that every component supports.

**Preprocessing and Postprocessing**

When a component is used as an input, Gradio automatically handles the _preprocessing_ needed to convert the data from a type sent by the user's browser (such as an uploaded image) to a form that can be accepted by your function (such as a `numpy` array).

Similarly, when a component is used as an output, Gradio automatically handles the _postprocessing_ needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (a gallery of images).

Consider an example demo with three input components (`gr.Textbox`, `gr.Number`, and `gr.Image`) and two outputs (`gr.Number` and `gr.Gallery`) that serve as a UI for your image-to-image generation model. Below is a diagram of what our preprocessing will send to the model and what our postprocessing will require from it.

![](https://github.com/gradio-app/gradio/blob/main/guides/assets/dataflow.svg?raw=true)

In this image, the following preprocessing steps happen to send the data from the browser to your function:

* The text in the textbox is converted to a Python `str` (essentially no preprocessing)
* The number in the number input is converted to a Python `int` (essentially no preprocessing)
* Most importantly, the image supplied by the user is converted to a `numpy.array` representation of the RGB values in the image

Images are converted to NumPy arrays because they are a common format for machine learning workflows. You can control the _preprocessing_ using the component's parameters when constructing the component. For example, if you instantiate the `Image` component with the following parameters, it will preprocess the image to the `PIL` format instead:

```py
img = gr.Image(type="pil")
```

Postprocessing is even simpler! Gradio automatically recognizes the format of the returned data (e.g. does the user's function return a `numpy` array or a `str` filepath for the `gr.Image` component?) and postprocesses it appropriately into a format that can be displayed by the browser.

So in the image above, the following postprocessing steps happen to send the data returned from a user's function to the browser:

* The `float` is displayed as a number directly to the user
* The list of string filepaths (`list[str]`) is interpreted as a list of image filepaths and displayed as a gallery in the browser

Take a look at the [Docs](https://gradio.app/docs) to see all the parameters for each Gradio component.

## Queuing

Every Gradio app comes with a built-in queuing system that can scale to thousands of concurrent users. You can configure the queue using the `queue()` method, which is supported by the `gr.Interface`, `gr.Blocks`, and `gr.ChatInterface` classes.

For example, you can control the number of requests processed at a single time by setting the `default_concurrency_limit` parameter of `queue()`, e.g.

```python
demo = gr.Interface(...).queue(default_concurrency_limit=5)
demo.launch()
```

This limits the number of requests processed for this event listener at a single time to 5. By default, `default_concurrency_limit` is set to `1`, which means that when many users are using your app, only a single user's request will be processed at a time. This is because many machine learning functions consume a significant amount of memory, so it is only suitable to have a single user using the demo at a time. However, you can easily change this parameter in your demo.
See the [docs on queueing](https://gradio.app/docs/gradio/interface#interface-queue) for more details on configuring the queuing parameters.

## Streaming outputs

In some cases, you may want to stream a sequence of outputs rather than show a single output at once. For example, you might have an image generation model and you want to show the image that is generated at each step, leading up to the final image. Or you might have a chatbot which streams its response one token at a time instead of returning it all at once.

In such cases, you can supply a **generator** function into Gradio instead of a regular function. Creating generators in Python is very simple: instead of a single `return` value, a function should `yield` a series of values instead. Usually the `yield` statement is put in some kind of loop. Here's an example of a generator that simply counts up to a given number:

```python
def my_generator(x):
    for i in range(x):
        yield i
```

You supply a generator into Gradio the same way as you would a regular function. For example, here's a (fake) image generation model that generates noise for several steps before outputting an image, using the `gr.Interface` class:

$code_fake_diffusion
$demo_fake_diffusion

Note that we've added a `time.sleep(1)` in the iterator to create an artificial pause between steps so that you are able to observe the steps of the iterator (in a real image generation model, this probably wouldn't be necessary).

## Streaming inputs

Similarly, Gradio can handle streaming inputs, e.g. a live audio stream that gets transcribed to text in real time, or an image generation model that reruns every time a user types a letter in a textbox. This is covered in more detail in our guide on building [reactive Interfaces](/guides/reactive-interfaces).
## Alert modals

You may wish to raise alerts to the user. To do so, raise a `gr.Error("custom message")` to display an error message. You can also issue `gr.Warning("message")` and `gr.Info("message")` by having them as standalone lines in your function, which will immediately display modals while continuing the execution of your function. Queueing needs to be enabled for this to work.

Note below how the `gr.Error` has to be raised, while the `gr.Warning` and `gr.Info` are single lines.

```python
def start_process(name):
    gr.Info("Starting process")
    if name is None:
        gr.Warning("Name is empty")
    ...
    if not success:
        raise gr.Error("Process failed")
```
## Styling

Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Interface` constructor. For example:

```python
demo = gr.Interface(..., theme=gr.themes.Monochrome())
```

Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [theming guide](https://gradio.app/guides/theming-guide) for more details.

For additional styling ability, you can pass any CSS (as well as custom JavaScript) to your Gradio application. This is discussed in more detail in our [custom JS and CSS guide](/guides/custom-CSS-and-JS).

## Progress bars

Gradio supports custom progress bars, so that you have control over the progress updates shown to the user. To enable this, simply add an argument to your method that has a default value of a `gr.Progress` instance. Then you can update the progress level by calling this instance directly with a float between 0 and 1, or by using the `tqdm()` method of the `Progress` instance to track progress over an iterable, as shown below.

$code_progress_simple
$demo_progress_simple

If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument as `gr.Progress(track_tqdm=True)`!
## Batch functions

Gradio supports the ability to pass _batch_ functions. Batch functions are just functions which take in a list of inputs and return a list of predictions.

For example, here is a batched function that takes in two lists of inputs (a list of words and a list of ints), and returns a list of trimmed words as output:

```py
import time

def trim_words(words, lens):
    trimmed_words = []
    time.sleep(5)
    for w, l in zip(words, lens):
        trimmed_words.append(w[:int(l)])
    return [trimmed_words]
```

The advantage of using batched functions is that if you enable queuing, the Gradio server can automatically _batch_ incoming requests and process them in parallel, potentially speeding up your demo. Here's what the Gradio code looks like (notice the `batch=True` and `max_batch_size=16`).

With the `gr.Interface` class:

```python
demo = gr.Interface(
    fn=trim_words,
    inputs=["textbox", "number"],
    outputs=["output"],
    batch=True,
    max_batch_size=16
)

demo.launch()
```

With the `gr.Blocks` class:

```py
import gradio as gr

with gr.Blocks() as demo:
    with gr.Row():
        word = gr.Textbox(label="word")
        leng = gr.Number(label="leng")
        output = gr.Textbox(label="Output")
    with gr.Row():
        run = gr.Button()

    event = run.click(trim_words, [word, leng], output, batch=True, max_batch_size=16)

demo.launch()
```

In the example above, 16 requests could be processed in parallel (for a total inference time of 5 seconds), instead of each request being processed separately (for a total inference time of 80 seconds). Many Hugging Face `transformers` and `diffusers` models work very naturally with Gradio's batch mode: here's [an example demo using diffusers to generate images in batches](https://github.com/gradio-app/gradio/blob/main/demo/diffusers_with_batching/run.py).
2 Building Interfaces/00_the-interface-class.md ADDED
@@ -0,0 +1,110 @@
1
+
2
+ # The `Interface` class
3
+
4
+ As mentioned in the [Quickstart](/main/guides/quickstart), the `gr.Interface` class is a high-level abstraction in Gradio that allows you to quickly create a demo for any Python function simply by specifying the input types and the output types. Revisiting our first demo:
5
+
6
+ $code_hello_world_4
7
+
8
+
9
+ We see that the `Interface` class is initialized with three required parameters:
10
+
11
+ - `fn`: the function to wrap a user interface (UI) around
12
+ - `inputs`: which Gradio component(s) to use for the input. The number of components should match the number of arguments in your function.
13
+ - `outputs`: which Gradio component(s) to use for the output. The number of components should match the number of return values from your function.
14
+
15
+ In this Guide, we'll dive into `gr.Interface` and the various ways it can be customized, but before we do that, let's get a better understanding of Gradio components.
16
+
17
+ ## Gradio Components
18
+
19
+ Gradio includes more than 30 pre-built components (as well as many [community-built _custom components_](https://www.gradio.app/custom-components/gallery)) that can be used as inputs or outputs in your demo. These components correspond to common data types in machine learning and data science, e.g. the `gr.Image` component is designed to handle input or output images, the `gr.Label` component displays classification labels and probabilities, the `gr.Plot` component displays various kinds of plots, and so on.
20
+
21
+ **Static and Interactive Components**
22
+
23
+ Every component has a _static_ version that is designed to *display* data, and most components also have an _interactive_ version designed to let users input or modify the data. Typically, you don't need to think about this distinction, because when you build a Gradio demo, Gradio automatically figures out whether the component should be static or interactive based on whether it is being used as an input or output. However, you can set this manually using the `interactive` argument that every component supports.
24
+
25
+ **Preprocessing and Postprocessing**
26
+
27
+ When a component is used as an input, Gradio automatically handles the _preprocessing_ needed to convert the data from a type sent by the user's browser (such as an uploaded image) to a form that can be accepted by your function (such as a `numpy` array).
28
+
29
+
30
+ Similarly, when a component is used as an output, Gradio automatically handles the _postprocessing_ needed to convert the data from what is returned by your function (such as a list of image paths) to a form that can be displayed in the user's browser (a gallery of images).
31
+
32
+ ## Components Attributes
33
+
34
+ We used the default versions of the `gr.Textbox` and `gr.Slider`, but what if you want to change how the UI components look or behave?
35
+
36
+ Let's say you want to customize the slider to have values from 1 to 10, with a default of 2. And you wanted to customize the output text field — you want it to be larger and have a label.
37
+
38
+ If you use the actual classes for `gr.Textbox` and `gr.Slider` instead of the string shortcuts, you have access to much more customizability through component attributes.
39
+
40
+ $code_hello_world_2
41
+ $demo_hello_world_2
42
+
43
+ ## Multiple Input and Output Components
44
+
45
+ Suppose you had a more complex function, with multiple outputs as well. In the example below, we define a function that takes a string, boolean, and number, and returns a string and number.
46
+
47
+ $code_hello_world_3
48
+ $demo_hello_world_3
49
+
50
+ Just as each component in the `inputs` list corresponds to one of the parameters of the function, in order, each component in the `outputs` list corresponds to one of the values returned by the function, in order.
51
+
52
+ ## An Image Example
53
+
54
+ Gradio supports many types of components, such as `Image`, `DataFrame`, `Video`, or `Label`. Let's try an image-to-image function to get a feel for these!
55
+
56
+ $code_sepia_filter
57
+ $demo_sepia_filter
58
+
59
+ When using the `Image` component as input, your function will receive a NumPy array with the shape `(height, width, 3)`, where the last dimension represents the RGB values. We'll return an image as well in the form of a NumPy array.
60
+
61
+ As mentioned above, Gradio handles the preprocessing and postprocessing to convert images to NumPy arrays and vice versa. You can also control the preprocessing performed with the `type=` keyword argument. For example, if you wanted your function to take a file path to an image instead of a NumPy array, the input `Image` component could be written as:
62
+
63
+ ```python
64
+ gr.Image(type="filepath", shape=...)
65
+ ```
66
+
67
+ You can read more about the built-in Gradio components and how to customize them in the [Gradio docs](https://gradio.app/docs).
68
+
69
+ ## Example Inputs
70
+
71
+ You can provide example data that a user can easily load into `Interface`. This can be helpful to demonstrate the types of inputs the model expects, as well as to provide a way to explore your dataset in conjunction with your model. To load example data, you can provide a **nested list** to the `examples=` keyword argument of the Interface constructor. Each sublist within the outer list represents a data sample, and each element within the sublist represents an input for each input component. The format of example data for each component is specified in the [Docs](https://gradio.app/docs#components).
72
+
73
+ $code_calculator
74
+ $demo_calculator
75
+
76
+ You can load a large dataset into the examples to browse and interact with the dataset through Gradio. The examples will be automatically paginated (you can configure this through the `examples_per_page` argument of `Interface`).
77
+
78
+ Continue learning about examples in the [More On Examples](https://gradio.app/guides/more-on-examples) guide.
79
+
80
+ ## Descriptive Content
81
+
82
+ In the previous example, you may have noticed the `title=` and `description=` keyword arguments in the `Interface` constructor that helps users understand your app.
83
+
84
+ There are three arguments in the `Interface` constructor to specify where this content should go:
85
+
86
+ - `title`: which accepts text and can display it at the very top of the interface, and also becomes the page title.
87
+ - `description`: which accepts text, markdown or HTML and places it right under the title.
88
+ - `article`: which also accepts text, markdown or HTML and places it below the interface.
89
+
90
+ ![annotated](https://github.com/gradio-app/gradio/blob/main/guides/assets/annotated.png?raw=true)
91
+
92
+ Note: if you're using the `Blocks` class, you can insert text, markdown, or HTML anywhere in your application using the `gr.Markdown(...)` or `gr.HTML(...)` components.
93
+
94
+ Another useful keyword argument is `label=`, which is present in every `Component`. This modifies the label text at the top of each `Component`. You can also add the `info=` keyword argument to form elements like `Textbox` or `Radio` to provide further information on their usage.
95
+
96
+ ```python
97
+ gr.Number(label='Age', info='In years, must be greater than 0')
98
+ ```
99
+
100
+ ## Additional Inputs within an Accordion
101
+
102
+ If your prediction function takes many inputs, you may want to hide some of them within a collapsed accordion to avoid cluttering the UI. The `Interface` class takes an `additional_inputs` argument which is similar to `inputs` but any input components included here are not visible by default. The user must click on the accordion to show these components. The additional inputs are passed into the prediction function, in order, after the standard inputs.
103
+
104
+ You can customize the appearance of the accordion by using the optional `additional_inputs_accordion` argument, which accepts a string (in which case, it becomes the label of the accordion), or an instance of the `gr.Accordion()` class (e.g. this lets you control whether the accordion is open or closed by default).
105
+
106
+ Here's an example:
107
+
108
+ $code_interface_with_additional_inputs
109
+ $demo_interface_with_additional_inputs
110
+
2 Building Interfaces/01_more-on-examples.md ADDED
@@ -0,0 +1,44 @@
1
+
2
+ # More on Examples
3
+
4
+ In the [previous Guide](/main/guides/the-interface-class), we discussed how to provide example inputs for your demo to make it easier for users to try it out. Here, we dive into more details.
5
+
6
+ ## Providing Examples
7
+
8
+ Adding examples to an Interface is as easy as providing a list of lists to the `examples`
9
+ keyword argument.
10
+ Each sublist is a data sample, where each element corresponds to an input of the prediction function.
11
+ The inputs must be ordered in the same order as the prediction function expects them.
12
+
13
+ If your interface only has one input component, then you can provide your examples as a regular list instead of a list of lists.
14
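As a sketch, for a hypothetical two-input interface (a word and a length), the two forms would look like this (the values here are illustrative):

```python
# Two input components: a list of lists, one sublist per data sample.
examples_multi = [
    ["hello", 3],
    ["gradio", 4],
]

# A single input component: a plain list is enough.
examples_single = ["hello", "gradio"]
```

Either structure would then be passed to the `examples=` argument of the `Interface` constructor.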
+
15
+ ### Loading Examples from a Directory
16
+
17
+ You can also specify a path to a directory containing your examples. If your Interface takes only a single file-type input, e.g. an image classifier, you can simply pass a directory filepath to the `examples=` argument, and the `Interface` will load the images in the directory as examples.
18
+ In the case of multiple inputs, this directory must
19
+ contain a `log.csv` file with the example values.
20
+ In the context of the calculator demo, we can set `examples='/demo/calculator/examples'` and in that directory we include the following `log.csv` file:
21
+
22
+ ```csv
23
+ num,operation,num2
24
+ 5,"add",3
25
+ 4,"divide",2
26
+ 5,"multiply",3
27
+ ```
28
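Each row of `log.csv` supplies one example, with columns matching the input components in order. As a quick sanity check, the file can be read back with the standard library (a sketch; Gradio does this parsing for you):

```python
import csv
import io

# The same log.csv contents as above, inlined for illustration.
LOG_CSV = '''num,operation,num2
5,"add",3
4,"divide",2
5,"multiply",3
'''

# Each dict is one example row, e.g. {'num': '5', 'operation': 'add', 'num2': '3'}
rows = list(csv.DictReader(io.StringIO(LOG_CSV)))
```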
+
29
+ This can be helpful when browsing flagged data. Simply point to the flagged directory and the `Interface` will load the examples from the flagged data.
30
+
31
+ ### Providing Partial Examples
32
+
33
+ Sometimes your app has many input components, but you would only like to provide examples for a subset of them. In order to exclude some inputs from the examples, pass `None` for all data samples corresponding to those particular components.
34
+
35
+ ## Caching examples
36
+
37
+ You may wish to provide some cached examples of your model for users to quickly try out, in case your model takes a while to run normally.
38
+ If `cache_examples=True`, your Gradio app will run all of the examples and save the outputs when you call the `launch()` method. This data will be saved in a directory called `gradio_cached_examples` in your working directory by default. You can also set this directory with the `GRADIO_EXAMPLES_CACHE` environment variable, which can be either an absolute path or a relative path to your working directory.
39
+
40
+ Whenever a user clicks on an example, the output will automatically be populated in the app now, using data from this cached directory instead of actually running the function. This is useful so users can quickly try out your model without adding any load!
41
+
42
+ Alternatively, you can set `cache_examples="lazy"`. This means that each particular example will only get cached after it is first used (by any user) in the Gradio app. This is helpful if your prediction function is long-running and you do not want to wait a long time for your Gradio app to start.
43
+
44
+ Keep in mind that once the cache is generated, it will not be updated automatically in future launches. If the examples or function logic change, delete the cache folder to clear the cache and rebuild it with another `launch()`.
2 Building Interfaces/02_flagging.md ADDED
@@ -0,0 +1,46 @@
1
+
2
+ # Flagging
3
+
4
+ You may have noticed the "Flag" button that appears by default in your `Interface`. When a user of your demo sees an input that produces interesting output, such as erroneous or unexpected model behaviour, they can flag the input for you to review. Within the directory provided by the `flagging_dir=` argument to the `Interface` constructor, a CSV file will log the flagged inputs. If the interface involves file data, such as for Image and Audio components, folders will be created to store that flagged data as well.
5
+
6
+ For example, with the calculator interface shown above, we would have the flagged data stored in the flagged directory shown below:
7
+
8
+ ```directory
9
+ +-- calculator.py
10
+ +-- flagged/
11
+ | +-- logs.csv
12
+ ```
13
+
14
+ _flagged/logs.csv_
15
+
16
+ ```csv
17
+ num1,operation,num2,Output
18
+ 5,add,7,12
19
+ 6,subtract,1.5,4.5
20
+ ```
21
+
22
+ With the sepia interface shown earlier, we would have the flagged data stored in the flagged directory shown below:
23
+
24
+ ```directory
25
+ +-- sepia.py
26
+ +-- flagged/
27
+ | +-- logs.csv
28
+ | +-- im/
29
+ | | +-- 0.png
30
+ | | +-- 1.png
31
+ | +-- Output/
32
+ | | +-- 0.png
33
+ | | +-- 1.png
34
+ ```
35
+
36
+ _flagged/logs.csv_
37
+
38
+ ```csv
39
+ im,Output
40
+ im/0.png,Output/0.png
41
+ im/1.png,Output/1.png
42
+ ```
43
+
44
+ If you wish for the user to provide a reason for flagging, you can pass a list of strings to the `flagging_options` argument of Interface. Users will have to select one of the strings when flagging, which will be saved as an additional column to the CSV.
45
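A sketch of what that might look like for the calculator demo (the stub function and option strings here are illustrative, not from the docs):

```python
import gradio as gr

def calculator(num1, operation, num2):
    # Stand-in prediction function for illustration only.
    return num1 + num2 if operation == "add" else num1 - num2

demo = gr.Interface(
    fn=calculator,
    inputs=["number", gr.Radio(["add", "subtract"]), "number"],
    outputs="number",
    # Users must pick one of these when flagging; the choice is
    # saved as an extra column in flagged/logs.csv.
    flagging_options=["incorrect", "offensive", "other"],
)
```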
+
46
+
2 Building Interfaces/03_interface-state.md ADDED
@@ -0,0 +1,33 @@
1
+
2
+ # Interface State
3
+
4
+ So far, we've assumed that your demos are *stateless*: that they do not persist information beyond a single function call. What if you want to modify the behavior of your demo based on previous interactions with the demo? There are two approaches in Gradio: *global state* and *session state*.
5
+
6
+ ## Global State
7
+
8
+ If the state is something that should be accessible to all function calls and all users, you can create a variable outside the function call and access it inside the function. For example, you may load a large model outside the function and use it inside the function so that every function call does not need to reload the model.
9
+
10
+ $code_score_tracker
11
+
12
+ In the code above, the `scores` array is shared between all users. If multiple users are accessing this demo, their scores will all be added to the same list, and the returned top 3 scores will be collected from this shared reference.
13
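The pattern boils down to a module-level variable mutated inside the function; a minimal sketch (ours, not the exact demo code):

```python
scores = []  # module-level: shared across every user and every call

def track_score(score):
    # Mutating the shared list affects all users of the demo.
    scores.append(score)
    # Return the top 3 scores seen so far, across all users.
    return sorted(scores, reverse=True)[:3]
```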
+
14
+ ## Session State
15
+
16
+ Another type of data persistence Gradio supports is session state, where data persists across multiple submits within a page session. However, data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
17
+
18
+ 1. Pass in an extra parameter into your function, which represents the state of the interface.
19
+ 2. At the end of the function, return the updated value of the state as an extra return value.
20
+ 3. Add the `'state'` input and `'state'` output components when creating your `Interface`
21
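In function form, steps 1 and 2 look like this (a sketch with illustrative names):

```python
def store_message(message, history):
    # Step 1: `history` is the extra state parameter (None on first call).
    history = history or []
    history.append(message)
    # Step 2: return the regular output plus the updated state.
    return "\n".join(history), history
```

The function would then be wrapped with `'state'` input and output components (step 3), e.g. `gr.Interface(store_message, ["textbox", "state"], ["textbox", "state"])`.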
+
22
+ Here's a simple app to illustrate session state - this app simply stores users' previous submissions and displays them back to the user:
23
+
24
+
25
+ $code_interface_state
26
+ $demo_interface_state
27
+
28
+
29
+ Notice how the state persists across submits within each page, but if you load this demo in another tab (or refresh the page), the demos will not share submission history. Here, we could not store the submission history in a global variable, otherwise the submission history would get jumbled between different users.
30
+
31
+ The initial value of the `State` is `None` by default. If you pass a parameter to the `value` argument of `gr.State()`, it is used as the default value of the state instead.
32
+
33
+ Note: the `Interface` class only supports a single session state variable (though it can be a list with multiple elements). For more complex use cases, you can use Blocks, [which supports multiple `State` variables](/guides/state-in-blocks/). Alternatively, if you are building a chatbot that maintains user state, consider using the `ChatInterface` abstraction, [which manages state automatically](/guides/creating-a-chatbot-fast).
2 Building Interfaces/04_reactive-interfaces.md ADDED
@@ -0,0 +1,29 @@
1
+
2
+ # Reactive Interfaces
3
+
4
+ Finally, we cover how to get Gradio demos to refresh automatically or continuously stream data.
5
+
6
+ ## Live Interfaces
7
+
8
+ You can make interfaces automatically refresh by setting `live=True` in the interface. Now the interface will recalculate as soon as the user input changes.
9
+
10
+ $code_calculator_live
11
+ $demo_calculator_live
12
+
13
+ Note there is no submit button, because the interface resubmits automatically on change.
14
+
15
+ ## Streaming Components
16
+
17
+ Some components have a "streaming" mode, such as the `Audio` component in microphone mode, or the `Image` component in webcam mode. Streaming means data is sent continuously to the backend and the `Interface` function is continuously being rerun.
18
+
19
+ The difference between `gr.Audio(sources=['microphone'])` and `gr.Audio(sources=['microphone'], streaming=True)`, when both are used in `gr.Interface(live=True)`, is that the first `Component` will automatically submit data and run the `Interface` function when the user stops recording, whereas the second `Component` will continuously send data and run the `Interface` function _during_ recording.
20
+
21
+ Here is example code of streaming images from the webcam.
22
+
23
+ $code_stream_frames
24
+
25
+ Streaming can also be done in an output component. A `gr.Audio(streaming=True)` output component can take a stream of audio chunks yielded piece-wise by a generator function and combine them into a single audio file.
26
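The shape of such a generator, sketched in plain Python (a real app would typically yield NumPy arrays; plain lists are used here to keep the sketch dependency-free):

```python
import math

def stream_tone(seconds=3, sr=8000, freq=440.0):
    # Yield one (sample_rate, samples) chunk per second of audio:
    # a 440 Hz sine tone, sample by sample.
    for s in range(seconds):
        chunk = [math.sin(2 * math.pi * freq * (s * sr + n) / sr)
                 for n in range(sr)]
        yield sr, chunk
```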
+
27
+ $code_stream_audio_out
28
+
29
+ For a more detailed example, see our guide on performing [automatic speech recognition](/guides/real-time-speech-recognition) with Gradio.
2 Building Interfaces/05_four-kinds-of-interfaces.md ADDED
@@ -0,0 +1,45 @@
1
+
2
+ # The 4 Kinds of Gradio Interfaces
3
+
4
+ So far, we've always assumed that in order to build a Gradio demo, you need both inputs and outputs. But this isn't always the case for machine learning demos: for example, _unconditional image generation models_ don't take any input but produce an image as the output.
5
+
6
+ It turns out that the `gradio.Interface` class can actually handle 4 different kinds of demos:
7
+
8
+ 1. **Standard demos**: which have both separate inputs and outputs (e.g. an image classifier or speech-to-text model)
9
+ 2. **Output-only demos**: which don't take any input but produce an output (e.g. an unconditional image generation model)
10
+ 3. **Input-only demos**: which don't produce any output but do take in some sort of input (e.g. a demo that saves images that you upload to a persistent external database)
11
+ 4. **Unified demos**: which have both input and output components, but the input and output components _are the same_. This means that the output produced overrides the input (e.g. a text autocomplete model)
12
+
13
+ Depending on the kind of demo, the user interface (UI) looks slightly different:
14
+
15
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/interfaces4.png)
16
+
17
+ Let's see how to build each kind of demo using the `Interface` class, along with examples:
18
+
19
+ ## Standard demos
20
+
21
+ To create a demo that has both the input and the output components, you simply need to set the values of the `inputs` and `outputs` parameters in `Interface()`. Here's an example demo of a simple image filter:
22
+
23
+ $code_sepia_filter
24
+ $demo_sepia_filter
25
+
26
+ ## Output-only demos
27
+
28
+ What about demos that only contain outputs? In order to build such a demo, you simply set the value of the `inputs` parameter in `Interface()` to `None`. Here's an example demo of a mock image generation model:
29
+
30
+ $code_fake_gan_no_input
31
+ $demo_fake_gan_no_input
32
+
33
+ ## Input-only demos
34
+
35
+ Similarly, to create a demo that only contains inputs, set the value of the `outputs` parameter in `Interface()` to be `None`. Here's an example demo that saves any uploaded image to disk:
36
+
37
+ $code_save_file_no_output
38
+ $demo_save_file_no_output
39
+
40
+ ## Unified demos
41
+
42
+ A unified demo has a single component as both the input and the output. It can be created simply by setting the `inputs` and `outputs` parameters to the same component. Here's an example demo of a text generation model:
43
+
44
+ $code_unified_demo_text_generation
45
+ $demo_unified_demo_text_generation
3 Additional Features/01_queuing.md ADDED
@@ -0,0 +1,17 @@
1
+
2
+ # Queuing
3
+
4
+ Every Gradio app comes with a built-in queuing system that can scale to thousands of concurrent users. You can configure the queue by using the `queue()` method, which is supported by the `gr.Interface`, `gr.Blocks`, and `gr.ChatInterface` classes.
5
+
6
+ For example, you can control the number of requests processed at a single time by setting the `default_concurrency_limit` parameter of `queue()`, e.g.
7
+
8
+ ```python
9
+ demo = gr.Interface(...).queue(default_concurrency_limit=5)
10
+ demo.launch()
11
+ ```
12
+
13
+ This limits the number of requests processed for this event listener at a single time to 5. By default, the `default_concurrency_limit` is actually set to `1`, which means that when many users are using your app, only a single user's request will be processed at a time. This is because many machine learning functions consume a significant amount of memory and so it is only suitable to have a single user using the demo at a time. However, you can change this parameter in your demo easily.
14
+
15
+ See the [docs on queueing](/docs/gradio/interface#interface-queue) for more details on configuring the queuing parameters.
16
+
17
+ You can see analytics on the number and status of all requests processed by the queue by visiting the `/monitoring` endpoint of your app. This endpoint will print a secret URL to your console that links to the full analytics dashboard.
3 Additional Features/02_streaming-outputs.md ADDED
@@ -0,0 +1,21 @@
1
+
2
+ # Streaming outputs
3
+
4
+ In some cases, you may want to stream a sequence of outputs rather than show a single output at once. For example, you might have an image generation model and you want to show the image that is generated at each step, leading up to the final image. Or you might have a chatbot which streams its response one token at a time instead of returning it all at once.
5
+
6
+ In such cases, you can supply a **generator** function into Gradio instead of a regular function. Creating generators in Python is very simple: instead of a single `return` value, a function should `yield` a series of values instead. Usually the `yield` statement is put in some kind of loop. Here's an example of a generator that simply counts up to a given number:
7
+
8
+ ```python
9
+ def my_generator(x):
10
+ for i in range(x):
11
+ yield i
12
+ ```
13
+
14
+ You supply a generator into Gradio the same way as you would a regular function. For example, here's a (fake) image generation model that generates noise for several steps before outputting an image using the `gr.Interface` class:
15
+
16
+ $code_fake_diffusion
17
+ $demo_fake_diffusion
18
+
19
+ Note that we've added a `time.sleep(1)` in the iterator to create an artificial pause between steps so that you are able to observe the steps of the iterator (in a real image generation model, this probably wouldn't be necessary).
20
+
21
+ Similarly, Gradio can handle streaming inputs, e.g. an image generation model that reruns every time a user types a letter in a textbox. This is covered in more details in our guide on building [reactive Interfaces](/guides/reactive-interfaces).
3 Additional Features/03_alerts.md ADDED
@@ -0,0 +1,19 @@
1
+
2
+ # Alerts
3
+
4
+ You may wish to display alerts to the user. To do so, raise a `gr.Error("custom message")` in your function to halt the execution of your function and display an error message to the user.
5
+
6
+ Alternatively, you can issue `gr.Warning("custom message")` or `gr.Info("custom message")` by having them as standalone lines in your function, which will immediately display modals while continuing the execution of your function. The only difference between `gr.Info()` and `gr.Warning()` is the color of the alert.
7
+
8
+ ```python
9
+ def start_process(name):
10
+ gr.Info("Starting process")
11
+ if name is None:
12
+ gr.Warning("Name is empty")
13
+ ...
14
+ if not success:
15
+ raise gr.Error("Process failed")
16
+ ```
17
+
18
+ Tip: Note that `gr.Error()` is an exception that has to be raised, while `gr.Warning()` and `gr.Info()` are functions that are called directly.
19
+
3 Additional Features/04_styling.md ADDED
@@ -0,0 +1,13 @@
1
+
2
+ # Styling
3
+
4
+ Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Interface` constructor. For example:
5
+
6
+ ```python
7
+ demo = gr.Interface(..., theme=gr.themes.Monochrome())
8
+ ```
9
+
10
+ Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [theming guide](https://gradio.app/guides/theming-guide) for more details.
11
+
12
+ For additional styling ability, you can pass any CSS (as well as custom JavaScript) to your Gradio application. This is discussed in more detail in our [custom JS and CSS guide](/guides/custom-CSS-and-JS).
13
+
3 Additional Features/05_progress-bars.md ADDED
@@ -0,0 +1,9 @@
1
+
2
+ # Progress Bars
3
+
4
+ Gradio supports the ability to create custom Progress Bars so that you have control over the progress updates that you show to the user. In order to enable this, simply add an argument to your method that has a default value of a `gr.Progress` instance. Then you can update the progress levels by calling this instance directly with a float between 0 and 1, or using the `tqdm()` method of the `Progress` instance to track progress over an iterable, as shown below.
5
+
6
+ $code_progress_simple
7
+ $demo_progress_simple
8
+
9
+ If you use the `tqdm` library, you can even report progress updates automatically from any `tqdm.tqdm` that already exists within your function by setting the default argument as `gr.Progress(track_tqdm=True)`!
3 Additional Features/06_batch-functions.md ADDED
@@ -0,0 +1,60 @@
1
+
2
+ # Batch functions
3
+
4
+ Gradio supports the ability to pass _batch_ functions. Batch functions are just
5
+ functions which take in a list of inputs and return a list of predictions.
6
+
7
+ For example, here is a batched function that takes in two lists of inputs (a list of
8
+ words and a list of ints), and returns a list of trimmed words as output:
9
+
10
+ ```py
11
+ import time
12
+
13
+ def trim_words(words, lens):
14
+ trimmed_words = []
15
+ time.sleep(5)
16
+ for w, l in zip(words, lens):
17
+ trimmed_words.append(w[:int(l)])
18
+ return [trimmed_words]
19
+ ```
20
+
21
+ The advantage of using batched functions is that if you enable queuing, the Gradio server can automatically _batch_ incoming requests and process them in parallel,
22
+ potentially speeding up your demo. Here's what the Gradio code looks like (notice the `batch=True` and `max_batch_size=16`)
23
+
24
+ With the `gr.Interface` class:
25
+
26
+ ```python
27
+ demo = gr.Interface(
28
+ fn=trim_words,
29
+ inputs=["textbox", "number"],
30
+ outputs=["output"],
31
+ batch=True,
32
+ max_batch_size=16
33
+ )
34
+
35
+ demo.launch()
36
+ ```
37
+
38
+ With the `gr.Blocks` class:
39
+
40
+ ```py
41
+ import gradio as gr
42
+
43
+ with gr.Blocks() as demo:
44
+ with gr.Row():
45
+ word = gr.Textbox(label="word")
46
+ leng = gr.Number(label="leng")
47
+ output = gr.Textbox(label="Output")
48
+ with gr.Row():
49
+ run = gr.Button()
50
+
51
+ event = run.click(trim_words, [word, leng], output, batch=True, max_batch_size=16)
52
+
53
+ demo.launch()
54
+ ```
55
+
56
+ In the example above, 16 requests could be processed in parallel (for a total inference time of 5 seconds), instead of each request being processed separately (for a total
57
+ inference time of 80 seconds). Many Hugging Face `transformers` and `diffusers` models work very naturally with Gradio's batch mode: here's [an example demo using diffusers to
58
+ generate images in batches](https://github.com/gradio-app/gradio/blob/main/demo/diffusers_with_batching/run.py)
59
+
60
+
3 Additional Features/07_resource-cleanup.md ADDED
@@ -0,0 +1,49 @@
1
+
2
+ # Resource Cleanup
3
+
4
+ Your Gradio application may create resources during its lifetime.
5
+ Examples of resources are `gr.State` variables, any variables you create and explicitly hold in memory, or files you save to disk.
6
+ Over time, these resources can use up all of your server's RAM or disk space and crash your application.
7
+
8
+ Gradio provides some tools for you to clean up the resources created by your app:
9
+
10
+ 1. Automatic deletion of `gr.State` variables.
11
+ 2. Automatic cache cleanup with the `delete_cache` parameter.
12
+ 3. The `Blocks.unload` event.
13
+
14
+ Let's take a look at each of them individually.
15
+
16
+ ## Automatic deletion of `gr.State`
17
+
18
+ When a user closes their browser tab, Gradio will automatically delete any `gr.State` variables associated with that user session after 60 minutes. If the user connects again within those 60 minutes, no state will be deleted.
19
+
20
+ You can control the deletion behavior further with the following two parameters of `gr.State`:
21
+
22
+ 1. `delete_callback` - An arbitrary function that will be called when the variable is deleted. This function must take the state value as input. This function is useful for deleting variables from GPU memory.
23
+ 2. `time_to_live` - The number of seconds the state should be stored for after it is created or updated. This will delete variables before the session is closed, so it's useful for clearing state for potentially long running sessions.
24
+
25
+ ## Automatic cache cleanup via `delete_cache`
26
+
27
+ Your Gradio application will save uploaded and generated files to a special directory called the cache directory. Gradio uses a hashing scheme to ensure that duplicate files are not saved to the cache but over time the size of the cache will grow (especially if your app goes viral 😉).
28
+
29
+ Gradio can periodically clean up the cache for you if you specify the `delete_cache` parameter of `gr.Blocks()`, `gr.Interface()`, or `gr.ChatInterface()`.
30
+ This parameter is a tuple of the form `(frequency, age)`, both expressed in seconds.
31
+ Every `frequency` seconds, the temporary files created by this Blocks instance will be deleted if more than `age` seconds have passed since the file was created.
32
+ For example, setting this to `(86400, 86400)` will delete temporary files every day if they are more than a day old.
33
+ Additionally, the cache will be deleted entirely when the server restarts.
34
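For instance, a Blocks app that checks hourly and deletes cached files older than a day might be configured like this (a sketch; the component inside is just a placeholder):

```python
import gradio as gr

# delete_cache=(frequency, age): check every hour (3600 s) and delete
# cached files more than a day (86400 s) old.
with gr.Blocks(delete_cache=(3600, 86400)) as demo:
    gr.Textbox(label="demo")
```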
+
35
+ ## The `unload` event
36
+
37
+ Additionally, Gradio now includes a `Blocks.unload()` event, allowing you to run arbitrary cleanup functions when users disconnect (this does not have a 60 minute delay).
38
+ Unlike other Gradio events, this event does not accept inputs or outputs.
39
+ You can think of the `unload` event as the opposite of the `load` event.
40
+
41
+ ## Putting it all together
42
+
43
+ The following demo uses all of these features. When a user visits the page, a special unique directory is created for that user.
44
+ As the user interacts with the app, images are saved to disk in that special directory.
45
+ When the user closes the page, the images created in that session are deleted via the `unload` event.
46
+ The state and files in the cache are cleaned up automatically as well.
47
+
48
+ $code_state_cleanup
49
+ $demo_state_cleanup
3 Additional Features/08_environment-variables.md ADDED
@@ -0,0 +1,118 @@
1
+
2
+ # Environment Variables
3
+
4
+ Environment variables in Gradio provide a way to customize your applications and launch settings without changing the codebase. In this guide, we'll explore the key environment variables supported in Gradio and how to set them.
5
+
6
+ ## Key Environment Variables
7
+
8
+ ### 1. `GRADIO_SERVER_PORT`
9
+
10
+ - **Description**: Specifies the port on which the Gradio app will run.
11
+ - **Default**: `7860`
12
+ - **Example**:
13
+ ```bash
14
+ export GRADIO_SERVER_PORT=8000
15
+ ```
16
+
17
+ ### 2. `GRADIO_SERVER_NAME`
18
+
19
+ - **Description**: Defines the host name for the Gradio server. To make Gradio accessible from any IP address, set this to `"0.0.0.0"`
20
+ - **Default**: `"127.0.0.1"`
21
+ - **Example**:
22
+ ```bash
23
+ export GRADIO_SERVER_NAME="0.0.0.0"
24
+ ```
25
+
26
+ ### 3. `GRADIO_ANALYTICS_ENABLED`
27
+
28
+ - **Description**: Whether Gradio should collect and report basic usage analytics.
29
+ - **Default**: `"True"`
30
+ - **Options**: `"True"`, `"False"`
31
+ - **Example**:
32
+ ```sh
33
+ export GRADIO_ANALYTICS_ENABLED="True"
34
+ ```
35
+
36
+ ### 4. `GRADIO_DEBUG`
37
+
38
+ - **Description**: Enables or disables debug mode in Gradio. If debug mode is enabled, the main thread does not terminate allowing error messages to be printed in environments such as Google Colab.
39
+ - **Default**: `0`
40
+ - **Example**:
41
+ ```sh
42
+ export GRADIO_DEBUG=1
43
+ ```
44
+
45
+ ### 5. `GRADIO_ALLOW_FLAGGING`
46
+
47
+ - **Description**: Controls whether users can flag inputs/outputs in the Gradio interface. See [the Guide on flagging](/guides/using-flagging) for more details.
48
+ - **Default**: `"manual"`
49
+ - **Options**: `"never"`, `"manual"`, `"auto"`
50
+ - **Example**:
51
+ ```sh
52
+ export GRADIO_ALLOW_FLAGGING="never"
53
+ ```
54
+
55
+ ### 6. `GRADIO_TEMP_DIR`
56
+
57
+ - **Description**: Specifies the directory where temporary files created by Gradio are stored.
58
+ - **Default**: System default temporary directory
59
+ - **Example**:
60
+ ```sh
61
+ export GRADIO_TEMP_DIR="/path/to/temp"
62
+ ```
63
+
64
+ ### 7. `GRADIO_ROOT_PATH`
65
+
66
+ - **Description**: Sets the root path for the Gradio application. Useful if running Gradio [behind a reverse proxy](/guides/running-gradio-on-your-web-server-with-nginx).
67
+ - **Default**: `""`
68
+ - **Example**:
69
+ ```sh
70
+ export GRADIO_ROOT_PATH="/myapp"
71
+ ```
72
+
73
+ ### 8. `GRADIO_SHARE`
74
+
75
+ - **Description**: Enables or disables sharing the Gradio app.
76
+ - **Default**: `"False"`
77
+ - **Options**: `"True"`, `"False"`
78
+ - **Example**:
79
+ ```sh
80
+ export GRADIO_SHARE="True"
81
+ ```
82
+
83
+ ### 9. `GRADIO_ALLOWED_PATHS`
84
+
85
+ - **Description**: Sets a list of complete filepaths or parent directories that gradio is allowed to serve. Must be absolute paths. Warning: if you provide directories, any files in these directories or their subdirectories are accessible to all users of your app. Multiple items can be specified by separating items with commas.
86
+ - **Default**: `""`
87
+ - **Example**:
88
+ ```sh
89
+ export GRADIO_ALLOWED_PATHS="/mnt/sda1,/mnt/sda2"
90
+ ```
91
+
92
+ ### 10. `GRADIO_BLOCKED_PATHS`
93
+
94
+ - **Description**: Sets a list of complete filepaths or parent directories that gradio is not allowed to serve (i.e. users of your app are not allowed to access). Must be absolute paths. Warning: takes precedence over `allowed_paths` and all other directories exposed by Gradio by default. Multiple items can be specified by separating items with commas.
95
+ - **Default**: `""`
96
+ - **Example**:
97
+ ```sh
98
+ export GRADIO_BLOCKED_PATHS="/users/x/gradio_app/admin,/users/x/gradio_app/keys"
99
+ ```
100
+
101
+
102
+ ## How to Set Environment Variables
103
+
104
+ To set environment variables in your terminal, use the `export` command followed by the variable name and its value. For example:
105
+
106
+ ```sh
107
+ export GRADIO_SERVER_PORT=8000
108
+ ```
109
+
110
+ If you're using a `.env` file to manage your environment variables, you can add them like this:
111
+
112
+ ```sh
113
+ GRADIO_SERVER_PORT=8000
114
+ GRADIO_SERVER_NAME="localhost"
115
+ ```
116
+
117
+ Then, use a tool like `dotenv` to load these variables when running your application.
118
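Gradio reads these variables from the process environment at launch; your own code can follow the same pattern via `os.environ` (a sketch; the helper name is ours):

```python
import os

def get_server_port(default=7860):
    # Mirrors how an app might honor GRADIO_SERVER_PORT, falling back
    # to Gradio's default port when the variable is unset.
    return int(os.environ.get("GRADIO_SERVER_PORT", default))
```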
+
3 Additional Features/09_sharing-your-app.md ADDED
@@ -0,0 +1,482 @@
1
+
2
+ # Sharing Your App
3
+
4
+ In this Guide, we dive more deeply into the various aspects of sharing a Gradio app with others. We will cover:
5
+
6
+ 1. [Sharing demos with the share parameter](#sharing-demos)
7
+ 2. [Hosting on HF Spaces](#hosting-on-hf-spaces)
8
+ 3. [Embedding hosted spaces](#embedding-hosted-spaces)
9
+ 4. [Using the API page](#api-page)
10
+ 5. [Accessing network requests](#accessing-the-network-request-directly)
11
+ 6. [Mounting within FastAPI](#mounting-within-another-fast-api-app)
12
+ 7. [Authentication](#authentication)
13
+ 8. [Security and file access](#security-and-file-access)
14
+ 9. [Analytics](#analytics)
15
+
16
+ ## Sharing Demos
17
+
18
+ Gradio demos can be easily shared publicly by setting `share=True` in the `launch()` method. Like this:
19
+
20
+ ```python
21
+ import gradio as gr
22
+
23
+ def greet(name):
24
+ return "Hello " + name + "!"
25
+
26
+ demo = gr.Interface(fn=greet, inputs="textbox", outputs="textbox")
27
+
28
+ demo.launch(share=True) # Share your demo with just 1 extra parameter 🚀
29
+ ```
30
+
31
+ This generates a public, shareable link that you can send to anybody! When you send this link, the user on the other side can try out the model in their browser. Because the processing happens on your device (as long as your device stays on), you don't have to worry about packaging any dependencies.
32
+
33
+ ![sharing](https://github.com/gradio-app/gradio/blob/main/guides/assets/sharing.svg?raw=true)
34
+
35
+
36
+ A share link usually looks something like this: **https://07ff8706ab.gradio.live**. Although the link is served through the Gradio Share Servers, these servers are only a proxy for your local server, and do not store any data sent through your app. Share links expire after 72 hours. (It is [also possible to set up your own Share Server](https://github.com/huggingface/frp/) on your own cloud server to overcome this restriction.)
37
+
38
+ Tip: Keep in mind that share links are publicly accessible, meaning that anyone can use your model for prediction! Therefore, make sure not to expose any sensitive information through the functions you write, or allow any critical changes to occur on your device. Or you can [add authentication to your Gradio app](#authentication) as discussed below.
39
+
40
+ Note that by default, `share=False`, which means that your server is only running locally. (This is the default, except in Google Colab notebooks, where share links are automatically created). As an alternative to using share links, you can use [SSH port-forwarding](https://www.ssh.com/ssh/tunneling/example) to share your local server with specific users.
41
+
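For example, a trusted user could tunnel your app's default port (7860) to their own machine with a command along these lines (the username and hostname are placeholders):

```sh
# Run on the recipient's machine; the app then becomes available
# to them at http://localhost:7860
ssh -L 7860:localhost:7860 your-username@your-server-hostname
```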
42
+
43
+ ## Hosting on HF Spaces
44
+
45
+ If you'd like to have a permanent link to your Gradio demo on the internet, use Hugging Face Spaces. [Hugging Face Spaces](http://huggingface.co/spaces/) provides the infrastructure to permanently host your machine learning model for free!
46
+
47
+ After you have [created a free Hugging Face account](https://huggingface.co/join), you have two methods to deploy your Gradio app to Hugging Face Spaces:
48
+
49
+ 1. From terminal: run `gradio deploy` in your app directory. The CLI will gather some basic metadata and then launch your app. To update your Space, you can re-run this command or enable the GitHub Actions option to automatically update the Space on `git push`.
50
+
51
+ 2. From your browser: Drag and drop a folder containing your Gradio model and all related files [here](https://huggingface.co/new-space). See [this guide on how to host on Hugging Face Spaces](https://huggingface.co/blog/gradio-spaces) for more information, or watch the embedded video:
52
+
53
+ <video autoplay muted loop>
54
+ <source src="https://github.com/gradio-app/gradio/blob/main/guides/assets/hf_demo.mp4?raw=true" type="video/mp4" />
55
+ </video>
56
+
57
+
58
+ ## Embedding Hosted Spaces
59
+
60
+ Once you have hosted your app on Hugging Face Spaces (or on your own server), you may want to embed the demo on a different website, such as your blog or your portfolio. Embedding an interactive demo allows people to try out the machine learning model that you have built, without needing to download or install anything — right in their browser! The best part is that you can embed interactive demos even in static websites, such as GitHub pages.
61
+
62
+ There are two ways to embed your Gradio demos. You can find quick links to both options directly on the Hugging Face Space page, in the "Embed this Space" dropdown option:
63
+
64
+ ![Embed this Space dropdown option](https://github.com/gradio-app/gradio/blob/main/guides/assets/embed_this_space.png?raw=true)
65
+
66
+ ### Embedding with Web Components
67
+
68
+ Web components typically offer a better experience to users than IFrames. Web components load lazily, meaning that they won't slow down the loading time of your website, and they automatically adjust their height based on the size of the Gradio app.
69
+
70
+ To embed with Web Components:
71
+
72
+ 1. Import the gradio JS library by adding the script below to your site (replace {GRADIO_VERSION} in the URL with the library version of Gradio you are using).
73
+
74
+ ```html
75
+ <script
76
+ type="module"
77
+ src="https://gradio.s3-us-west-2.amazonaws.com/{GRADIO_VERSION}/gradio.js"
78
+ ></script>
79
+ ```
80
+
81
+ 2. Add
82
+
83
+ ```html
84
+ <gradio-app src="https://$your_space_host.hf.space"></gradio-app>
85
+ ```
86
+
87
+ element where you want to place the app. Set the `src=` attribute to your Space's embed URL, which you can find in the "Embed this Space" button. For example:
88
+
89
+ ```html
90
+ <gradio-app
91
+ src="https://abidlabs-pytorch-image-classifier.hf.space"
92
+ ></gradio-app>
93
+ ```
94
+
95
+ <script>
96
+ fetch("https://pypi.org/pypi/gradio/json"
97
+ ).then(r => r.json()
98
+ ).then(obj => {
99
+ let v = obj.info.version;
100
+ content = document.querySelector('.prose');
101
+ content.innerHTML = content.innerHTML.replaceAll("{GRADIO_VERSION}", v);
102
+ });
103
+ </script>
104
+
105
+ You can see examples of how web components look <a href="https://www.gradio.app">on the Gradio landing page</a>.
106
+
107
+ You can also customize the appearance and behavior of your web component with attributes that you pass into the `<gradio-app>` tag:
108
+
109
+ - `src`: as we've seen, the `src` attribute links to the URL of the hosted Gradio demo that you would like to embed
110
+ - `space`: an optional shorthand if your Gradio demo is hosted on Hugging Face Spaces. Accepts a `username/space_name` instead of a full URL. Example: `gradio/Echocardiogram-Segmentation`. If this attribute is provided, then `src` does not need to be provided.
111
+ - `control_page_title`: a boolean designating whether the HTML title of the page should be set to the title of the Gradio app (by default `"false"`)
112
+ - `initial_height`: the initial height of the web component while it is loading the Gradio app (by default `"300px"`). Note that the final height is set based on the size of the Gradio app.
113
+ - `container`: whether to show the border frame and information about where the Space is hosted (by default `"true"`)
114
+ - `info`: whether to show just the information about where the Space is hosted underneath the embedded app (by default `"true"`)
115
+ - `autoscroll`: whether to autoscroll to the output when prediction has finished (by default `"false"`)
116
+ - `eager`: whether to load the Gradio app as soon as the page loads (by default `"false"`)
117
+ - `theme_mode`: whether to use the `dark`, `light`, or default `system` theme mode (by default `"system"`)
118
+ - `render`: an event that is triggered once the embedded space has finished rendering.
119
+
120
+ Here's an example of how to use these attributes to create a Gradio app that does not lazy load and has an initial height of 0px.
121
+
122
+ ```html
123
+ <gradio-app
124
+ space="gradio/Echocardiogram-Segmentation"
125
+ eager="true"
126
+ initial_height="0px"
127
+ ></gradio-app>
128
+ ```
129
+
130
+ Here's another example of how to use the `render` event. An event listener is used to capture the `render` event and will call the `handleLoadComplete()` function once rendering is complete.
131
+
132
+ ```html
133
+ <script>
134
+ function handleLoadComplete() {
135
+ console.log("Embedded space has finished rendering");
136
+ }
137
+
138
+ const gradioApp = document.querySelector("gradio-app");
139
+ gradioApp.addEventListener("render", handleLoadComplete);
140
+ </script>
141
+ ```
142
+
143
+ _Note: While Gradio's CSS will never impact the embedding page, the embedding page can affect the style of the embedded Gradio app. Make sure that any CSS in the parent page isn't so general that it could also apply to the embedded Gradio app and cause the styling to break. Element selectors such as `header { ... }` and `footer { ... }` will be the most likely to cause issues._
144
+
145
+ ### Embedding with IFrames
146
+
147
+ To embed with IFrames instead (if you cannot add javascript to your website, for example), add this element:
148
+
149
+ ```html
150
+ <iframe src="https://$your_space_host.hf.space"></iframe>
151
+ ```
152
+
153
+ Again, set the `src=` attribute to your Space's embed URL, which you can find in the "Embed this Space" button.
154
+
155
+ Note: if you use IFrames, you'll probably want to add a fixed `height` attribute and set `style="border:0;"` to remove the border. In addition, if your app requires permissions such as access to the webcam or the microphone, you'll need to provide that as well using the `allow` attribute.
156
+
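Putting those pieces together, a typical IFrame embed might look like this (the Space URL is a placeholder):

```html
<iframe
    src="https://your-username-your-space.hf.space"
    width="100%"
    height="600"
    style="border:0;"
    allow="camera; microphone"
></iframe>
```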
157
+ ## API Page
158
+
159
+ You can use almost any Gradio app as an API! In the footer of a Gradio app [like this one](https://huggingface.co/spaces/gradio/hello_world), you'll see a "Use via API" link.
160
+
161
+ ![Use via API](https://github.com/gradio-app/gradio/blob/main/guides/assets/use_via_api.png?raw=true)
162
+
163
+ This is a page that lists the endpoints that can be used to query the Gradio app, via our supported clients: either [the Python client](https://gradio.app/guides/getting-started-with-the-python-client/), or [the JavaScript client](https://gradio.app/guides/getting-started-with-the-js-client/). For each endpoint, Gradio automatically generates the parameters and their types, as well as example inputs, like this.
164
+
165
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)
166
+
167
+ The endpoints are automatically created when you launch a Gradio `Interface`. If you are using Gradio `Blocks`, you can also set up a Gradio API page, though we recommend that you explicitly name each event listener, such as
168
+
169
+ ```python
170
+ btn.click(add, [num1, num2], output, api_name="addition")
171
+ ```
172
+
173
+ This will add and document the endpoint `/api/addition/` to the automatically generated API page. Otherwise, your API endpoints will appear as "unnamed" endpoints.
174
+
175
+ ## Accessing the Network Request Directly
176
+
177
+ When a user makes a prediction to your app, you may need the underlying network request, in order to get the request headers (e.g. for advanced authentication), log the client's IP address, get the query parameters, or for other reasons. Gradio supports this in a similar manner to FastAPI: simply add a function parameter whose type hint is `gr.Request` and Gradio will pass in the network request as that parameter. Here is an example:
178
+
179
+ ```python
180
+ import gradio as gr
181
+
182
+ def echo(text, request: gr.Request):
183
+ if request:
184
+ print("Request headers dictionary:", request.headers)
185
+ print("IP address:", request.client.host)
186
+ print("Query parameters:", dict(request.query_params))
187
+ return text
188
+
189
+ io = gr.Interface(echo, "textbox", "textbox").launch()
190
+ ```
191
+
192
+ Note: if your function is called directly instead of through the UI (this happens, for
193
+ example, when examples are cached, or when the Gradio app is called via API), then `request` will be `None`.
194
+ You should handle this case explicitly to ensure that your app does not throw any errors. That is why
195
+ we have the explicit check `if request`.
196
+
197
+ ## Mounting Within Another FastAPI App
198
+
199
+ In some cases, you might have an existing FastAPI app, and you'd like to add a path for a Gradio demo.
200
+ You can easily do this with `gradio.mount_gradio_app()`.
201
+
202
+ Here's a complete example:
203
+
204
+ $code_custom_path
205
+
206
+ Note that this approach also allows you to run your Gradio apps on custom paths (`http://localhost:8000/gradio` in the example above).
207
+
208
+
209
+ ## Authentication
210
+
211
+ ### Password-protected app
212
+
213
+ You may wish to put an authentication page in front of your app to limit who can open your app. With the `auth=` keyword argument in the `launch()` method, you can provide a tuple with a username and password, or a list of acceptable username/password tuples. Here's an example that provides password-based authentication for a single user named "admin":
214
+
215
+ ```python
216
+ demo.launch(auth=("admin", "pass1234"))
217
+ ```
218
+
219
+ For more complex authentication handling, you can even pass a function that takes a username and password as arguments, and returns `True` to allow access, `False` otherwise.
220
+
221
+ Here's an example of a function that accepts any login where the username and password are the same:
222
+
223
+ ```python
224
+ def same_auth(username, password):
225
+ return username == password
226
+ demo.launch(auth=same_auth)
227
+ ```
228
+
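For something slightly more realistic, the auth function can look credentials up in a table and compare them with `hmac.compare_digest`, which avoids leaking information through timing differences. The user table below is purely illustrative:

```python
import hmac

# Hypothetical credential table; in practice, load this from a secrets store.
USERS = {"admin": "pass1234", "alice": "wonderland"}

def table_auth(username, password):
    expected = USERS.get(username)
    if expected is None:
        return False
    # Constant-time comparison of the supplied and expected passwords
    return hmac.compare_digest(password, expected)

# demo.launch(auth=table_auth)
```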
229
+ If you have multiple users, you may wish to customize the content that is shown depending on the user that is logged in. You can retrieve the logged in user by [accessing the network request directly](#accessing-the-network-request-directly) as discussed above, and then reading the `.username` attribute of the request. Here's an example:
230
+
231
+
232
+ ```python
233
+ import gradio as gr
234
+
235
+ def update_message(request: gr.Request):
236
+ return f"Welcome, {request.username}"
237
+
238
+ with gr.Blocks() as demo:
239
+ m = gr.Markdown()
240
+ demo.load(update_message, None, m)
241
+
242
+ demo.launch(auth=[("Abubakar", "Abubakar"), ("Ali", "Ali")])
243
+ ```
244
+
245
+ Note: For authentication to work properly, third party cookies must be enabled in your browser. This is not the case by default for Safari or for Chrome Incognito Mode.
246
+
247
+ If users visit the `/logout` page of your Gradio app, they will automatically be logged out and session cookies deleted. This allows you to add logout functionality to your Gradio app as well. Let's update the previous example to include a log out button:
248
+
249
+ ```python
250
+ import gradio as gr
251
+
252
+ def update_message(request: gr.Request):
253
+ return f"Welcome, {request.username}"
254
+
255
+ with gr.Blocks() as demo:
256
+ m = gr.Markdown()
257
+ logout_button = gr.Button("Logout", link="/logout")
258
+ demo.load(update_message, None, m)
259
+
260
+ demo.launch(auth=[("Pete", "Pete"), ("Dawood", "Dawood")])
261
+ ```
262
+
263
+ Note: Gradio's built-in authentication provides a straightforward and basic layer of access control but does not offer robust security features for applications that require stringent access controls (e.g. multi-factor authentication, rate limiting, or automatic lockout policies).
264
+
265
+ ### OAuth (Login via Hugging Face)
266
+
267
+ Gradio natively supports OAuth login via Hugging Face. In other words, you can easily add a _"Sign in with Hugging Face"_ button to your demo, which allows you to get a user's HF username as well as other information from their HF profile. Check out [this Space](https://huggingface.co/spaces/Wauplin/gradio-oauth-demo) for a live demo.
268
+
269
+ To enable OAuth, you must set `hf_oauth: true` as a Space metadata in your README.md file. This will register your Space
270
+ as an OAuth application on Hugging Face. Next, you can use `gr.LoginButton` to add a login button to
271
+ your Gradio app. Once a user is logged in with their HF account, you can retrieve their profile by adding a parameter of type
272
+ `gr.OAuthProfile` to any Gradio function. The user profile will be automatically injected as a parameter value. If you want
273
+ to perform actions on behalf of the user (e.g. list user's private repos, create repo, etc.), you can retrieve the user
274
+ token by adding a parameter of type `gr.OAuthToken`. You must define which scopes you will use in your Space metadata
275
+ (see [documentation](https://huggingface.co/docs/hub/spaces-oauth#scopes) for more details).
276
+
277
+ Here is a short example:
278
+
279
+ ```py
280
+ import gradio as gr
281
+ from huggingface_hub import whoami
282
+
283
+ def hello(profile: gr.OAuthProfile | None) -> str:
284
+ if profile is None:
285
+ return "I don't know you."
286
+ return f"Hello {profile.name}"
287
+
288
+ def list_organizations(oauth_token: gr.OAuthToken | None) -> str:
289
+ if oauth_token is None:
290
+ return "Please log in to list organizations."
291
+ org_names = [org["name"] for org in whoami(oauth_token.token)["orgs"]]
292
+ return f"You belong to {', '.join(org_names)}."
293
+
294
+ with gr.Blocks() as demo:
295
+ gr.LoginButton()
296
+ m1 = gr.Markdown()
297
+ m2 = gr.Markdown()
298
+ demo.load(hello, inputs=None, outputs=m1)
299
+ demo.load(list_organizations, inputs=None, outputs=m2)
300
+
301
+ demo.launch()
302
+ ```
303
+
304
+ When the user clicks on the login button, they get redirected to a new page to authorize your Space.
305
+
306
+ <center>
307
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/oauth_sign_in.png" style="width:300px; max-width:80%">
308
+ </center>
309
+
310
+ Users can revoke access to their profile at any time in their [settings](https://huggingface.co/settings/connected-applications).
311
+
312
+ As seen above, OAuth features are available only when your app runs in a Space. However, you often need to test your app
313
+ locally before deploying it. To test OAuth features locally, your machine must be logged in to Hugging Face. Please run `huggingface-cli login` or set `HF_TOKEN` as an environment variable with one of your access tokens. You can generate a new token in your [settings page](https://huggingface.co/settings/tokens). Then, clicking on the `gr.LoginButton` will log you in with your local Hugging Face profile, allowing you to debug your app with your Hugging Face account before deploying it to a Space.
314
+
315
+
316
+ ### OAuth (with external providers)
317
+
318
+ It is also possible to authenticate with external OAuth providers (e.g. Google OAuth) in your Gradio apps. To do this, first mount your Gradio app within a FastAPI app ([as discussed above](#mounting-within-another-fast-api-app)). Then, you must write an *authentication function*, which gets the user's username from the OAuth provider and returns it. This function should be passed to the `auth_dependency` parameter in `gr.mount_gradio_app`.
319
+
320
+ Similar to [FastAPI dependency functions](https://fastapi.tiangolo.com/tutorial/dependencies/), the function specified by `auth_dependency` will run before any Gradio-related route in your FastAPI app. The function should accept a single parameter, the FastAPI `Request`, and return either a string (representing a user's username) or `None`. If a string is returned, the user will be able to access the Gradio-related routes in your FastAPI app.
321
+
322
+ First, let's show a simplistic example to illustrate the `auth_dependency` parameter:
323
+
324
+ ```python
325
+ from fastapi import FastAPI, Request
326
+ import gradio as gr
+ import uvicorn
327
+
328
+ app = FastAPI()
329
+
330
+ def get_user(request: Request):
331
+ return request.headers.get("user")
332
+
333
+ demo = gr.Interface(lambda s: f"Hello {s}!", "textbox", "textbox")
334
+
335
+ app = gr.mount_gradio_app(app, demo, path="/demo", auth_dependency=get_user)
336
+
337
+ if __name__ == '__main__':
338
+ uvicorn.run(app)
339
+ ```
340
+
341
+ In this example, only requests that include a "user" header will be allowed to access the Gradio app. Of course, this does not add much security, since any user can add this header in their request.
342
+
343
+ Here's a more complete example showing how to add Google OAuth to a Gradio app (assuming you've already created OAuth Credentials on the [Google Developer Console](https://console.cloud.google.com/project)):
344
+
345
+ ```python
346
+ import os
347
+ from authlib.integrations.starlette_client import OAuth, OAuthError
348
+ from fastapi import FastAPI, Depends, Request
349
+ from starlette.config import Config
350
+ from starlette.responses import RedirectResponse
351
+ from starlette.middleware.sessions import SessionMiddleware
352
+ import uvicorn
353
+ import gradio as gr
354
+
355
+ app = FastAPI()
356
+
357
+ # Replace these with your own OAuth settings
358
+ GOOGLE_CLIENT_ID = "..."
359
+ GOOGLE_CLIENT_SECRET = "..."
360
+ SECRET_KEY = "..."
361
+
362
+ config_data = {'GOOGLE_CLIENT_ID': GOOGLE_CLIENT_ID, 'GOOGLE_CLIENT_SECRET': GOOGLE_CLIENT_SECRET}
363
+ starlette_config = Config(environ=config_data)
364
+ oauth = OAuth(starlette_config)
365
+ oauth.register(
366
+ name='google',
367
+ server_metadata_url='https://accounts.google.com/.well-known/openid-configuration',
368
+ client_kwargs={'scope': 'openid email profile'},
369
+ )
370
+
371
+ SECRET_KEY = os.environ.get('SECRET_KEY') or "a_very_secret_key"
372
+ app.add_middleware(SessionMiddleware, secret_key=SECRET_KEY)
373
+
374
+ # Dependency to get the current user
375
+ def get_user(request: Request):
376
+ user = request.session.get('user')
377
+ if user:
378
+ return user['name']
379
+ return None
380
+
381
+ @app.get('/')
382
+ def public(user: dict = Depends(get_user)):
383
+ if user:
384
+ return RedirectResponse(url='/gradio')
385
+ else:
386
+ return RedirectResponse(url='/login-demo')
387
+
388
+ @app.route('/logout')
389
+ async def logout(request: Request):
390
+ request.session.pop('user', None)
391
+ return RedirectResponse(url='/')
392
+
393
+ @app.route('/login')
394
+ async def login(request: Request):
395
+ redirect_uri = request.url_for('auth')
396
+ # If your app is running on https, you should ensure that the
397
+ # `redirect_uri` is https, e.g. uncomment the following lines:
398
+ #
399
+ # from urllib.parse import urlparse, urlunparse
400
+ # redirect_uri = urlunparse(urlparse(str(redirect_uri))._replace(scheme='https'))
401
+ return await oauth.google.authorize_redirect(request, redirect_uri)
402
+
403
+ @app.route('/auth')
404
+ async def auth(request: Request):
405
+ try:
406
+ access_token = await oauth.google.authorize_access_token(request)
407
+ except OAuthError:
408
+ return RedirectResponse(url='/')
409
+ request.session['user'] = dict(access_token)["userinfo"]
410
+ return RedirectResponse(url='/')
411
+
412
+ with gr.Blocks() as login_demo:
413
+ gr.Button("Login", link="/login")
414
+
415
+ app = gr.mount_gradio_app(app, login_demo, path="/login-demo")
416
+
417
+ def greet(request: gr.Request):
418
+ return f"Welcome to Gradio, {request.username}"
419
+
420
+ with gr.Blocks() as main_demo:
421
+ m = gr.Markdown("Welcome to Gradio!")
422
+ gr.Button("Logout", link="/logout")
423
+ main_demo.load(greet, None, m)
424
+
425
+ app = gr.mount_gradio_app(app, main_demo, path="/gradio", auth_dependency=get_user)
426
+
427
+ if __name__ == '__main__':
428
+ uvicorn.run(app)
429
+ ```
430
+
431
+ There are actually two separate Gradio apps in this example! One simply displays a login button (this demo is accessible to any user), while the main demo is only accessible to users that are logged in. You can try this example out on [this Space](https://huggingface.co/spaces/gradio/oauth-example).
432
+
433
+
434
+
435
+ ## Security and File Access
436
+
437
+ Sharing your Gradio app with others (by hosting it on Spaces, on your own server, or through temporary share links) **exposes** certain files on the host machine to users of your Gradio app.
438
+
439
+ In particular, Gradio apps ALLOW users to access four kinds of files:
440
+
441
+ - **Temporary files created by Gradio.** These are files that are created by Gradio as part of running your prediction function. For example, if your prediction function returns a video file, then Gradio will save that video to a temporary cache on your device and then send the path to the file to the front end. You can customize the location of temporary cache files created by Gradio by setting the environment variable `GRADIO_TEMP_DIR` to an absolute path, such as `/home/usr/scripts/project/temp/`. You can delete the files created by your app when it shuts down with the `delete_cache` parameter of `gradio.Blocks`, `gradio.Interface`, and `gradio.ChatInterface`. This parameter is a tuple of integers of the form `[frequency, age]` where `frequency` is how often to delete files and `age` is the time in seconds since the file was last modified.
442
+
443
+
444
+ - **Cached examples created by Gradio.** These are files that are created by Gradio as part of caching examples for faster runtimes, if you set `cache_examples=True` or `cache_examples="lazy"` in `gr.Interface()`, `gr.ChatInterface()` or in `gr.Examples()`. By default, these files are saved in the `gradio_cached_examples/` subdirectory within your app's working directory. You can customize the location of cached example files created by Gradio by setting the environment variable `GRADIO_EXAMPLES_CACHE` to an absolute path or a path relative to your working directory.
445
+
446
+ - **Files that you explicitly allow via the `allowed_paths` parameter in `launch()`**. This parameter allows you to pass in a list of additional directories or exact filepaths you'd like to allow users to have access to. (By default, this parameter is an empty list).
447
+
448
+ - **Static files that you explicitly set via the `gr.set_static_paths` function**. This function allows you to pass in a list of directories or filenames that will be considered static. This means that they will not be copied to the cache and will be served directly from your computer. This can help save disk space and reduce the time your app takes to launch, but be mindful of possible security implications.
449
+
450
+ Gradio DOES NOT ALLOW access to:
451
+
452
+ - **Files that you explicitly block via the `blocked_paths` parameter in `launch()`**. You can pass in a list of additional directories or exact filepaths to the `blocked_paths` parameter in `launch()`. This parameter takes precedence over the files that Gradio exposes by default or by the `allowed_paths`.
453
+
454
+ - **Any other paths on the host machine**. Users should NOT be able to access other arbitrary paths on the host.
455
+
456
+ Sharing your Gradio application will also allow users to upload files to your computer or server. You can set a maximum file size for uploads to prevent abuse and to preserve disk space. You can do this with the `max_file_size` parameter of `.launch`. For example, the following code snippet shows two ways to limit file uploads to 5 megabytes per file.
457
+
458
+ ```python
459
+ import gradio as gr
460
+
461
+ demo = gr.Interface(lambda x: x, "image", "image")
462
+
463
+ demo.launch(max_file_size="5mb")
464
+ # or
465
+ demo.launch(max_file_size=5 * gr.FileSize.MB)
466
+ ```
467
+
468
+ Please make sure you are running the latest version of `gradio` for these security settings to apply.
469
+
470
+ ## Analytics
471
+
472
+ By default, Gradio collects certain analytics to help us better understand the usage of the `gradio` library. This includes the following information:
473
+
474
+ * What environment the Gradio app is running on (e.g. Colab Notebook, Hugging Face Spaces)
475
+ * What input/output components are being used in the Gradio app
476
+ * Whether the Gradio app is utilizing certain advanced features, such as `auth` or `show_error`
477
+ * The IP address which is used solely to measure the number of unique developers using Gradio
478
+ * The version of Gradio that is running
479
+
480
+ No information is collected from _users_ of your Gradio app. If you'd like to disable analytics altogether, you can do so by setting the `analytics_enabled` parameter to `False` in `gr.Blocks`, `gr.Interface`, or `gr.ChatInterface`. Or, you can set the `GRADIO_ANALYTICS_ENABLED` environment variable to `"False"` to apply this to all Gradio apps created across your system.
481
+
482
+ *Note*: this reflects the analytics policy as of `gradio>=4.32.0`.
4 Building with Blocks/01_blocks-and-event-listeners.md ADDED
@@ -0,0 +1,244 @@
1
+
2
+ # Blocks and Event Listeners
3
+
4
+ We briefly described the Blocks class in the [Quickstart](/main/guides/quickstart#custom-demos-with-gr-blocks) as a way to build custom demos. Let's dive deeper.
5
+
6
+
7
+ ## Blocks Structure
8
+
9
+ Take a look at the demo below.
10
+
11
+ $code_hello_blocks
12
+ $demo_hello_blocks
13
+
14
+ - First, note the `with gr.Blocks() as demo:` clause. The Blocks app code will be contained within this clause.
15
+ - Next come the Components. These are the same Components used in `Interface`. However, instead of being passed to some constructor, Components are automatically added to the Blocks as they are created within the `with` clause.
16
+ - Finally, the `click()` event listener. Event listeners define the data flow within the app. In the example above, the listener ties the two Textboxes together. The Textbox `name` acts as the input and Textbox `output` acts as the output to the `greet` method. This dataflow is triggered when the Button `greet_btn` is clicked. Like an Interface, an event listener can take multiple inputs or outputs.
17
+
18
+ You can also attach event listeners using decorators - skip the `fn` argument and assign `inputs` and `outputs` directly:
19
+
20
+ $code_hello_blocks_decorator
21
+
22
+ ## Event Listeners and Interactivity
23
+
24
+ In the example above, you'll notice that you are able to edit Textbox `name`, but not Textbox `output`. This is because any Component that acts as an input to an event listener is made interactive. However, since Textbox `output` acts only as an output, Gradio determines that it should not be made interactive. You can override the default behavior and directly configure the interactivity of a Component with the boolean `interactive` keyword argument.
25
+
26
+ ```python
27
+ output = gr.Textbox(label="Output", interactive=True)
28
+ ```
29
+
30
+ _Note_: What happens if a Gradio component is neither an input nor an output? If a component is constructed with a default value, then it is presumed to be displaying content and is rendered non-interactive. Otherwise, it is rendered interactive. Again, this behavior can be overridden by specifying a value for the `interactive` argument.
31
+
32
+ ## Types of Event Listeners
33
+
34
+ Take a look at the demo below:
35
+
36
+ $code_blocks_hello
37
+ $demo_blocks_hello
38
+
39
+ Instead of being triggered by a click, the `welcome` function is triggered by typing in the Textbox `inp`. This is due to the `change()` event listener. Different Components support different event listeners. For example, the `Video` Component supports a `play()` event listener, triggered when a user presses play. See the [Docs](http://gradio.app/docs#components) for the event listeners for each Component.
40
+
41
+ ## Multiple Data Flows
42
+
43
+ A Blocks app is not limited to a single data flow the way Interfaces are. Take a look at the demo below:
44
+
45
+ $code_reversible_flow
46
+ $demo_reversible_flow
47
+
48
+ Note that `num1` can act as input to `num2`, and also vice-versa! As your apps get more complex, you will have many data flows connecting various Components.
49
+
50
+ Here's an example of a "multi-step" demo, where the output of one model (a speech-to-text model) gets fed into the next model (a sentiment classifier).
51
+
52
+ $code_blocks_speech_text_sentiment
53
+ $demo_blocks_speech_text_sentiment
54
+
55
+ ## Function Input List vs Dict
56
+
57
+ The event listeners you've seen so far have a single input component. If you'd like to have multiple input components pass data to the function, you have two options on how the function can accept input component values:
58
+
59
+ 1. as a list of arguments, or
60
+ 2. as a single dictionary of values, keyed by the component
61
+
62
+ Let's see an example of each:
63
+ $code_calculator_list_and_dict
64
+
65
+ Both `add()` and `sub()` take `a` and `b` as inputs. However, the syntax is different between these listeners.
66
+
67
+ 1. To the `add_btn` listener, we pass the inputs as a list. The function `add()` takes each of these inputs as arguments. The value of `a` maps to the argument `num1`, and the value of `b` maps to the argument `num2`.
68
+ 2. To the `sub_btn` listener, we pass the inputs as a set (note the curly brackets!). The function `sub()` takes a single dictionary argument `data`, where the keys are the input components, and the values are the values of those components.
69
+
70
+ Which syntax you use is a matter of preference! For functions with many input components, option 2 may be easier to manage.
71
+
72
+ $demo_calculator_list_and_dict
73
+
74
+ ## Function Return List vs Dict
75
+
76
+ Similarly, you may return values for multiple output components either as:
77
+
78
+ 1. a list of values, or
79
+ 2. a dictionary keyed by the component
80
+
81
+ Let's first see an example of (1), where we set the values of two output components by returning two values:
82
+
83
+ ```python
84
+ with gr.Blocks() as demo:
85
+ food_box = gr.Number(value=10, label="Food Count")
86
+ status_box = gr.Textbox()
87
+ def eat(food):
88
+ if food > 0:
89
+ return food - 1, "full"
90
+ else:
91
+ return 0, "hungry"
92
+ gr.Button("EAT").click(
93
+ fn=eat,
94
+ inputs=food_box,
95
+ outputs=[food_box, status_box]
96
+ )
97
+ ```
98
+
99
+ Above, each return statement returns two values corresponding to `food_box` and `status_box`, respectively.
100
+
101
+ Instead of returning a list of values corresponding to each output component in order, you can also return a dictionary, with the key corresponding to the output component and the value as the new value. This also allows you to skip updating some output components.
102
+
103
+ ```python
104
+ with gr.Blocks() as demo:
105
+ food_box = gr.Number(value=10, label="Food Count")
106
+ status_box = gr.Textbox()
107
+ def eat(food):
108
+ if food > 0:
109
+ return {food_box: food - 1, status_box: "full"}
110
+ else:
111
+ return {status_box: "hungry"}
112
+ gr.Button("EAT").click(
113
+ fn=eat,
114
+ inputs=food_box,
115
+ outputs=[food_box, status_box]
116
+ )
117
+ ```
118
+
119
+ Notice how when there is no food, we only update the `status_box` element. We skipped updating the `food_box` component.
120
+
121
+ Dictionary returns are helpful when an event listener affects many components on return, or conditionally affects some outputs and not others.
122
+
123
+ Keep in mind that with dictionary returns, we still need to specify the possible outputs in the event listener.
124
+
125
+ ## Updating Component Configurations
126
+
127
+ The return value of an event listener function is usually the updated value of the corresponding output Component. Sometimes we want to update the configuration of the Component as well, such as the visibility. In this case, we return a new Component, setting the properties we want to change.
128
+
129
+ $code_blocks_essay_simple
130
+ $demo_blocks_essay_simple
131
+
132
+ See how we can configure the Textbox itself by returning a new `gr.Textbox()` object. The `value=` argument can still be used to update the value along with the Component configuration. Any arguments we do not set will keep their previous values.
133
+
134
+ ## Examples
135
+
136
+ Just like with `gr.Interface`, you can also add examples for your functions when you are working with `gr.Blocks`. In this case, instantiate a `gr.Examples` similar to how you would instantiate any other component. The constructor of `gr.Examples` takes two required arguments:
137
+
138
+ * `examples`: a nested list of examples, in which the outer list consists of examples and each inner list consists of an input corresponding to each input component
139
+ * `inputs`: the component or list of components that should be populated when the examples are clicked
140
+
141
+ You can also set `cache_examples=True` similar to `gr.Interface`, in which case two additional arguments must be provided:
142
+
143
+ * `outputs`: the component or list of components corresponding to the output of the examples
144
+ * `fn`: the function to run to generate the outputs corresponding to the examples
145
+
146
+ Here's an example showing how to use `gr.Examples` in a `gr.Blocks` app:
147
+
148
+ $code_calculator_blocks
149
+
150
+ **Note**: In Gradio 4.0 or later, when you click on examples, not only does the value of the input component update to the example value, but the component's configuration also reverts to the properties with which you constructed the component. This ensures that the examples are compatible with the component even if its configuration has been changed.
151
+
152
+
153
+
154
+ ## Running Events Consecutively
155
+
156
+ You can also run events consecutively by using the `then` method of an event listener. This will run an event after the previous event has finished running. This is useful for running events that update components in multiple steps.
157
+
158
+ For example, in the chatbot example below, we first update the chatbot with the user message immediately, and then update the chatbot with the computer response after a simulated delay.
159
+
160
+ $code_chatbot_consecutive
161
+ $demo_chatbot_consecutive
162
+
163
+ The `.then()` method of an event listener executes the subsequent event regardless of whether the previous event raised any errors. If you'd like to only run subsequent events if the previous event executed successfully, use the `.success()` method, which takes the same arguments as `.then()`.
164
+
165
+ ## Running Events Continuously
166
+
167
+ You can run events on a fixed schedule using the `gr.Timer()` object. This will run the event when the timer's `tick` event fires. See the code below:
168
+
169
+ ```python
170
+ with gr.Blocks() as demo:
171
+ timer = gr.Timer(5)
172
+ textbox = gr.Textbox()
173
+ textbox2 = gr.Textbox()
174
+ timer.tick(set_textbox_fn, textbox, textbox2)
175
+ ```
176
+
177
+ This can also be used directly with a Component's `every=` parameter as such:
178
+
179
+ ```python
180
+ with gr.Blocks() as demo:
181
+ timer = gr.Timer(5)
182
+ textbox = gr.Textbox()
183
+ textbox2 = gr.Textbox(set_textbox_fn, inputs=[textbox], every=timer)
184
+ ```
185
+
186
+ Here is an example of a demo that prints the current timestamp, and also prints random numbers regularly!
187
+
188
+ $code_timer
189
+ $demo_timer
190
+
191
+ ## Gathering Event Data
192
+
193
+ You can gather specific data about an event by adding the associated event data class as a type hint to an argument in the event listener function.
194
+
195
+ For example, event data for `.select()` can be type hinted by a `gradio.SelectData` argument. This event is triggered when a user selects some part of the triggering component, and the event data includes information about what the user specifically selected. If a user selected a specific word in a `Textbox`, a specific image in a `Gallery`, or a specific cell in a `DataFrame`, the event data argument would contain information about the specific selection.
196
+
197
+ In the 2 player tic-tac-toe demo below, a user can select a cell in the `DataFrame` to make a move. The event data argument contains information about the specific cell that was selected. We can first check to see if the cell is empty, and then update the cell with the user's move.
198
+
199
+ $code_tictactoe
200
+ $demo_tictactoe
201
+
202
+ ## Binding Multiple Triggers to a Function
203
+
204
+ Oftentimes, you may want to bind multiple triggers to the same function. For example, you may want to allow a user to click a submit button or press enter to submit a form. You can do this using the `gr.on` method and passing a list of triggers to the `triggers` argument.
205
+
206
+ $code_on_listener_basic
207
+ $demo_on_listener_basic
208
+
209
+ You can use decorator syntax as well:
210
+
211
+ $code_on_listener_decorator
212
+
213
+ You can use `gr.on` to create "live" events by binding to the `change` event of components that implement it. If you do not specify any triggers, the function will automatically bind to the `change` events of all input components that have one (for example, `gr.Textbox` has a `change` event whereas `gr.Button` does not).
214
+
215
+ $code_on_listener_live
216
+ $demo_on_listener_live
217
+
218
+ You can follow `gr.on` with `.then`, just like any regular event listener. This handy method should save you from having to write a lot of repetitive code!
219
+
220
+ ## Binding a Component Value Directly to a Function of Other Components
221
+
222
+ If you want to set a Component's value to always be a function of the value of other Components, you can use the following shorthand:
223
+
224
+ ```python
225
+ with gr.Blocks() as demo:
226
+ num1 = gr.Number()
227
+ num2 = gr.Number()
228
+ product = gr.Number(lambda a, b: a * b, inputs=[num1, num2])
229
+ ```
230
+
231
+ This is functionally the same as:
232
+ ```python
233
+ with gr.Blocks() as demo:
234
+ num1 = gr.Number()
235
+ num2 = gr.Number()
236
+ product = gr.Number()
237
+
238
+ gr.on(
239
+ [num1.change, num2.change, demo.load],
240
+ lambda a, b: a * b,
241
+ inputs=[num1, num2],
242
+ outputs=product
243
+ )
244
+ ```
4 Building with Blocks/02_controlling-layout.md ADDED
@@ -0,0 +1,138 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # Controlling Layout
3
+
4
+ By default, Components in Blocks are arranged vertically. Let's take a look at how we can rearrange Components. Under the hood, this layout structure uses the [flexbox model of web development](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox).
5
+
6
+ ## Rows
7
+
8
+ Elements within a `with gr.Row` clause will all be displayed horizontally. For example, to display two Buttons side by side:
9
+
10
+ ```python
11
+ with gr.Blocks() as demo:
12
+ with gr.Row():
13
+ btn1 = gr.Button("Button 1")
14
+ btn2 = gr.Button("Button 2")
15
+ ```
16
+
17
+ To make every element in a Row have the same height, use the `equal_height` argument of `gr.Row`.
18
+
19
+ ```python
20
+ with gr.Blocks() as demo:
21
+ with gr.Row(equal_height=True):
22
+ textbox = gr.Textbox()
23
+ btn2 = gr.Button("Button 2")
24
+ ```
25
+
26
+ The widths of elements in a Row can be controlled via a combination of `scale` and `min_width` arguments that are present in every Component.
27
+
28
+ - `scale` is an integer that defines how an element will take up space in a Row. If scale is set to `0`, the element will not expand to take up space. If scale is set to `1` or greater, the element will expand. Multiple elements in a row will expand proportional to their scale. Below, `btn2` will expand twice as much as `btn1`, while `btn0` will not expand at all:
29
+
30
+ ```python
31
+ with gr.Blocks() as demo:
32
+ with gr.Row():
33
+ btn0 = gr.Button("Button 0", scale=0)
34
+ btn1 = gr.Button("Button 1", scale=1)
35
+ btn2 = gr.Button("Button 2", scale=2)
36
+ ```
37
+
38
+ - `min_width` will set the minimum width the element will take. The Row will wrap if there isn't sufficient space to satisfy all `min_width` values.
39
+
40
+ Learn more about Rows in the [docs](https://gradio.app/docs/row).
41
+
42
+ ## Columns and Nesting
43
+
44
+ Components within a Column will be placed vertically atop each other. Since the vertical layout is the default layout for Blocks apps anyway, to be useful, Columns are usually nested within Rows. For example:
45
+
46
+ $code_rows_and_columns
47
+ $demo_rows_and_columns
48
+
49
+ See how the first column has two Textboxes arranged vertically. The second column has an Image and Button arranged vertically. Notice how the relative widths of the two columns are set by the `scale` parameter. The column with twice the `scale` value takes up twice the width.
50
+
51
+ Learn more about Columns in the [docs](https://gradio.app/docs/column).
52
+
53
+ ## Dimensions
54
+
55
+ You can control the height and width of various components where these parameters are available. The parameters accept either a number (interpreted as pixels) or a string. Using a string allows the direct application of any CSS unit to the encapsulating Block element, catering to more specific design requirements. When omitted, Gradio uses default dimensions suited for most use cases.
56
+
57
+ Below is an example illustrating the use of viewport width (vw):
58
+
59
+ ```python
60
+ import gradio as gr
61
+
62
+ with gr.Blocks() as demo:
63
+ im = gr.ImageEditor(
64
+ width="50vw",
65
+ )
66
+
67
+ demo.launch()
68
+ ```
69
+
70
+ When using percentage values for dimensions, you may want to define a parent component with an absolute unit (e.g. `px` or `vw`). This approach ensures that child components with relative dimensions are sized appropriately:
71
+
72
+
73
+ ```python
74
+ import gradio as gr
75
+
76
+ css = """
77
+ .container {
78
+ height: 100vh;
79
+ }
80
+ """
81
+
82
+ with gr.Blocks(css=css) as demo:
83
+ with gr.Column(elem_classes=["container"]):
84
+ name = gr.Chatbot(value=[["1", "2"]], height="70%")
85
+
86
+ demo.launch()
87
+ ```
88
+
89
+ In this example, the Column layout component is given a height of 100% of the viewport height (100vh), and the Chatbot component inside it takes up 70% of the Column's height.
90
+
91
+ You can apply any valid CSS unit for these parameters. For a comprehensive list of CSS units, refer to [this guide](https://www.w3schools.com/cssref/css_units.php). We recommend you always consider responsiveness and test your interfaces on various screen sizes to ensure a consistent user experience.
92
+
93
+
94
+
95
+ ## Tabs and Accordions
96
+
97
+ You can also create Tabs using the `with gr.Tab('tab_name'):` clause. Any component created inside of a `with gr.Tab('tab_name'):` context appears in that tab. Consecutive Tab clauses are grouped together so that a single tab can be selected at one time, and only the components within that Tab's context are shown.
98
+
99
+ For example:
100
+
101
+ $code_blocks_flipper
102
+ $demo_blocks_flipper
103
+
104
+ Also note the `gr.Accordion('label')` in this example. The Accordion is a layout that can be toggled open or closed. Like `Tabs`, it is a layout element that can selectively hide or show content. Any components that are defined inside of a `with gr.Accordion('label'):` will be hidden or shown when the accordion's toggle icon is clicked.
105
+
106
+ Learn more about [Tabs](https://gradio.app/docs/tab) and [Accordions](https://gradio.app/docs/accordion) in the docs.
107
+
108
+ ## Visibility
109
+
110
+ Both Components and Layout elements have a `visible` argument that can be set initially and also updated. Setting `gr.Column(visible=...)` on a Column can be used to show or hide a set of Components.
111
+
112
+ $code_blocks_form
113
+ $demo_blocks_form
114
+
115
+ ## Variable Number of Outputs
116
+
117
+ By adjusting the visibility of components in a dynamic way, it is possible to create
118
+ demos with Gradio that support a _variable number of outputs_. Here's a very simple example
119
+ where the number of output textboxes is controlled by an input slider:
120
+
121
+ $code_variable_outputs
122
+ $demo_variable_outputs
123
+
124
+ ## Defining and Rendering Components Separately
125
+
126
+ In some cases, you might want to define components before you actually render them in your UI. For instance, you might want to show an examples section using `gr.Examples` above the corresponding `gr.Textbox` input. Since `gr.Examples` requires as a parameter the input component object, you will need to first define the input component, but then render it later, after you have defined the `gr.Examples` object.
127
+
128
+ The solution to this is to define the `gr.Textbox` outside of the `gr.Blocks()` scope and use the component's `.render()` method wherever you'd like it placed in the UI.
129
+
130
+ Here's a full code example:
131
+
132
+ ```python
133
+ input_textbox = gr.Textbox()
134
+
135
+ with gr.Blocks() as demo:
136
+ gr.Examples(["hello", "bonjour", "merhaba"], input_textbox)
137
+ input_textbox.render()
138
+ ```
4 Building with Blocks/03_state-in-blocks.md ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # State in Blocks
3
+
4
+ We covered [State in Interfaces](https://gradio.app/interface-state); this guide takes a look at state in Blocks, which works mostly the same.
5
+
6
+ ## Global State
7
+
8
+ Global state in Blocks works the same as in Interface. Any variable created outside a function call is a reference shared between all users.
9
+
10
+ ## Session State
11
+
12
+ Gradio supports session **state**, where data persists across multiple submits within a page session, in Blocks apps as well. To reiterate, session data is _not_ shared between different users of your model. To store data in a session state, you need to do three things:
13
+
14
+ 1. Create a `gr.State()` object. If there is a default value to this stateful object, pass that into the constructor.
15
+ 2. In the event listener, put the `State` object as an input and output.
16
+ 3. In the event listener function, add the variable to the input parameters and the return value.
17
+
18
+ Let's take a look at a game of hangman.
19
+
20
+ $code_hangman
21
+ $demo_hangman
22
+
23
+ Let's see how we do each of the 3 steps listed above in this game:
24
+
25
+ 1. We store the used letters in `used_letters_var`. In the constructor of `State`, we set the initial value of this to `[]`, an empty list.
26
+ 2. In `btn.click()`, we have a reference to `used_letters_var` in both the inputs and outputs.
27
+ 3. In `guess_letter`, we pass the value of this `State` to `used_letters`, and then return an updated value of this `State` in the return statement.
28
+
29
+ With more complex apps, you will likely have many State variables storing session state in a single Blocks app.
30
+
31
+ Learn more about `State` in the [docs](https://gradio.app/docs/state).
4 Building with Blocks/04_dynamic-apps-with-render-decorator.md ADDED
@@ -0,0 +1,67 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # Dynamic Apps with the Render Decorator
3
+
4
+ The components and event listeners you define in a Blocks app so far have been fixed - once the demo was launched, new components and listeners could not be added, and existing ones could not be removed.
5
+
6
+ The `@gr.render` decorator introduces the ability to dynamically change this. Let's take a look.
7
+
8
+ ## Dynamic Number of Components
9
+
10
+ In the example below, we will create a variable number of Textboxes. When the user edits the input Textbox, we create a Textbox for each letter in the input. Try it out below:
11
+
12
+ $code_render_split_simple
13
+ $demo_render_split_simple
14
+
15
+ See how we can now create a variable number of Textboxes using our custom logic - in this case, a simple `for` loop. The `@gr.render` decorator enables this with the following steps:
16
+
17
+ 1. Create a function and attach the @gr.render decorator to it.
18
+ 2. Add the input components to the `inputs=` argument of @gr.render, and create a corresponding argument in your function for each component. This function will automatically re-run on any change to a component.
19
+ 3. Add all components inside the function that you want to render based on the inputs.
20
+
21
+ Now whenever the inputs change, the function re-runs, and replaces the components created from the previous function run with the latest run. Pretty straightforward! Let's add a little more complexity to this app:
22
+
23
+ $code_render_split
24
+ $demo_render_split
25
+
26
+ By default, `@gr.render` re-runs are triggered by the `.load` listener to the app and the `.change` listener to any input component provided. We can override this by explicitly setting the triggers in the decorator, as we have in this app to only trigger on `input_text.submit` instead.
27
+ If you are setting custom triggers, and you also want an automatic render at the start of the app, make sure to add `demo.load` to your list of triggers.
28
+
29
+ ## Dynamic Event Listeners
30
+
31
+ If you're creating components, you probably want to attach event listeners to them as well. Let's take a look at an example that takes in a variable number of Textbox as input, and merges all the text into a single box.
32
+
33
+ $code_render_merge_simple
34
+ $demo_render_merge_simple
35
+
36
+ Let's take a look at what's happening here:
37
+
38
+ 1. The state variable `text_count` is keeping track of the number of Textboxes to create. By clicking on the Add button, we increase `text_count` which triggers the render decorator.
39
+ 2. Note that in every single Textbox we create in the render function, we explicitly set a `key=` argument. This key allows us to preserve the value of this Component between re-renders. If you type in a value in a textbox, and then click the Add button, all the Textboxes re-render, but their values aren't cleared because the `key=` maintains the value of a Component across a render.
40
+ 3. We've stored the Textboxes created in a list, and provide this list as input to the merge button event listener. Note that **all event listeners that use Components created inside a render function must also be defined inside that render function**. The event listener can still reference Components outside the render function, as we do here by referencing `merge_btn` and `output` which are both defined outside the render function.
41
+
42
+ Just as with Components, whenever a function re-renders, the event listeners created from the previous render are cleared and the new event listeners from the latest run are attached.
43
+
44
+ This allows us to create highly customizable and complex interactions!
45
+
46
+ ## Putting it Together
47
+
48
+ Let's look at two examples that use all the features above. First, try out the to-do list app below:
49
+
50
+ $code_todo_list
51
+ $demo_todo_list
52
+
53
+ Note that almost the entire app is inside a single `gr.render` that reacts to the tasks `gr.State` variable. This variable is a nested list, which presents some complexity. If you design a `gr.render` to react to a list or dict structure, ensure you do the following:
54
+
55
+ 1. Any event listener that modifies a state variable in a manner that should trigger a re-render must set the state variable as an output. This lets Gradio know to check if the variable has changed behind the scenes.
56
+ 2. In a `gr.render`, if a variable in a loop is used inside an event listener function, that variable should be "frozen" via setting it to itself as a default argument in the function header. See how we have `task=task` in both `mark_done` and `delete`. This freezes the variable to its "loop-time" value.
57
+
58
+ Let's take a look at one last example that uses everything we learned. Below is an audio mixer. Provide multiple audio tracks and mix them together.
59
+
60
+ $code_audio_mixer
61
+ $demo_audio_mixer
62
+
63
+ Two things to note in this app:
64
+ 1. Here we provide `key=` to all the components! We need to do this so that if we add another track after setting the values for an existing track, our input values to the existing track do not get reset on re-render.
65
+ 2. When there are lots of components of different types and arbitrary counts passed to an event listener, it is easier to use the set and dictionary notation for inputs rather than list notation. Above, we make one large set of all the input `gr.Audio` and `gr.Slider` components when we pass the inputs to the `merge` function. In the function body we query the component values as a dict.
66
+
67
+ The `gr.render` decorator expands Gradio's capabilities extensively - see what you can make out of it!
4 Building with Blocks/05_custom-CSS-and-JS.md ADDED
@@ -0,0 +1,123 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # Customizing your demo with CSS and Javascript
3
+
4
+ Gradio allows you to customize your demo in several ways. You can customize the layout of your demo, add custom HTML, and add custom theming as well. This tutorial will go beyond that and walk you through how to add custom CSS and JavaScript code to your demo in order to add custom styling, animations, custom UI functionality, analytics, and more.
5
+
6
+ ## Adding custom CSS to your demo
7
+
8
+ Gradio themes are the easiest way to customize the look and feel of your app. You can choose from a variety of themes, or create your own. To do so, pass the `theme=` kwarg to the `Blocks` constructor. For example:
9
+
10
+ ```python
11
+ with gr.Blocks(theme=gr.themes.Glass()):
12
+ ...
13
+ ```
14
+
15
+ Gradio comes with a set of prebuilt themes which you can load from `gr.themes.*`. You can extend these themes or create your own themes from scratch - see the [Theming guide](/guides/theming-guide) for more details.
16
+
17
+ For additional styling ability, you can pass any CSS to your app using the `css=` kwarg. You can pass either a filepath to a CSS file, or a string of CSS code.
18
+
19
+ **Warning**: The use of query selectors in custom JS and CSS is _not_ guaranteed to work across Gradio versions as the Gradio HTML DOM may change. We recommend using query selectors sparingly.
20
+
21
+ The base class for the Gradio app is `gradio-container`, so here's an example that changes the background color of the Gradio app:
22
+
23
+ ```python
24
+ with gr.Blocks(css=".gradio-container {background-color: red}") as demo:
25
+ ...
26
+ ```
27
+
28
+ If you'd like to reference external files in your css, preface the file path (which can be a relative or absolute path) with `"file="`, for example:
29
+
30
+ ```python
31
+ with gr.Blocks(css=".gradio-container {background: url('file=clouds.jpg')}") as demo:
32
+ ...
33
+ ```
34
+
35
+ Note: By default, files in the host machine are not accessible to users running the Gradio app. As a result, you should make sure that any referenced files (such as `clouds.jpg` here) are either URLs or allowed via the `allowed_paths` parameter in `launch()`. Read more in our [section on Security and File Access](/guides/sharing-your-app#security-and-file-access).
36
+
37
+
38
+ ## The `elem_id` and `elem_classes` Arguments
39
+
40
+ You can use `elem_id` to add an HTML element `id` to any component, and `elem_classes` to add a class or list of classes. This will allow you to select elements more easily with CSS. This approach is also more likely to be stable across Gradio versions as built-in class names or ids may change (however, as mentioned in the warning above, we cannot guarantee complete compatibility between Gradio versions if you use custom CSS as the DOM elements may themselves change).
41
+
42
+ ```python
43
+ css = """
44
+ #warning {background-color: #FFCCCB}
45
+ .feedback textarea {font-size: 24px !important}
46
+ """
47
+
48
+ with gr.Blocks(css=css) as demo:
49
+ box1 = gr.Textbox(value="Good Job", elem_classes="feedback")
50
+ box2 = gr.Textbox(value="Failure", elem_id="warning", elem_classes="feedback")
51
+ ```
52
+
53
+ The CSS `#warning` ruleset will only target the second Textbox, while the `.feedback` ruleset will target both. Note that when targeting classes, you might need to add the `!important` flag to override the default Gradio styles.
54
+
55
+ ## Adding custom JavaScript to your demo
56
+
57
+ There are 3 ways to add javascript code to your Gradio demo:
58
+
59
+ 1. You can add JavaScript code as a string or as a filepath to the `js` parameter of the `Blocks` or `Interface` initializer. This will run the JavaScript code when the demo is first loaded.
60
+
61
+ Below is an example of adding custom js to show an animated welcome message when the demo first loads.
62
+
63
+ $code_blocks_js_load
64
+ $demo_blocks_js_load
65
+
66
+ Note: You can also supply your custom js code as a file path. For example, if you have a file called `custom.js` in the same directory as your Python script, you can add it to your demo like so: `with gr.Blocks(js="custom.js") as demo:`. Same goes for `Interface` (ex: `gr.Interface(..., js="custom.js")`).
67
+
68
+ 2. When using `Blocks` and event listeners, events have a `js` argument that can take a JavaScript function as a string and treat it just like a Python event listener function. You can pass both a JavaScript function and a Python function (in which case the JavaScript function is run first) or only Javascript (and set the Python `fn` to `None`). Take a look at the code below:
69
+
70
+ $code_blocks_js_methods
71
+ $demo_blocks_js_methods
72
+
73
+ 3. Lastly, you can add JavaScript code to the `head` param of the `Blocks` initializer. This will add the code to the head of the HTML document. For example, you can add Google Analytics to your demo like so:
74
+
75
+
76
+ ```python
77
+ head = f"""
78
+ <script async src="https://www.googletagmanager.com/gtag/js?id={google_analytics_tracking_id}"></script>
79
+ <script>
80
+ window.dataLayer = window.dataLayer || [];
81
+ function gtag(){{dataLayer.push(arguments);}}
82
+ gtag('js', new Date());
83
+ gtag('config', '{google_analytics_tracking_id}');
84
+ </script>
85
+ """
86
+
87
+ with gr.Blocks(head=head) as demo:
88
+ ...demo code...
89
+ ```
90
+
91
+ The `head` parameter accepts any HTML tags you would normally insert into the `<head>` of a page. For example, you can also include `<meta>` tags in `head`.
92
+
93
+ Note that injecting custom HTML can affect browser behavior and compatibility (e.g. keyboard shortcuts). You should test your interface across different browsers and be mindful of how scripts may interact with browser defaults.
94
+ Here's an example where pressing `Shift + s` triggers the `click` event of a specific `Button` component if the browser focus is _not_ on an input component (e.g. `Textbox` component):
95
+
96
+ ```python
97
+ import gradio as gr
98
+
99
+ shortcut_js = """
100
+ <script>
101
+ function shortcuts(e) {
102
+ var event = document.all ? window.event : e;
103
+ switch (e.target.tagName.toLowerCase()) {
104
+ case "input":
105
+ case "textarea":
106
+ break;
107
+ default:
108
+ if (e.key.toLowerCase() == "s" && e.shiftKey) {
109
+ document.getElementById("my_btn").click();
110
+ }
111
+ }
112
+ }
113
+ document.addEventListener('keypress', shortcuts, false);
114
+ </script>
115
+ """
116
+
117
+ with gr.Blocks(head=shortcut_js) as demo:
118
+ action_button = gr.Button(value="Name", elem_id="my_btn")
119
+ textbox = gr.Textbox()
120
+ action_button.click(lambda : "button pressed", None, textbox)
121
+
122
+ demo.launch()
123
+ ```
4 Building with Blocks/06_using-blocks-like-functions.md ADDED
@@ -0,0 +1,91 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # Using Gradio Blocks Like Functions
3
+
4
+ Tags: TRANSLATION, HUB, SPACES
5
+
6
+ **Prerequisite**: This Guide builds on the Blocks Introduction. Make sure to [read that guide first](https://gradio.app/blocks-and-event-listeners).
7
+
8
+ ## Introduction
9
+
10
+ Did you know that apart from being a full-stack machine learning demo, a Gradio Blocks app is also a regular-old python function!?
11
+
12
+ This means that if you have a gradio Blocks (or Interface) app called `demo`, you can use `demo` like you would any python function.
13
+
14
+ So doing something like `output = demo("Hello", "friend")` will run the first event defined in `demo` on the inputs "Hello" and "friend" and store it
15
+ in the variable `output`.
16
+
17
+ If I put you to sleep 🥱, please bear with me! By using apps like functions, you can seamlessly compose Gradio apps.
18
+ The following section will show how.
19
+
20
+ ## Treating Blocks like functions
21
+
22
+ Let's say we have the following demo that translates english text to german text.
23
+
24
+ $code_english_translator
25
+
26
+ I already went ahead and hosted it in Hugging Face spaces at [gradio/english_translator](https://huggingface.co/spaces/gradio/english_translator).
27
+
28
+ You can see the demo below as well:
29
+
30
+ $demo_english_translator
31
+
32
+ Now, let's say you have an app that generates english text, but you wanted to additionally generate german text.
33
+
34
+ You could either:
35
+
36
+ 1. Copy the source code of my english-to-german translation and paste it in your app.
37
+
38
+ 2. Load my english-to-german translation in your app and treat it like a normal python function.
39
+
40
+ Option 1 technically always works, but it often introduces unwanted complexity.
41
+
42
+ Option 2 lets you borrow the functionality you want without tightly coupling our apps.
43
+
44
+ All you have to do is call the `Blocks.load` class method in your source file.
45
+ After that, you can use my translation app like a regular python function!
46
+
47
+ The following code snippet and demo shows how to use `Blocks.load`.
48
+
49
+ Note that the variable `english_translator` is my english to german app, but it's used in `generate_text` like a regular function.
50
+
51
+ $code_generate_english_german
52
+
53
+ $demo_generate_english_german
54
+
55
+ ## How to control which function in the app to use
56
+
57
+ If the app you are loading defines more than one function, you can specify which function to use
58
+ with the `fn_index` and `api_name` parameters.
59
+
60
+ In the code for our english to german demo, you'll see the following line:
61
+
62
+ ```python
63
+ translate_btn.click(translate, inputs=english, outputs=german, api_name="translate-to-german")
64
+ ```
65
+
66
+ The `api_name` gives this function a unique name in our app. You can use this name to tell gradio which
67
+ function in the upstream space you want to use:
68
+
69
+ ```python
70
+ english_generator(text, api_name="translate-to-german")[0]["generated_text"]
71
+ ```
72
+
73
+ You can also use the `fn_index` parameter.
74
+ Imagine my app also defined an english to spanish translation function.
75
+ In order to use it in our text generation app, we would use the following code:
76
+
77
+ ```python
78
+ english_generator(text, fn_index=1)[0]["generated_text"]
79
+ ```
80
+
81
+ Functions in gradio spaces are zero-indexed, so since the spanish translator would be the second function in my space,
82
+ you would use index 1.
83
+
84
+ ## Parting Remarks
85
+
86
+ We showed how treating a Blocks app like a regular python function helps you compose functionality across different apps.
87
+ Any Blocks app can be treated like a function, but a powerful pattern is to `load` an app hosted on
88
+ [Hugging Face Spaces](https://huggingface.co/spaces) prior to treating it like a function in your own app.
89
+ You can also load models hosted on the [Hugging Face Model Hub](https://huggingface.co/models) - see the [Using Hugging Face Integrations](/using_hugging_face_integrations) guide for an example.
90
+
91
+ ### Happy building! ⚒️
5 Chatbots/01_creating-a-chatbot-fast.md ADDED
@@ -0,0 +1,366 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # How to Create a Chatbot with Gradio
3
+
4
+ Tags: NLP, TEXT, CHAT
5
+
6
+ ## Introduction
7
+
8
+ Chatbots are a popular application of large language models. Using `gradio`, you can easily build a demo of your chatbot model and share that with your users, or try it yourself using an intuitive chatbot UI.
9
+
10
+ This tutorial uses `gr.ChatInterface()`, which is a high-level abstraction that allows you to create your chatbot UI fast, often with a single line of code. The chatbot interface that we create will look something like this:
11
+
12
+ $demo_chatinterface_streaming_echo
13
+
14
+ We'll start with a couple of simple examples, and then show how to use `gr.ChatInterface()` with real language models from several popular APIs and libraries, including `langchain`, `openai`, and Hugging Face.
15
+
16
+ **Prerequisites**: please make sure you are using the **latest version** of Gradio:
17
+
18
+ ```bash
19
+ $ pip install --upgrade gradio
20
+ ```
21
+
22
+ ## Defining a chat function
23
+
24
+ When working with `gr.ChatInterface()`, the first thing you should do is define your chat function. Your chat function should take two arguments: `message` and then `history` (the arguments can be named anything, but must be in this order).
25
+
26
+ - `message`: a `str` representing the user's input.
27
+ - `history`: a `list` of `list` representing the conversations up until that point. Each inner list consists of two `str` representing a pair: `[user input, bot response]`.
28
+
29
+ Your function should return a single string response, which is the bot's response to the particular user input `message`. Your function can take into account the `history` of messages, as well as the current message.
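As a minimal sketch (the function name is just illustrative), here's a chat function that uses both arguments, counting the completed turns stored in `history`:

```python
def turn_counter(message, history):
    # history is a list of [user_message, bot_response] pairs,
    # so its length equals the number of completed turns so far
    turn = len(history) + 1
    return f"Turn {turn}: you said '{message}'"
```

Passing this function to `gr.ChatInterface(turn_counter).launch()` works exactly like the examples that follow.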
30
+
31
+ Let's take a look at a few examples.
32
+
33
+ ## Example: a chatbot that responds yes or no
34
+
35
+ Let's write a chat function that responds `Yes` or `No` randomly.
36
+
37
+ Here's our chat function:
38
+
39
+ ```python
40
+ import random
41
+
42
+ def random_response(message, history):
43
+ return random.choice(["Yes", "No"])
44
+ ```
45
+
46
+ Now, we can plug this into `gr.ChatInterface()` and call the `.launch()` method to create the web interface:
47
+
48
+ ```python
49
+ import gradio as gr
50
+
51
+ gr.ChatInterface(random_response).launch()
52
+ ```
53
+
54
+ That's it! Here's our running demo, try it out:
55
+
56
+ $demo_chatinterface_random_response
57
+
58
+ ## Another example using the user's input and history
59
+
60
+ Of course, the previous example was very simplistic: it didn't even take user input or the previous history into account! Here's another simple example showing how to incorporate a user's input as well as the history.
61
+
62
+ ```python
63
+ import random
64
+ import gradio as gr
65
+
66
+ def alternatingly_agree(message, history):
67
+ if len(history) % 2 == 0:
68
+ return f"Yes, I do think that '{message}'"
69
+ else:
70
+ return "I don't think so"
71
+
72
+ gr.ChatInterface(alternatingly_agree).launch()
73
+ ```
74
+
75
+ ## Streaming chatbots
76
+
77
+ In your chat function, you can use `yield` to generate a sequence of partial responses, each replacing the previous ones. This way, you'll end up with a streaming chatbot. It's that simple!
78
+
79
+ ```python
80
+ import time
81
+ import gradio as gr
82
+
83
+ def slow_echo(message, history):
84
+ for i in range(len(message)):
85
+ time.sleep(0.3)
86
+ yield "You typed: " + message[: i+1]
87
+
88
+ gr.ChatInterface(slow_echo).launch()
89
+ ```
90
+
91
+
92
+ Tip: While the response is streaming, the "Submit" button turns into a "Stop" button that can be used to stop the generator function. You can customize the appearance and behavior of the "Stop" button using the `stop_btn` parameter.
93
+
94
+ ## Customizing your chatbot
95
+
96
+ If you're familiar with Gradio's `Interface` class, the `gr.ChatInterface` includes many of the same arguments that you can use to customize the look and feel of your Chatbot. For example, you can:
97
+
98
+ - add a title and description above your chatbot using `title` and `description` arguments.
99
+ - add a theme or custom css using `theme` and `css` arguments respectively.
100
+ - add `examples` and even enable `cache_examples`, which makes it easier for users to try it out.
101
+ - change the text of or disable each of the buttons that appear in the chatbot interface: `submit_btn`, `retry_btn`, `undo_btn`, `clear_btn`.
102
+
103
+ If you want to customize the `gr.Chatbot` or `gr.Textbox` that compose the `ChatInterface`, then you can pass in your own chatbot or textbox as well. Here's an example of how we can use these parameters:
104
+
105
+ ```python
106
+ import gradio as gr
107
+
108
+ def yes_man(message, history):
109
+ if message.endswith("?"):
110
+ return "Yes"
111
+ else:
112
+ return "Ask me anything!"
113
+
114
+ gr.ChatInterface(
115
+ yes_man,
116
+ chatbot=gr.Chatbot(height=300),
117
+ textbox=gr.Textbox(placeholder="Ask me a yes or no question", container=False, scale=7),
118
+ title="Yes Man",
119
+ description="Ask Yes Man any question",
120
+ theme="soft",
121
+ examples=["Hello", "Am I cool?", "Are tomatoes vegetables?"],
122
+ cache_examples=True,
123
+ retry_btn=None,
124
+ undo_btn="Delete Previous",
125
+ clear_btn="Clear",
126
+ ).launch()
127
+ ```
128
+
129
+ In particular, if you'd like to add a "placeholder" for your chat interface, which appears before the user has started chatting, you can do so using the `placeholder` argument of `gr.Chatbot`, which accepts Markdown or HTML.
130
+
131
+ ```python
132
+ gr.ChatInterface(
133
+ yes_man,
134
+ chatbot=gr.Chatbot(placeholder="<strong>Your Personal Yes-Man</strong><br>Ask Me Anything"),
135
+ ...
136
+ ```
137
+
138
+ The placeholder appears vertically and horizontally centered in the chatbot.
139
+
140
+ ## Add Multimodal Capability to your chatbot
141
+
142
+ You may want to add multimodal capability to your chatbot. For example, you may want users to be able to easily upload images or files to your chatbot and ask questions about it. You can make your chatbot "multimodal" by passing in a single parameter (`multimodal=True`) to the `gr.ChatInterface` class.
143
+
144
+
145
+ ```python
146
+ import gradio as gr
148
+
149
+ def count_files(message, history):
150
+ num_files = len(message["files"])
151
+ return f"You uploaded {num_files} files"
152
+
153
+ demo = gr.ChatInterface(fn=count_files, examples=[{"text": "Hello", "files": []}], title="Echo Bot", multimodal=True)
154
+
155
+ demo.launch()
156
+ ```
157
+
158
+ When `multimodal=True`, the signature of `fn` changes slightly. The first parameter of your function should accept a dictionary consisting of the submitted text and uploaded files that looks like this: `{"text": "user input", "files": ["file_path1", "file_path2", ...]}`. Similarly, any examples you provide should be in a dictionary of this form. Your function should still return a single `str` message.
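For instance, a hypothetical multimodal chat function could separate the submitted text from the uploaded file paths like this:

```python
def describe_submission(message, history):
    # message is a dict like {"text": "...", "files": ["path1", ...]}
    text = message["text"]
    num_files = len(message["files"])
    if num_files == 0:
        return f"You said '{text}' and uploaded no files"
    return f"You said '{text}' and uploaded {num_files} file(s)"
```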
159
+
160
+ Tip: If you'd like to customize the UI/UX of the textbox for your multimodal chatbot, you should pass in an instance of `gr.MultimodalTextbox` to the `textbox` argument of `ChatInterface` instead of an instance of `gr.Textbox`.
161
+
162
+ ## Additional Inputs
163
+
164
+ You may want to add additional parameters to your chatbot and expose them to your users through the Chatbot UI. For example, suppose you want to add a textbox for a system prompt, or a slider that sets the number of tokens in the chatbot's response. The `ChatInterface` class supports an `additional_inputs` parameter which can be used to add additional input components.
165
+
166
+ The `additional_inputs` parameter accepts a component or a list of components. You can pass the component instances directly, or use their string shortcuts (e.g. `"textbox"` instead of `gr.Textbox()`). If you pass in component instances, and they have _not_ already been rendered, then the components will appear underneath the chatbot (and any examples) within a `gr.Accordion()`. You can set the label of this accordion using the `additional_inputs_accordion_name` parameter.
167
+
168
+ Here's a complete example:
169
+
170
+ $code_chatinterface_system_prompt
171
+
172
+ If the components you pass into the `additional_inputs` have already been rendered in a parent `gr.Blocks()`, then they will _not_ be re-rendered in the accordion. This provides flexibility in deciding where to lay out the input components. In the example below, we position the `gr.Textbox()` on top of the Chatbot UI, while keeping the slider underneath.
173
+
174
+ ```python
175
+ import gradio as gr
176
+ import time
177
+
178
+ def echo(message, history, system_prompt, tokens):
179
+ response = f"System prompt: {system_prompt}\n Message: {message}."
180
+ for i in range(min(len(response), int(tokens))):
181
+ time.sleep(0.05)
182
+ yield response[: i+1]
183
+
184
+ with gr.Blocks() as demo:
185
+ system_prompt = gr.Textbox("You are helpful AI.", label="System Prompt")
186
+ slider = gr.Slider(10, 100, render=False)
187
+
188
+ gr.ChatInterface(
189
+ echo, additional_inputs=[system_prompt, slider]
190
+ )
191
+
192
+ demo.launch()
193
+ ```
194
+
195
+ If you need to create something even more custom, then it's best to construct the chatbot UI using the low-level `gr.Blocks()` API. We have [a dedicated guide for that here](/guides/creating-a-custom-chatbot-with-blocks).
196
+
197
+ ## Using Gradio Components inside the Chatbot
198
+
199
+ The `Chatbot` component supports using many of the core Gradio components (such as `gr.Image`, `gr.Plot`, `gr.Audio`, and `gr.HTML`) inside of the chatbot. Simply return one of these components from your function to use it with `gr.ChatInterface`. Here's an example:
200
+
201
+ ```py
202
+ import gradio as gr
203
+
204
+ def fake(message, history):
205
+ if message.strip():
206
+ return gr.Audio("https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav")
207
+ else:
208
+ return "Please provide the name of an artist"
209
+
210
+ gr.ChatInterface(
211
+ fake,
212
+ textbox=gr.Textbox(placeholder="Which artist's music do you want to listen to?", scale=7),
213
+ chatbot=gr.Chatbot(placeholder="Play music by any artist!"),
214
+ ).launch()
215
+ ```
216
+
217
+ ## Using your chatbot via an API
218
+
219
+ Once you've built your Gradio chatbot and are hosting it on [Hugging Face Spaces](https://hf.space) or somewhere else, then you can query it with a simple API at the `/chat` endpoint. The endpoint just expects the user's message (and potentially additional inputs if you have set any using the `additional_inputs` parameter), and will return the response, internally keeping track of the messages sent so far.
220
+
221
+ ![](https://github.com/gradio-app/gradio/assets/1778297/7b10d6db-6476-4e2e-bebd-ecda802c3b8f)
222
+
223
+ To use the endpoint, you should use either the [Gradio Python Client](/guides/getting-started-with-the-python-client) or the [Gradio JS client](/guides/getting-started-with-the-js-client).
224
+
225
+ ## A `langchain` example
226
+
227
+ Now, let's actually use the `gr.ChatInterface` with some real large language models. We'll start by using `langchain` on top of `openai` to build a general-purpose streaming chatbot application in 19 lines of code. You'll need to have an OpenAI key for this example (keep reading for the free, open-source equivalent!)
228
+
229
+ ```python
230
+ from langchain.chat_models import ChatOpenAI
231
+ from langchain.schema import AIMessage, HumanMessage
232
+ import os
233
+ import gradio as gr
234
+
235
+ os.environ["OPENAI_API_KEY"] = "sk-..." # Replace with your key
236
+
237
+ llm = ChatOpenAI(temperature=1.0, model='gpt-3.5-turbo-0613')
238
+
239
+ def predict(message, history):
240
+ history_langchain_format = []
241
+ for human, ai in history:
242
+ history_langchain_format.append(HumanMessage(content=human))
243
+ history_langchain_format.append(AIMessage(content=ai))
244
+ history_langchain_format.append(HumanMessage(content=message))
245
+ gpt_response = llm(history_langchain_format)
246
+ return gpt_response.content
247
+
248
+ gr.ChatInterface(predict).launch()
249
+ ```
250
+
251
+ ## A streaming example using `openai`
252
+
253
+ Of course, we could also use the `openai` library directly. Here's a similar example, but this time with streaming results as well:
254
+
255
+ ```python
256
+ from openai import OpenAI
257
+ import gradio as gr
258
+
259
+ api_key = "sk-..." # Replace with your key
260
+ client = OpenAI(api_key=api_key)
261
+
262
+ def predict(message, history):
263
+ history_openai_format = []
264
+ for human, assistant in history:
265
+ history_openai_format.append({"role": "user", "content": human })
266
+ history_openai_format.append({"role": "assistant", "content":assistant})
267
+ history_openai_format.append({"role": "user", "content": message})
268
+
269
+ response = client.chat.completions.create(model='gpt-3.5-turbo',
270
+ messages= history_openai_format,
271
+ temperature=1.0,
272
+ stream=True)
273
+
274
+ partial_message = ""
275
+ for chunk in response:
276
+ if chunk.choices[0].delta.content is not None:
277
+ partial_message = partial_message + chunk.choices[0].delta.content
278
+ yield partial_message
279
+
280
+ gr.ChatInterface(predict).launch()
281
+ ```
282
+
283
+ **Handling Concurrent Users with Threads**
284
+
285
+ The example above works whether you have a single user or multiple users, since it passes the entire history of the conversation each time there is a new message from a user.
286
+
287
+ However, the `openai` library also provides higher-level abstractions that manage conversation history for you, e.g. the [Threads abstraction](https://platform.openai.com/docs/assistants/how-it-works/managing-threads-and-messages). If you use these abstractions, you will need to create a separate thread for each user session. Here's a partial example of how you can do that, by accessing the `session_hash` within your `predict()` function:
288
+
289
+ ```py
290
+ import os
+ import openai
291
+ import gradio as gr
292
+
293
+ client = openai.OpenAI(api_key = os.getenv("OPENAI_API_KEY"))
294
+ threads = {}
295
+
296
+ def predict(message, history, request: gr.Request):
297
+ if request.session_hash not in threads:
+     threads[request.session_hash] = client.beta.threads.create()
+ thread = threads[request.session_hash]
301
+
302
+ message = client.beta.threads.messages.create(
303
+ thread_id=thread.id,
304
+ role="user",
305
+ content=message)
306
+
307
+ ...
308
+
309
+ gr.ChatInterface(predict).launch()
310
+ ```
311
+
312
+ ## Example using a local, open-source LLM with Hugging Face
313
+
314
+ Of course, in many cases you want to run a chatbot locally. Here's the equivalent example using Together's RedPajama model, from Hugging Face (this requires you to have a GPU with CUDA).
315
+
316
+ ```python
317
+ import gradio as gr
318
+ import torch
319
+ from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
320
+ from threading import Thread
321
+
322
+ tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1")
323
+ model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-3B-v1", torch_dtype=torch.float16)
324
+ model = model.to('cuda:0')
325
+
326
+ class StopOnTokens(StoppingCriteria):
327
+ def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
328
+ stop_ids = [29, 0]
329
+ for stop_id in stop_ids:
330
+ if input_ids[0][-1] == stop_id:
331
+ return True
332
+ return False
333
+
334
+ def predict(message, history):
335
+ history_transformer_format = history + [[message, ""]]
336
+ stop = StopOnTokens()
337
+
338
+ messages = "".join(["".join(["\n<human>:"+item[0], "\n<bot>:"+item[1]])
339
+ for item in history_transformer_format])
340
+
341
+ model_inputs = tokenizer([messages], return_tensors="pt").to("cuda")
342
+ streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
343
+ generate_kwargs = dict(
344
+ model_inputs,
345
+ streamer=streamer,
346
+ max_new_tokens=1024,
347
+ do_sample=True,
348
+ top_p=0.95,
349
+ top_k=1000,
350
+ temperature=1.0,
351
+ num_beams=1,
352
+ stopping_criteria=StoppingCriteriaList([stop])
353
+ )
354
+ t = Thread(target=model.generate, kwargs=generate_kwargs)
355
+ t.start()
356
+
357
+ partial_message = ""
358
+ for new_token in streamer:
359
+ if new_token != '<':
360
+ partial_message += new_token
361
+ yield partial_message
362
+
363
+ gr.ChatInterface(predict).launch()
364
+ ```
365
+
366
+ With those examples, you should be all set to create your own Gradio Chatbot demos soon! For building even more custom Chatbot applications, check out [a dedicated guide](/guides/creating-a-custom-chatbot-with-blocks) using the low-level `gr.Blocks()` API.
5 Chatbots/02_creating-a-custom-chatbot-with-blocks.md ADDED
@@ -0,0 +1,114 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # How to Create a Custom Chatbot with Gradio Blocks
3
+
4
+ Tags: NLP, TEXT, CHAT
5
+ Related spaces: https://huggingface.co/spaces/gradio/chatbot_streaming, https://huggingface.co/spaces/project-baize/Baize-7B
6
+
7
+ ## Introduction
8
+
9
+ **Important Note**: if you are getting started, we recommend using the `gr.ChatInterface` to create chatbots -- it's a high-level abstraction that makes it possible to create beautiful chatbot applications fast, often with a single line of code. [Read more about it here](/guides/creating-a-chatbot-fast).
10
+
11
+ This tutorial will show how to make chatbot UIs from scratch with Gradio's low-level Blocks API. This will give you full control over your Chatbot UI. You'll start by creating a simple chatbot to display text, then a second one to stream text responses, and finally a chatbot that can also handle media files. The chatbot interface that we create will look something like this:
12
+
13
+ $demo_chatbot_streaming
14
+
15
+ **Prerequisite**: We'll be using the `gradio.Blocks` class to build our Chatbot demo.
16
+ You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
17
+
18
+ ## A Simple Chatbot Demo
19
+
20
+ Let's start with recreating the simple demo above. As you may have noticed, our bot simply randomly responds "How are you?", "I love you", or "I'm very hungry" to any input. Here's the code to create this with Gradio:
21
+
22
+ $code_chatbot_simple
23
+
24
+ There are three Gradio components here:
25
+
26
+ - A `Chatbot`, whose value stores the entire history of the conversation, as a list of response pairs between the user and bot.
27
+ - A `Textbox` where the user can type their message, and then hit enter/submit to trigger the chatbot response
28
+ - A `ClearButton` button to clear the Textbox and entire Chatbot history
29
+
30
+ We have a single function, `respond()`, which takes in the entire history of the chatbot, appends a random message, waits 1 second, and then returns the updated chat history. The `respond()` function also clears the textbox when it returns.
31
+
32
+ Of course, in practice, you would replace `respond()` with your own more complex function, which might call a pretrained model or an API, to generate a response.
33
+
34
+ $demo_chatbot_simple
35
+
36
+ ## Add Streaming to your Chatbot
37
+
38
+ There are several ways we can improve the user experience of the chatbot above. First, we can stream responses so the user doesn't have to wait as long for a message to be generated. Second, we can have the user message appear immediately in the chat history, while the chatbot's response is being generated. Here's the code to achieve that:
39
+
40
+ $code_chatbot_streaming
41
+
42
+ You'll notice that when a user submits their message, we now _chain_ three events with `.then()`:
43
+
44
+ 1. The first method `user()` updates the chatbot with the user message and clears the input field. This method also makes the input field non-interactive so that the user can't send another message while the chatbot is responding. Because we want this to happen instantly, we set `queue=False`, which would skip any queue had it been enabled. The chatbot's history is appended with `(user_message, None)`, the `None` signifying that the bot has not responded.
45
+
46
+ 2. The second method, `bot()`, updates the chatbot history with the bot's response. Instead of creating a new message, we just replace the previously-created `None` message with the bot's response. Finally, we construct the message character by character and `yield` the intermediate outputs as they are being constructed. Gradio automatically turns any function with the `yield` keyword [into a streaming output interface](/guides/key-features/#iterative-outputs).
47
+
48
+ 3. The third method makes the input field interactive again so that users can send another message to the bot.
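Stripped of the Gradio wiring, the first two chained steps can be sketched as plain Python (using the demo's "How are you?" response and the `user()`/`bot()` names described above):

```python
def user(user_message, history):
    # step 1: clear the textbox and append the user's message
    # with a None placeholder for the bot's reply
    return "", history + [[user_message, None]]

def bot(history):
    # step 2: replace the None placeholder, yielding the
    # partially-constructed response character by character
    response = "How are you?"
    history[-1][1] = ""
    for character in response:
        history[-1][1] += character
        yield history
```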
49
+
50
+ Of course, in practice, you would replace `bot()` with your own more complex function, which might call a pretrained model or an API, to generate a response.
51
+
52
+ Finally, we enable queuing by running `demo.queue()`, which is required for streaming intermediate outputs. You can try the improved chatbot by scrolling to the demo at the top of this page.
53
+
54
+ ## Liking / Disliking Chat Messages
55
+
56
+ Once you've created your `gr.Chatbot`, you can add the ability for users to like or dislike messages. This can be useful if you would like users to vote on a bot's responses or flag inappropriate results.
57
+
58
+ To add this functionality to your Chatbot, simply attach a `.like()` event to your Chatbot. A chatbot that has the `.like()` event will automatically feature a thumbs-up icon and a thumbs-down icon next to every bot message.
59
+
60
+ The `.like()` method requires you to pass in a function that is called when a user clicks on these icons. In your function, you should have an argument whose type is `gr.LikeData`. Gradio will automatically supply the parameter to this argument with an object that contains information about the liked or disliked message. Here's a simplistic example of how you can have users like or dislike chat messages:
61
+
62
+ ```py
63
+ import gradio as gr
64
+
65
+ def greet(history, input):
66
+ return history + [(input, "Hello, " + input)]
67
+
68
+ def vote(data: gr.LikeData):
69
+ if data.liked:
70
+ print("You upvoted this response: " + data.value["value"])
71
+ else:
72
+ print("You downvoted this response: " + data.value["value"])
73
+
74
+
75
+ with gr.Blocks() as demo:
76
+ chatbot = gr.Chatbot()
77
+ textbox = gr.Textbox()
78
+ textbox.submit(greet, [chatbot, textbox], [chatbot])
79
+ chatbot.like(vote, None, None) # Adding this line causes the like/dislike icons to appear in your chatbot
80
+
81
+ demo.launch()
82
+ ```
83
+
84
+ ## Adding Markdown, Images, Audio, or Videos
85
+
86
+ The `gr.Chatbot` component supports a subset of markdown including bold, italics, and code. For example, we could write a function that responds to a user's message, with a bold **That's cool!**, like this:
87
+
88
+ ```py
89
+ def bot(history):
90
+ response = "**That's cool!**"
91
+ history[-1][1] = response
92
+ return history
93
+ ```
94
+
95
+ In addition, it can handle media files, such as images, audio, and video. You can use the `MultimodalTextbox` component to easily upload all types of media files to your chatbot. To pass in a media file, provide it as a tuple of two strings, like this: `(filepath, alt_text)`. The `alt_text` is optional, so you can also just pass in a tuple with a single element `(filepath,)`, like this:
96
+
97
+ ```python
98
+ def add_message(history, message):
99
+ for x in message["files"]:
100
+ history.append(((x["path"],), None))
101
+ if message["text"] is not None:
102
+ history.append((message["text"], None))
103
+ return history, gr.MultimodalTextbox(value=None, interactive=False, file_types=["image"])
104
+ ```
105
+
106
+ Putting this together, we can create a _multimodal_ chatbot with a multimodal textbox for a user to submit text and media files. The rest of the code looks pretty much the same as before:
107
+
108
+ $code_chatbot_multimodal
109
+ $demo_chatbot_multimodal
110
+
111
+ And you're done! That's all the code you need to build an interface for your chatbot model. Finally, we'll end our Guide with some links to Chatbots that are running on Spaces so that you can get an idea of what else is possible:
112
+
113
+ - [project-baize/Baize-7B](https://huggingface.co/spaces/project-baize/Baize-7B): A stylized chatbot that allows you to stop generation as well as regenerate responses.
114
+ - [MAGAer13/mPLUG-Owl](https://huggingface.co/spaces/MAGAer13/mPLUG-Owl): A multimodal chatbot that allows you to upvote and downvote responses.
5 Chatbots/03_creating-a-discord-bot-from-a-gradio-app.md ADDED
@@ -0,0 +1,138 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+
2
+ # 🚀 Creating Discord Bots from Gradio Apps 🚀
3
+
4
+ Tags: NLP, TEXT, CHAT
5
+
6
+ We're excited to announce that Gradio can now automatically create a discord bot from a deployed app! 🤖
7
+
8
+ Discord is a popular communication platform that allows users to chat and interact with each other in real time. By turning your Gradio app into a Discord bot, you can bring cutting-edge AI to your Discord server and give your community a whole new way to interact.
9
+
10
+ ## 💻 How does it work? 💻
11
+
12
+ With `gradio_client` version `0.3.0`, any gradio `ChatInterface` app on the internet can automatically be deployed as a discord bot via the `deploy_discord` method of the `Client` class.
13
+
14
+ Technically, any gradio app that exposes an api route that takes in a single string and outputs a single string can be deployed to discord. In this guide, we will focus on `gr.ChatInterface` as those apps naturally lend themselves to discord's chat functionality.
15
+
16
+ ## 🛠️ Requirements 🛠️
17
+
18
+ Make sure you have the latest `gradio_client` and `gradio` versions installed.
19
+
20
+ ```bash
21
+ pip install gradio_client>=0.3.0 gradio>=3.38.0
22
+ ```
23
+
24
+ Also, make sure you have a [Hugging Face account](https://huggingface.co/) and a [write access token](https://huggingface.co/docs/hub/security-tokens).
25
+
26
+ ⚠️ Tip ⚠️: Make sure you login to the Hugging Face Hub by running `huggingface-cli login`. This will let you skip passing your token in all subsequent commands in this guide.
27
+
28
+ ## 🏃‍♀️ Quickstart 🏃‍♀️
29
+
30
+ ### Step 1: Implementing our chatbot
31
+
32
+ Let's build a very simple chatbot using `ChatInterface` that simply echoes the user's message. Write the following code into a file called `app.py`:
33
+
34
+ ```python
35
+ import gradio as gr
36
+
37
+ def slow_echo(message, history):
38
+ return message
39
+
40
+ demo = gr.ChatInterface(slow_echo).queue().launch()
41
+ ```
42
+
43
+ ### Step 2: Deploying our App
44
+
45
+ In order to create a discord bot for our app, it must be accessible over the internet. In this guide, we will use the `gradio deploy` command to deploy our chatbot to Hugging Face Spaces from the command line. Run the following command:
46
+
47
+ ```bash
48
+ gradio deploy --title echo-chatbot --app-file app.py
49
+ ```
50
+
51
+ This command will ask you some questions (e.g. requested hardware and requirements), but the default values will suffice for this guide.
52
+ Note the URL of the space that was created. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot
53
+
54
+ ### Step 3: Creating a Discord Bot
55
+
56
+ Turning our space into a discord bot is also a one-liner thanks to the `gradio deploy-discord` command. Run the following command:
57
+
58
+ ```bash
59
+ gradio deploy-discord --src freddyaboulton/echo-chatbot
60
+ ```
61
+
62
+ ❗️ Advanced ❗️: If you already have a discord bot token, you can pass it to the `deploy-discord` command. Don't worry if you don't have one yet!
63
+
64
+ ```bash
65
+ gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token>
66
+ ```
67
+
68
+ Note the URL that gets printed out to the console. Mine is https://huggingface.co/spaces/freddyaboulton/echo-chatbot-gradio-discord-bot
69
+
70
+ ### Step 4: Getting a Discord Bot Token
71
+
72
+ If you didn't have a discord bot token for step 3, go to the URL that got printed in the console and follow the instructions there.
73
+ Once you obtain a token, run the command again but this time pass in the token:
74
+
75
+ ```bash
76
+ gradio deploy-discord --src freddyaboulton/echo-chatbot --discord-bot-token <token>
77
+ ```
78
+
79
+ ### Step 5: Add the bot to your server
80
+
81
+ Visit the space of your discord bot. You should see "Add this bot to your server by clicking this link:" followed by a URL. Go to that URL and add the bot to your server!
82
+
83
+ ### Step 6: Use your bot!
84
+
85
+ By default the bot can be called by starting a message with `/chat`, e.g. `/chat <your prompt here>`.
86
+
87
+ ⚠️ Tip ⚠️: If either of the deployed spaces goes to sleep, the bot will stop working. By default, spaces go to sleep after 48 hours of inactivity. You can upgrade the hardware of your space to prevent it from going to sleep. See this [guide](https://huggingface.co/docs/hub/spaces-gpus#using-gpu-spaces) for more information.
88
+
89
+ <img src="https://gradio-builds.s3.amazonaws.com/demo-files/discordbots/guide/echo_slash.gif">
90
+
91
+ ### Using the `gradio_client.Client` Class
92
+
93
+ You can also create a discord bot from a deployed gradio app with python.
94
+
95
+ ```python
96
+ import gradio_client as grc
97
+ grc.Client("freddyaboulton/echo-chatbot").deploy_discord()
98
+ ```
99
+
100
+ ## 🦾 Using State of The Art LLMs 🦾
101
+
102
+ We have created an organization on Hugging Face called [gradio-discord-bots](https://huggingface.co/gradio-discord-bots) containing several template spaces that explain how to deploy state of the art LLMs powered by gradio as discord bots.
103
+
104
+ The easiest way to get started is by deploying Meta's Llama 2 LLM with 70 billion parameters. Simply go to this [space](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-70b-chat-hf) and follow the instructions.
105
+
106
+ The deployment can be done in one line! 🤯
107
+
108
+ ```python
109
+ import gradio_client as grc
110
+ grc.Client("ysharma/Explore_llamav2_with_TGI").deploy_discord(to_id="llama2-70b-discord-bot")
111
+ ```
112
+
113
+ ## 🦜 Additional LLMs 🦜
114
+
115
+ In addition to Meta's 70-billion-parameter Llama 2 model, we have prepared template spaces for the following LLMs and deployment options:
116
+
117
+ - [gpt-3.5-turbo](https://huggingface.co/spaces/gradio-discord-bots/gpt-35-turbo), powered by OpenAI. Requires an OpenAI key.
118
+ - [falcon-7b-instruct](https://huggingface.co/spaces/gradio-discord-bots/falcon-7b-instruct) powered by Hugging Face Inference Endpoints.
119
+ - [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/Llama-2-13b-chat-hf) powered by Hugging Face Inference Endpoints.
120
+ - [Llama-2-13b-chat-hf](https://huggingface.co/spaces/gradio-discord-bots/llama-2-13b-chat-transformers) powered by Hugging Face transformers.
121
+
122
+ To deploy any of these models to discord, simply follow the instructions in the linked space for that model.
123
+
124
+ ## Deploying non-chat gradio apps to discord
125
+
126
+ As mentioned above, you don't need a `gr.ChatInterface` if you want to deploy your gradio app to discord. All that's needed is an api route that takes in a single string and outputs a single string.
127
+
128
+ The following code will deploy a space that translates English to German as a discord bot.
129
+
130
+ ```python
131
+ import gradio_client as grc
132
+ client = grc.Client("freddyaboulton/english-to-german")
133
+ client.deploy_discord(api_names=['german'])
134
+ ```
135
+
136
+ ## Conclusion
137
+
138
+ That's it for this guide! We're really excited about this feature. Tag [@Gradio](https://twitter.com/Gradio) on twitter and show us how your discord community interacts with your discord bots.
6 Custom Components/01_custom-components-in-five-minutes.md ADDED
@@ -0,0 +1,125 @@
1
+
2
+ # Custom Components in 5 minutes
3
+
4
+ Gradio 4.0 introduces Custom Components -- the ability for developers to create their own custom components and use them in Gradio apps.
5
+ You can publish your components as Python packages so that other users can use them as well.
6
+ Users will be able to use all of Gradio's existing functions, such as `gr.Blocks`, `gr.Interface`, API usage, themes, etc. with Custom Components.
7
+ This guide will cover how to get started making custom components.
8
+
9
+ ## Installation
10
+
11
+ You will need to have:
12
+
13
+ * Python 3.8+ (<a href="https://www.python.org/downloads/" target="_blank">install here</a>)
14
+ * pip 21.3+ (`python -m pip install --upgrade pip`)
15
+ * Node.js v16.14+ (<a href="https://nodejs.dev/en/download/package-manager/" target="_blank">install here</a>)
16
+ * npm 9+ (<a href="https://docs.npmjs.com/downloading-and-installing-node-js-and-npm/" target="_blank">install here</a>)
17
+ * Gradio 4.0+ (`pip install --upgrade gradio`)
18
+
19
+ ## The Workflow
20
+
21
+ The Custom Components workflow consists of 4 steps: create, dev, build, and publish.
22
+
23
+ 1. create: creates a template for you to start developing a custom component.
24
+ 2. dev: launches a development server with a sample app & hot reloading, allowing you to easily develop your custom component.
25
+ 3. build: builds a python package containing your custom component's Python and JavaScript code -- this makes things official!
26
+ 4. publish: uploads your package to [PyPi](https://pypi.org/) and/or a sample app to [HuggingFace Spaces](https://hf.co/spaces).
27
+
28
+ Each of these steps is done via the Custom Component CLI. You can invoke it with `gradio cc` or `gradio component`.
29
+
30
+ Tip: Run `gradio cc --help` to get a help menu of all available commands. There are some commands that are not covered in this guide. You can also append `--help` to any command name to bring up a help page for that command, e.g. `gradio cc create --help`.
31
+
32
+ ## 1. create
33
+
34
+ Bootstrap a new template by running the following in any working directory:
35
+
36
+ ```bash
37
+ gradio cc create MyComponent --template SimpleTextbox
38
+ ```
39
+
40
+ Instead of `MyComponent`, give your component any name.
41
+
42
+ Instead of `SimpleTextbox`, you can use any Gradio component as a template. `SimpleTextbox` is a special, stripped-down version of the `Textbox` component, which makes it particularly useful when creating your first custom component.
43
+ Some other components that are good if you are starting out: `SimpleDropdown`, `SimpleImage`, or `File`.
44
+
45
+ Tip: Run `gradio cc show` to get a list of available component templates.
46
+
47
+ The `create` command will:
48
+
49
+ 1. Create a directory with your component's name in lowercase with the following structure:
50
+ ```directory
51
+ - backend/ <- The python code for your custom component
52
+ - frontend/ <- The javascript code for your custom component
53
+ - demo/ <- A sample app using your custom component. Modify this to develop your component!
54
+ - pyproject.toml <- Used to build the package and specify package metadata.
55
+ ```
56
+
57
+ 2. Install the component in development mode
58
+
59
+ Each of the directories will have the code you need to get started developing!
60
+
61
+ ## 2. dev
62
+
63
+ Once you have created your new component, you can start a development server by entering the directory and running:
64
+
65
+ ```bash
66
+ gradio cc dev
67
+ ```
68
+
69
+ You'll see several lines that are printed to the console.
70
+ The most important one is the one that says:
71
+
72
+ > Frontend Server (Go here): http://localhost:7861/
73
+
74
+ The port number might be different for you.
75
+ Click on that link to launch the demo app in hot reload mode.
76
+ Now you can start making changes to the backend and frontend, and you'll see the results reflected live in the sample app!
77
+ We'll go through a real example in a later guide.
78
+
79
+ Tip: You don't have to run dev mode from your custom component directory. The first argument to `dev` mode is the path to the directory. By default it uses the current directory.
80
+
81
+ ## 3. build
82
+
83
+ Once you are satisfied with your custom component's implementation, you can `build` it to use it outside of the development server.
84
+
85
+ From your component directory, run:
86
+
87
+ ```bash
88
+ gradio cc build
89
+ ```
90
+
91
+ This will create a `tar.gz` and `.whl` file in a `dist/` subdirectory.
92
+ If you or anyone else installs that `.whl` file (`pip install <path-to-whl>`), they will be able to use your custom component in any gradio app!
93
+
94
+ The `build` command will also generate documentation for your custom component. This takes the form of an interactive space and a static `README.md`. You can disable this by passing `--no-generate-docs`. You can read more about the documentation generator in [the dedicated guide](https://gradio.app/guides/documenting-custom-components).
95
+
96
+ ## 4. publish
97
+
98
+ Right now, your package is only available as a `.whl` file on your computer.
99
+ You can share that file with the world with the `publish` command!
100
+
101
+ Simply run the following command from your component directory:
102
+
103
+ ```bash
104
+ gradio cc publish
105
+ ```
106
+
107
+ This will guide you through the following process:
108
+
109
+ 1. Upload your distribution files to PyPi. This is optional. If you decide to upload to PyPi, you will need a PyPI username and password. You can get one [here](https://pypi.org/account/register/).
110
+ 2. Upload a demo of your component to Hugging Face Spaces. This is also optional.
111
+
112
+
113
+ Here is an example of what publishing looks like:
114
+
115
+ <video autoplay muted loop>
116
+ <source src="https://gradio-builds.s3.amazonaws.com/assets/text_with_attachments_publish.mov" type="video/mp4" />
117
+ </video>
118
+
119
+
120
+ ## Conclusion
121
+
122
+ Now that you know the high-level workflow of creating custom components, you can go in depth in the next guides!
123
+ After reading the guides, check out this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub so you can learn from others' code.
124
+
125
+ Tip: If you want to start off from someone else's custom component see this [guide](./frequently-asked-questions#do-i-always-need-to-start-my-component-from-scratch).
6 Custom Components/02_key-component-concepts.md ADDED
@@ -0,0 +1,125 @@
1
+
2
+ # Gradio Components: The Key Concepts
3
+
4
+ In this section, we discuss a few important concepts when it comes to components in Gradio.
5
+ It's important to understand these concepts when developing your own component.
6
+ Otherwise, your component may behave very differently from other Gradio components!
7
+
8
+ Tip: You can skip this section if you are familiar with the internals of the Gradio library, such as each component's preprocess and postprocess methods.
9
+
10
+ ## Interactive vs Static
11
+
12
+ Every component in Gradio comes in a `static` variant, and most come in an `interactive` version as well.
13
+ The `static` version is used when a component is displaying a value, and the user can **NOT** change that value by interacting with it.
14
+ The `interactive` version is used when the user is able to change the value by interacting with the Gradio UI.
15
+
16
+ Let's see some examples:
17
+
18
+ ```python
19
+ import gradio as gr
20
+
21
+ with gr.Blocks() as demo:
22
+ gr.Textbox(value="Hello", interactive=True)
23
+ gr.Textbox(value="Hello", interactive=False)
24
+
25
+ demo.launch()
26
+
27
+ ```
28
+ This will display two textboxes.
29
+ The only difference: you'll be able to edit the value of the Gradio component on top, and you won't be able to edit the variant on the bottom (i.e. the textbox will be disabled).
30
+
31
+ Perhaps a more interesting example is with the `Image` component:
32
+
33
+ ```python
34
+ import gradio as gr
35
+
36
+ with gr.Blocks() as demo:
37
+ gr.Image(interactive=True)
38
+ gr.Image(interactive=False)
39
+
40
+ demo.launch()
41
+ ```
42
+
43
+ The interactive version of the component is much more complex -- you can upload images or snap a picture from your webcam -- while the static version can only be used to display images.
44
+
45
+ Not every component has a distinct interactive version. For example, the `gr.AnnotatedImage` only appears as a static version since there's no way to interactively change the value of the annotations or the image.
46
+
47
+ ### What you need to remember
48
+
49
+ * Gradio will use the interactive version (if available) of a component if that component is used as the **input** to any event; otherwise, the static version will be used.
50
+
51
+ * When you design custom components, you **must** accept the boolean interactive keyword in the constructor of your Python class. In the frontend, you **may** accept the `interactive` property, a `bool` which represents whether the component should be static or interactive. If you do not use this property in the frontend, the component will appear the same in interactive or static mode.
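To make the constructor half of this concrete, here is a minimal sketch (the `MyComponent` class is hypothetical, not Gradio's actual base class) of a constructor that accepts the `interactive` keyword:

```python
class MyComponent:
    # Illustrative sketch only -- a real custom component inherits from
    # Gradio's Component class and accepts many more keyword arguments.
    def __init__(self, value=None, *, interactive=None, **kwargs):
        self.value = value
        # None means "let Gradio decide": interactive if the component
        # is used as an event input, static otherwise.
        self.interactive = interactive
```

Gradio itself sets `interactive` for you in most cases; accepting the keyword simply keeps your component consistent with the built-in ones.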
52
+
53
+ ## The value and how it is preprocessed/postprocessed
54
+
55
+ The most important attribute of a component is its `value`.
56
+ Every component has a `value`.
57
+ The value is typically set by the user in the frontend (if the component is interactive) or displayed to the user (if it is static).
58
+ It is also this value that is sent to the backend function when a user triggers an event, or returned by the user's function e.g. at the end of a prediction.
59
+
60
+ So this value is passed around quite a bit, but sometimes the format of the value needs to change between the frontend and backend.
61
+ Take a look at this example:
62
+
63
+ ```python
64
+ import numpy as np
65
+ import gradio as gr
66
+
67
+ def sepia(input_img):
68
+ sepia_filter = np.array([
69
+ [0.393, 0.769, 0.189],
70
+ [0.349, 0.686, 0.168],
71
+ [0.272, 0.534, 0.131]
72
+ ])
73
+ sepia_img = input_img.dot(sepia_filter.T)
74
+ sepia_img /= sepia_img.max()
75
+ return sepia_img
76
+
77
+ demo = gr.Interface(sepia, gr.Image(), "image")
78
+ demo.launch()
79
+ ```
80
+
81
+ This will create a Gradio app which has an `Image` component as the input and the output.
82
+ In the frontend, the Image component will actually **upload** the file to the server and send the **filepath**, but this is converted to a `numpy` array before it is sent to a user's function.
83
+ Conversely, when the user returns a `numpy` array from their function, the numpy array is converted to a file so that it can be sent to the frontend and displayed by the `Image` component.
84
+
85
+ Tip: By default, the `Image` component sends numpy arrays to the python function because it is a common choice for machine learning engineers, though the Image component also supports other formats using the `type` parameter. Read the `Image` docs [here](https://www.gradio.app/docs/image) to learn more.
86
+
87
+ Each component does two conversions:
88
+
89
+ 1. `preprocess`: Converts the `value` from the format sent by the frontend to the format expected by the python function. This usually involves going from a web-friendly **JSON** structure to a **python-native** data structure, like a `numpy` array or `PIL` image. The `Audio` and `Image` components are good examples of `preprocess` methods.
90
+
91
+ 2. `postprocess`: Converts the value returned by the python function to the format expected by the frontend. This usually involves going from a **python-native** data-structure, like a `PIL` image to a **JSON** structure.
92
+
93
+ ### What you need to remember
94
+
95
+ * Every component must implement `preprocess` and `postprocess` methods. In the rare event that no conversion needs to happen, simply return the value as-is. `Textbox` and `Number` are examples of this.
96
+
97
+ * As a component author, **YOU** control the format of the data displayed in the frontend as well as the format of the data someone using your component will receive. Think of an ergonomic data-structure a **python** developer will find intuitive, and control the conversion from a **Web-friendly JSON** data structure (and vice-versa) with `preprocess` and `postprocess`.
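As an illustration of these two conversions, here is a sketch of a hypothetical `Color` component whose frontend speaks web-friendly hex strings while its Python API uses native `(r, g, b)` tuples:

```python
class Color:
    # Hypothetical component, for illustration only.
    def preprocess(self, payload: str) -> tuple:
        # web-friendly "#ff8800" -> python-native (255, 136, 0)
        h = payload.lstrip("#")
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    def postprocess(self, value: tuple) -> str:
        # python-native (255, 136, 0) -> web-friendly "#ff8800"
        return "#" + "".join(f"{c:02x}" for c in value)
```

The same pattern scales up to components like `Audio` and `Image`, which convert between uploaded files and `numpy` arrays.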
98
+
99
+ ## The "Example Version" of a Component
100
+
101
+ Gradio apps support providing example inputs -- and these are very useful in helping users get started using your Gradio app.
102
+ In `gr.Interface`, you can provide examples using the `examples` keyword, and in `Blocks`, you can provide examples using the special `gr.Examples` component.
103
+
104
+ At the bottom of this screenshot, we show a miniature example image of a cheetah that, when clicked, will populate the same image in the input Image component:
105
+
106
+ ![img](https://user-images.githubusercontent.com/1778297/277548211-a3cb2133-2ffc-4cdf-9a83-3e8363b57ea6.png)
107
+
108
+
109
+ To enable the example view, you must have the following two files in the top of the `frontend` directory:
110
+
111
+ * `Example.svelte`: this corresponds to the "example version" of your component
112
+ * `Index.svelte`: this corresponds to the "regular version"
113
+
114
+ In the backend, you typically don't need to do anything. The user-provided example `value` is processed using the same `.postprocess()` method described earlier. If you'd like to process the data differently (for example, if the `.postprocess()` method is computationally expensive), then you can write your own `.process_example()` method for your custom component, which will be used instead.
115
+
116
+ The `Example.svelte` file and `process_example()` method will be covered in greater depth in the dedicated [frontend](./frontend) and [backend](./backend) guides respectively.
117
+
118
+ ### What you need to remember
119
+
120
+ * If you expect your component to be used as input, it is important to define an "Example" view.
121
+ * If you don't, Gradio will use a default one but it won't be as informative as it can be!
122
+
123
+ ## Conclusion
124
+
125
+ Now that you know the most important pieces to remember about Gradio components, you can start to design and build your own!
6 Custom Components/03_configuration.md ADDED
@@ -0,0 +1,101 @@
1
+
2
+ # Configuring Your Custom Component
3
+
4
+ The custom components workflow focuses on [convention over configuration](https://en.wikipedia.org/wiki/Convention_over_configuration) to reduce the number of decisions you as a developer need to make when developing your custom component.
5
+ That being said, you can still configure some aspects of the custom component package and directory.
6
+ This guide will cover how.
7
+
8
+ ## The Package Name
9
+
10
+ By default, all custom component packages are called `gradio_<component-name>` where `component-name` is the name of the component's python class in lowercase.
11
+
12
+ As an example, let's walk through changing the name of a component from `gradio_mytextbox` to `supertextbox`.
13
+
14
+ 1. Modify the `name` in the `pyproject.toml` file.
15
+
16
+ ```toml
17
+ [project]
18
+ name = "supertextbox"
19
+ ```
20
+
21
+ 2. Change all occurrences of `gradio_<component-name>` in `pyproject.toml` to `<component-name>`
22
+
23
+ ```toml
24
+ [tool.hatch.build]
25
+ artifacts = ["/backend/supertextbox/templates", "*.pyi"]
26
+
27
+ [tool.hatch.build.targets.wheel]
28
+ packages = ["/backend/supertextbox"]
29
+ ```
30
+
31
+ 3. Rename the `gradio_<component-name>` directory in `backend/` to `<component-name>`
32
+
33
+ ```bash
34
+ mv backend/gradio_mytextbox backend/supertextbox
35
+ ```
36
+
37
+
38
+ Tip: Remember to change the import statement in `demo/app.py`!
39
+
40
+ ## Top Level Python Exports
41
+
42
+ By default, only the custom component python class is a top level export.
43
+ This means that when users type `from gradio_<component-name> import ...`, the only class that will be available is the custom component class.
44
+ To add more classes as top level exports, modify the `__all__` property in `__init__.py`
45
+
46
+ ```python
47
+ from .mytextbox import MyTextbox
48
+ from .mytextbox import AdditionalClass, additional_function
49
+
50
+ __all__ = ['MyTextbox', 'AdditionalClass', 'additional_function']
51
+ ```
52
+
53
+ ## Python Dependencies
54
+
55
+ You can add python dependencies by modifying the `dependencies` key in `pyproject.toml`
56
+
57
+ ```toml
58
+ dependencies = ["gradio", "numpy", "pillow"]
59
+ ```
60
+
61
+
62
+ Tip: Remember to run `gradio cc install` when you add dependencies!
63
+
64
+ ## Javascript Dependencies
65
+
66
+ You can add JavaScript dependencies by modifying the `"dependencies"` key in `frontend/package.json`
67
+
68
+ ```json
69
+ "dependencies": {
70
+ "@gradio/atoms": "0.2.0-beta.4",
71
+ "@gradio/statustracker": "0.3.0-beta.6",
72
+ "@gradio/utils": "0.2.0-beta.4",
73
+ "your-npm-package": "<version>"
74
+ }
75
+ ```
76
+
77
+ ## Directory Structure
78
+
79
+ By default, the CLI will place the Python code in `backend` and the JavaScript code in `frontend`.
80
+ It is not recommended to change this structure since it makes it easy for a potential contributor to look at your source code and know where everything is.
81
+ However, if you did want to, this is what you would have to do:
82
+
83
+ 1. Place the Python code in the subdirectory of your choosing. Remember to modify the `[tool.hatch.build]` and `[tool.hatch.build.targets.wheel]` sections in the `pyproject.toml` to match!
84
+
85
+ 2. Place the JavaScript code in the subdirectory of your choosing.
86
+
87
+ 3. Add the `FRONTEND_DIR` property on the component python class. It must be the relative path from the file where the class is defined to the location of the JavaScript directory.
88
+
89
+ ```python
90
+ class SuperTextbox(Component):
91
+ FRONTEND_DIR = "../../frontend/"
92
+ ```
93
+
94
+ The JavaScript and Python directories must be under the same common directory!
95
+
96
+ ## Conclusion
97
+
98
+
99
+ Sticking to the defaults will make it easy for others to understand and contribute to your custom component.
100
+ After all, the beauty of open source is that anyone can help improve your code!
101
+ But if you ever need to deviate from the defaults, you know how!
6 Custom Components/04_backend.md ADDED
@@ -0,0 +1,228 @@
1
+
2
+ # The Backend 🐍
3
+
4
+ This guide will cover everything you need to know to implement your custom component's backend processing.
5
+
6
+ ## Which Class to Inherit From
7
+
8
+ All components inherit from one of three classes `Component`, `FormComponent`, or `BlockContext`.
9
+ You need to inherit from one so that your component behaves like all other gradio components.
10
+ When you start from a template with `gradio cc create --template`, you don't need to worry about which one to choose since the template uses the correct one.
11
+ For completeness, and in the event that you need to make your own component from scratch, we explain what each class is for.
12
+
13
+ * `FormComponent`: Use this when you want your component to be grouped together in the same `Form` layout with other `FormComponents`. The `Slider`, `Textbox`, and `Number` components are all `FormComponents`.
14
+ * `BlockContext`: Use this when you want to place other components "inside" your component. This enables `with MyComponent() as component:` syntax.
15
+ * `Component`: Use this for all other cases.
16
+
17
+ Tip: If your component supports streaming output, inherit from the `StreamingOutput` class.
18
+
19
+ Tip: If you inherit from `BlockContext`, you also need to set the metaclass to be `ComponentMeta`. See example below.
20
+
21
+ ```python
22
+ from gradio.blocks import BlockContext
23
+ from gradio.component_meta import ComponentMeta
24
+ from gradio_client.documentation import document
25
+
26
+
27
+
28
+ @document()
29
+ class Row(BlockContext, metaclass=ComponentMeta):
30
+ pass
31
+ ```
32
+
33
+ ## The methods you need to implement
34
+
35
+ When you inherit from any of these classes, the following methods must be implemented.
36
+ Otherwise the Python interpreter will raise an error when you instantiate your component!
37
+
38
+ ### `preprocess` and `postprocess`
39
+
40
+ Explained in the [Key Concepts](./key-component-concepts#the-value-and-how-it-is-preprocessed-postprocessed) guide.
41
+ They handle the conversion from the data sent by the frontend to the format expected by the python function.
42
+
43
+ ```python
44
+ def preprocess(self, x: Any) -> Any:
45
+ """
46
+ Convert from the web-friendly (typically JSON) value in the frontend to the format expected by the python function.
47
+ """
48
+ return x
49
+
50
+ def postprocess(self, y):
51
+ """
52
+ Convert from the data returned by the python function to the web-friendly (typically JSON) value expected by the frontend.
53
+ """
54
+ return y
55
+ ```
56
+
57
+ ### `process_example`
58
+
59
+ Takes in the original Python value and returns the modified value that should be displayed in the examples preview in the app.
60
+ If not provided, the `.postprocess()` method is used instead. Let's look at the following example from the `SimpleDropdown` component.
61
+
62
+ ```python
63
+ def process_example(self, input_data):
64
+ return next((c[0] for c in self.choices if c[1] == input_data), None)
65
+ ```
66
+
67
+ Since `self.choices` is a list of tuples corresponding to (`display_name`, `value`), this converts the value that a user provides to the display value (or, if the value is not present in `self.choices`, it is converted to `None`).
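To see the lookup in isolation, here is the same logic run against a hypothetical `choices` list:

```python
# Hypothetical choices: (display_name, value) tuples
choices = [("Cat", "cat"), ("Dog", "dog")]

def process_example(input_data):
    # Return the display name for the given value, or None if absent
    return next((c[0] for c in choices if c[1] == input_data), None)
```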
68
+
69
+
70
+ ### `api_info`
71
+
72
+ A JSON-schema representation of the value that the `preprocess` expects.
73
+ This powers api usage via the gradio clients.
74
+ You do **not** need to implement this yourself if your component specifies a `data_model`.
75
+ The `data_model` is explained in the following section.
76
+
77
+ ```python
78
+ def api_info(self) -> dict[str, list[str]]:
79
+ """
80
+ A JSON-schema representation of the value that the `preprocess` expects and the `postprocess` returns.
81
+ """
82
+ pass
83
+ ```
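As a rough sketch (the exact schema depends on your component's value), a hypothetical component whose value is a plain string might return something like:

```python
class MyTextComponent:
    # Hypothetical component whose value is a plain string
    def api_info(self) -> dict:
        # A JSON-schema description of the value `preprocess` expects
        return {"type": "string"}
```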
84
+
85
+ ### `example_payload`
86
+
87
+ An example payload for your component, e.g. something that can be passed into the `.preprocess()` method
88
+ of your component. The example input is displayed in the `View API` page of a Gradio app that uses your custom component.
89
+ Must be JSON-serializable. If your component expects a file, it is best to use a publicly accessible URL.
90
+
91
+ ```python
92
+ def example_payload(self) -> Any:
93
+ """
94
+ The example inputs for this component for API usage. Must be JSON-serializable.
95
+ """
96
+ pass
97
+ ```
98
+
99
+ ### `example_value`
100
+
101
+ An example value for your component, e.g. something that can be passed into the `.postprocess()` method
102
+ of your component. This is used as the example value in the default app that is created in custom component development.
103
+
104
+ ```python
105
+ def example_value(self) -> Any:
106
+ """
107
+ An example value for this component, used in the default development app. Must be JSON-serializable.
108
+ """
109
+ pass
110
+ ```
111
+
112
+ ### `flag`
113
+
114
+ Write the component's value to a format that can be stored in the `csv` or `json` file used for flagging.
115
+ You do **not** need to implement this yourself if your component specifies a `data_model`.
116
+ The `data_model` is explained in the following section.
117
+
118
+ ```python
119
+ def flag(self, x: Any | GradioDataModel, flag_dir: str | Path = "") -> str:
120
+ pass
121
+ ```
122
+
123
+ ### `read_from_flag`
124
+ Convert from the format stored in the `csv` or `json` file used for flagging to the component's python `value`.
125
+ You do **not** need to implement this yourself if your component specifies a `data_model`.
126
+ The `data_model` is explained in the following section.
127
+
128
+ ```python
129
+ def read_from_flag(
130
+ self,
131
+ x: Any,
132
+ ) -> GradioDataModel | Any:
133
+ """
134
+ Convert the data from the csv or jsonl file into the component state.
135
+ """
136
+ return x
137
+ ```
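For a component without a `data_model`, a matching `flag`/`read_from_flag` pair might simply round-trip the value through JSON. This is only a sketch, with the hypothetical `DictComponent` below standing in for your component class:

```python
import json

class DictComponent:
    # Hypothetical component whose python value is a dict
    def flag(self, x, flag_dir=""):
        # Store the value as a JSON string in the flagging file
        return json.dumps(x)

    def read_from_flag(self, x):
        # Invert `flag`: parse the JSON string back into the value
        return json.loads(x)
```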
138
+
139
+ ## The `data_model`
140
+
141
+ The `data_model` defines the expected data format in which your component's value will be stored in the frontend.
142
+ It specifies the data format your `preprocess` method expects and the format the `postprocess` method returns.
143
+ It is not necessary to define a `data_model` for your component but it greatly simplifies the process of creating a custom component.
144
+ If you define a `data_model`, you only need to implement four methods - `preprocess`, `postprocess`, `example_payload`, and `example_value`!
145
+
146
+ You define a `data_model` by defining a [pydantic model](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage) that inherits from either `GradioModel` or `GradioRootModel`.
147
+
148
+ This is best explained with an example. Let's look at the core `Video` component, which stores the video data as a JSON object with two keys `video` and `subtitles` which point to separate files.
149
+
150
+ ```python
151
+ from gradio.data_classes import FileData, GradioModel
152
+
153
+ class VideoData(GradioModel):
154
+ video: FileData
155
+ subtitles: Optional[FileData] = None
156
+
157
+ class Video(Component):
158
+ data_model = VideoData
159
+ ```
160
+
161
+ By adding these four lines of code, your component automatically implements the methods needed for API usage, the flagging methods, and example caching methods!
162
+ It also has the added benefit of self-documenting your code.
163
+ Anyone who reads your component code will know exactly the data it expects.
164
+
165
+ Tip: If your component expects files to be uploaded from the frontend, you must use the `FileData` model! It will be explained in the following section.
166
+
167
+ Tip: Read the pydantic docs [here](https://docs.pydantic.dev/latest/concepts/models/#basic-model-usage).
168
+
169
+ The difference between a `GradioModel` and a `GradioRootModel` is that the `RootModel` will not serialize the data to a dictionary.
170
+ For example, the `Names` model will serialize the data to `{'names': ['freddy', 'pete']}` whereas the `NamesRoot` model will serialize it to `['freddy', 'pete']`.
171
+
172
+ ```python
173
+ from typing import List
174
+
175
+ class Names(GradioModel):
176
+ names: List[str]
177
+
178
+ class NamesRoot(GradioRootModel):
179
+ root: List[str]
180
+ ```
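
You can see this serialization difference directly with pydantic, which the Gradio base classes build on. In this sketch, plain pydantic v2 `BaseModel` and `RootModel` classes stand in for `GradioModel` and `GradioRootModel`:

```python
from typing import List
from pydantic import BaseModel, RootModel

# Plain pydantic stand-ins for GradioModel / GradioRootModel.
class Names(BaseModel):
    names: List[str]

class NamesRoot(RootModel):
    root: List[str]

# The BaseModel serializes to a dictionary keyed by field name,
# while the RootModel serializes to the bare root value.
print(Names(names=["freddy", "pete"]).model_dump())    # {'names': ['freddy', 'pete']}
print(NamesRoot(root=["freddy", "pete"]).model_dump())  # ['freddy', 'pete']
```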
181
+
182
+ Even if your component does not expect a "complex" JSON data structure it can be beneficial to define a `GradioRootModel` so that you don't have to worry about implementing the API and flagging methods.
183
+
184
+ Tip: Use classes from the Python typing library to type your models, e.g. `List` instead of `list`.
185
+
186
+ ## Handling Files
187
+
188
+ If your component expects uploaded files as input, or returns saved files to the frontend, you **MUST** use `FileData` to type the files in your `data_model`.
189
+
190
+ When you use `FileData`:
191
+
192
+ * Gradio knows that it should allow serving this file to the frontend. Gradio automatically blocks requests to serve arbitrary files on the machine running the server.
193
+
194
+ * Gradio will automatically place the file in a cache so that duplicate copies of the file don't get saved.
195
+
196
+ * The client libraries will automatically know that they should upload input files prior to sending the request. They will also automatically download files.
197
+
198
+ If you do not use `FileData`, your component will not work as expected!
199
+
200
+
201
+ ## Adding Event Triggers To Your Component
202
+
203
+ The event triggers for your component are defined in the `EVENTS` class attribute.
204
+ This is a list that contains the string names of the events.
205
+ Adding an event to this list will automatically add a method with that same name to your component!
206
+
207
+ You can import the `Events` enum from `gradio.events` to access commonly used events in the core gradio components.
208
+
209
+ For example, the following code will define `text_submit`, `file_upload` and `change` methods in the `MyComponent` class.
210
+
211
+ ```python
212
+ from gradio.events import Events
213
+ from gradio.components import FormComponent
214
+
215
+ class MyComponent(FormComponent):
216
+
217
+ EVENTS = [
218
+ "text_submit",
219
+ "file_upload",
220
+ Events.change
221
+ ]
222
+ ```
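
The sketch below imitates the "methods appear automatically" behavior with a plain metaclass, so you can see the effect in isolation. It is an illustration of the mechanism only, not Gradio's actual implementation:

```python
# Illustrative only: mimic how a class attribute listing event names
# can be turned into listener methods at class-creation time.
class EventedMeta(type):
    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        for event in namespace.get("EVENTS", []):
            # `_event=str(event)` captures the current name (avoids late binding).
            def listener(self, fn, _event=str(event)):
                # Register `fn` as a callback for this event.
                self._listeners.setdefault(_event, []).append(fn)
                return fn
            setattr(cls, str(event), listener)
        return cls

class MyComponent(metaclass=EventedMeta):
    EVENTS = ["text_submit", "file_upload", "change"]

    def __init__(self):
        self._listeners = {}

comp = MyComponent()
comp.change(lambda: None)  # the `change` method was created automatically
```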
223
+
224
+
225
+ Tip: Don't forget to also handle these events in the JavaScript code!
226
+
227
+ ## Conclusion
228
+
6 Custom Components/05_frontend.md ADDED
@@ -0,0 +1,370 @@
1
+
2
+ # The Frontend 🌐⭐️
3
+
4
+ This guide will cover everything you need to know to implement your custom component's frontend.
5
+
6
+ Tip: Gradio components use Svelte. Writing Svelte is fun! If you're not familiar with it, we recommend checking out their interactive [guide](https://learn.svelte.dev/tutorial/welcome-to-svelte).
7
+
8
+ ## The directory structure
9
+
10
+ The frontend code should have, at minimum, three files:
11
+
12
+ * `Index.svelte`: This is the main export and where your component's layout and logic should live.
13
+ * `Example.svelte`: This is where the example view of the component is defined.
+ * `package.json`: This file declares your frontend dependencies and the modules your component exports.
14
+
15
+ Feel free to add additional files and subdirectories.
16
+ If you want to export any additional modules, remember to modify the `package.json` file:
17
+
18
+ ```json
19
+ "exports": {
20
+ ".": "./Index.svelte",
21
+ "./example": "./Example.svelte",
22
+ "./package.json": "./package.json"
23
+ },
24
+ ```
25
+
26
+ ## The Index.svelte file
27
+
28
+ Your component should expose the following props that will be passed down from the parent Gradio application.
29
+
30
+ ```typescript
31
+ import type { LoadingStatus } from "@gradio/statustracker";
32
+ import type { Gradio } from "@gradio/utils";
33
+
34
+ export let gradio: Gradio<{
35
+ event_1: never;
36
+ event_2: never;
37
+ }>;
38
+
39
+ export let elem_id = "";
40
+ export let elem_classes: string[] = [];
41
+ export let scale: number | null = null;
42
+ export let min_width: number | undefined = undefined;
43
+ export let loading_status: LoadingStatus | undefined = undefined;
44
+ export let mode: "static" | "interactive";
45
+ ```
46
+
47
+ * `elem_id` and `elem_classes` allow Gradio app developers to target your component with custom CSS and JavaScript from the Python `Blocks` class.
48
+
49
+ * `scale` and `min_width` allow Gradio app developers to control how much space your component takes up in the UI.
50
+
51
+ * `loading_status` is used to display a loading status over the component when it is the output of an event.
52
+
53
+ * `mode` is how the parent Gradio app tells your component whether the `interactive` or `static` version should be displayed.
54
+
55
+ * `gradio`: The `gradio` object is created by the parent Gradio app. It stores some application-level configuration that will be useful in your component, like internationalization. You must use it to dispatch events from your component.
56
+
57
+ A minimal `Index.svelte` file would look like:
58
+
59
+ ```svelte
60
+ <script lang="ts">
61
+ import type { LoadingStatus } from "@gradio/statustracker";
62
+ import { Block } from "@gradio/atoms";
63
+ import { StatusTracker } from "@gradio/statustracker";
64
+ import type { Gradio } from "@gradio/utils";
65
+
66
+ export let gradio: Gradio<{
67
+ event_1: never;
68
+ event_2: never;
69
+ }>;
70
+
71
+ export let value = "";
72
+ export let elem_id = "";
73
+ export let elem_classes: string[] = [];
74
+ export let scale: number | null = null;
75
+ export let min_width: number | undefined = undefined;
76
+ export let loading_status: LoadingStatus | undefined = undefined;
77
+ export let mode: "static" | "interactive";
78
+ </script>
79
+
80
+ <Block
81
+ visible={true}
82
+ {elem_id}
83
+ {elem_classes}
84
+ {scale}
85
+ {min_width}
86
+ allow_overflow={false}
87
+ padding={true}
88
+ >
89
+ {#if loading_status}
90
+ <StatusTracker
91
+ autoscroll={gradio.autoscroll}
92
+ i18n={gradio.i18n}
93
+ {...loading_status}
94
+ />
95
+ {/if}
96
+ <p>{value}</p>
97
+ </Block>
98
+ ```
99
+
100
+ ## The Example.svelte file
101
+
102
+ The `Example.svelte` file should expose the following props:
103
+
104
+ ```typescript
105
+ export let value: string;
106
+ export let type: "gallery" | "table";
107
+ export let selected = false;
108
+ export let index: number;
109
+ ```
110
+
111
+ * `value`: The example value that should be displayed.
112
+
113
+ * `type`: This is a variable that can be either `"gallery"` or `"table"` depending on how the examples are displayed. The `"gallery"` form is used when the examples correspond to a single input component, while the `"table"` form is used when a user has multiple input components, and the examples need to populate all of them.
114
+
115
+ * `selected`: Whether this example is currently selected. You can use this variable to adjust how the example is displayed when a user selects it.
116
+
117
+ * `index`: The current index of the selected value.
118
+
119
+ * Any additional props your "non-example" component takes!
120
+
121
+ This is the `Example.svelte` file for the code `Radio` component:
122
+
123
+ ```svelte
124
+ <script lang="ts">
125
+ export let value: string;
126
+ export let type: "gallery" | "table";
127
+ export let selected = false;
128
+ </script>
129
+
130
+ <div
131
+ class:table={type === "table"}
132
+ class:gallery={type === "gallery"}
133
+ class:selected
134
+ >
135
+ {value}
136
+ </div>
137
+
138
+ <style>
139
+ .gallery {
140
+ padding: var(--size-1) var(--size-2);
141
+ }
142
+ </style>
143
+ ```
144
+
145
+ ## Handling Files
146
+
147
+ If your component deals with files, these files **should** be uploaded to the backend server.
148
+ The `@gradio/client` npm package provides the `upload` and `prepare_files` utility functions to help you do this.
149
+
150
+ The `prepare_files` function will convert the browser's `File` datatype to gradio's internal `FileData` type.
151
+ You should use the `FileData` type in your component to keep track of uploaded files.
152
+
153
+ The `upload` function will upload an array of `FileData` values to the server.
154
+
155
+ Here's an example of loading files from an `<input>` element when its value changes.
156
+
157
+
158
+ ```svelte
159
+ <script lang="ts">
160
+ import { tick } from "svelte";
+ import { upload, prepare_files, type FileData } from "@gradio/client";
161
+ export let root;
162
+ export let value;
163
+ export let file_count: "single" | "multiple" = "multiple";
+ let uploaded_files: FileData[];
164
+
165
+ async function handle_upload(file_data: FileData[]): Promise<void> {
166
+ await tick();
167
+ uploaded_files = await upload(file_data, root);
168
+ }
169
+
170
+ async function loadFiles(files: FileList): Promise<void> {
171
+ let _files: File[] = Array.from(files);
172
+ if (!files.length) {
173
+ return;
174
+ }
175
+ if (file_count === "single") {
176
+ _files = [files[0]];
177
+ }
178
+ let file_data = await prepare_files(_files);
179
+ await handle_upload(file_data);
180
+ }
181
+
182
+ async function loadFilesFromUpload(e: Event): Promise<void> {
183
+ const target = e.target as HTMLInputElement;
184
+
185
+ if (!target.files) return;
186
+ await loadFiles(target.files);
187
+ }
188
+ </script>
189
+
190
+ <input
191
+ type="file"
192
+ on:change={loadFilesFromUpload}
193
+ multiple={true}
194
+ />
195
+ ```
196
+
197
+ The component exposes a prop named `root`.
198
+ This is passed down by the parent gradio app and it represents the base url that the files will be uploaded to and fetched from.
199
+
200
+ For WASM support, you should get the upload function from the Svelte context and pass it as the third parameter of the `upload` function.
201
+
202
+ ```svelte
203
+ <script lang="ts">
204
+ import { getContext } from "svelte";
205
+ const upload_fn = getContext<typeof upload_files>("upload_files");
206
+
207
+ async function handle_upload(file_data: FileData[]): Promise<void> {
208
+ await tick();
209
+ await upload(file_data, root, upload_fn);
210
+ }
211
+ </script>
212
+ ```
213
+
214
+ ## Leveraging Existing Gradio Components
215
+
216
+ Most of Gradio's frontend components are published on [npm](https://www.npmjs.com/), the javascript package repository.
217
+ This means that you can use them to save yourself time while incorporating common patterns in your component, like uploading files.
218
+ For example, the `@gradio/upload` package has `Upload` and `ModifyUpload` components for properly uploading files to the Gradio server.
219
+ Here is how you can use them to create a user interface to upload and display PDF files.
220
+
221
+ ```svelte
222
+ <script>
223
+ import { type FileData, Upload, ModifyUpload } from "@gradio/upload";
224
+ import { Empty, UploadText, BlockLabel } from "@gradio/atoms";
225
+ </script>
226
+
227
+ <BlockLabel Icon={File} label={label || "PDF"} />
228
+ {#if value === null && interactive}
229
+ <Upload
230
+ filetype="application/pdf"
231
+ on:load={handle_load}
232
+ {root}
233
+ >
234
+ <UploadText type="file" i18n={gradio.i18n} />
235
+ </Upload>
236
+ {:else if value !== null}
237
+ {#if interactive}
238
+ <ModifyUpload i18n={gradio.i18n} on:clear={handle_clear}/>
239
+ {/if}
240
+ <iframe title={value.orig_name || "PDF"} src={value.data} height="{height}px" width="100%"></iframe>
241
+ {:else}
242
+ <Empty size="large"> <File/> </Empty>
243
+ {/if}
244
+ ```
245
+
246
+ You can also combine existing Gradio components to create entirely unique experiences, like rendering a gallery of chatbot conversations.
248
+ The possibilities are endless; please read the documentation on our javascript packages [here](https://gradio.app/main/docs/js).
249
+ We'll be adding more packages and documentation over the coming weeks!
250
+
251
+ ## Matching Gradio Core's Design System
252
+
253
+ You can explore our component library via Storybook. You'll be able to interact with our components and see them in their various states.
254
+
255
+ For those interested in design customization, we provide the CSS variables consisting of our color palette, radii, spacing, and the icons we use - so you can easily match up your custom component with the style of our core components. This Storybook will be regularly updated with any new additions or changes.
256
+
257
+ [Storybook Link](https://gradio.app/main/docs/js/storybook)
258
+
259
+ ## Custom configuration
260
+
261
+ If you want to make use of the vast vite ecosystem, you can use the `gradio.config.js` file to configure your component's build process. This allows you to make use of tools like tailwindcss, mdsvex, and more.
262
+
263
+ Currently, it is possible to configure the following:
264
+
265
+ Vite options:
266
+ - `plugins`: A list of vite plugins to use.
267
+
268
+ Svelte options:
269
+ - `preprocess`: A list of svelte preprocessors to use.
270
+ - `extensions`: A list of file extensions to compile to `.svelte` files.
271
+ - `build.target`: The target to build for, this may be necessary to support newer javascript features. See the [esbuild docs](https://esbuild.github.io/api/#target) for more information.
272
+
273
+ The `gradio.config.js` file should be placed in the root of your component's `frontend` directory. A default config file is created for you when you create a new component. But you can also create your own config file, if one doesn't exist, and use it to customize your component's build process.
274
+
275
+ ### Example for a Vite plugin
276
+
277
+ Custom components can use Vite plugins to customize the build process. Check out the [Vite Docs](https://vitejs.dev/guide/using-plugins.html) for more information.
278
+
279
+ Here we configure [TailwindCSS](https://tailwindcss.com), a utility-first CSS framework. Setup is easiest using the version 4 prerelease.
280
+
281
+ ```
282
+ npm install tailwindcss@next @tailwindcss/vite@next
283
+ ```
284
+
285
+ In `gradio.config.js`:
286
+
287
+ ```typescript
288
+ import tailwindcss from "@tailwindcss/vite";
289
+ export default {
290
+ plugins: [tailwindcss()]
291
+ };
292
+ ```
293
+
294
+ Then create a `style.css` file with the following content:
295
+
296
+ ```css
297
+ @import "tailwindcss";
298
+ ```
299
+
300
+ Import this file into `Index.svelte`. Note that you need to import the CSS file containing the `@import`; you cannot just use `@import` inside a `<style>` tag.
301
+
302
+ ```svelte
303
+ <script lang="ts">
304
+ [...]
305
+ import "./style.css";
306
+ [...]
307
+ </script>
308
+ ```
309
+
310
+ ### Example for Svelte options
311
+
312
+ In `gradio.config.js` you can also specify some Svelte options to apply to the Svelte compilation. In this example we will add support for [`mdsvex`](https://mdsvex.pngwn.io), a Markdown preprocessor for Svelte.
313
+
314
+ In order to do this we will need to add a [Svelte Preprocessor](https://svelte.dev/docs/svelte-compiler#preprocess) to the `svelte` object in `gradio.config.js` and configure the [`extensions`](https://github.com/sveltejs/vite-plugin-svelte/blob/HEAD/docs/config.md#config-file) field. Other options are not currently supported.
315
+
316
+ First, install the `mdsvex` plugin:
317
+
318
+ ```bash
319
+ npm install mdsvex
320
+ ```
321
+
322
+ Then add the following to `gradio.config.js`:
323
+
324
+ ```typescript
325
+ import { mdsvex } from "mdsvex";
326
+
327
+ export default {
328
+ svelte: {
329
+ preprocess: [
330
+ mdsvex()
331
+ ],
332
+ extensions: [".svelte", ".svx"]
333
+ }
334
+ };
335
+ ```
336
+
337
+ Now we can create `mdsvex` documents in our component's `frontend` directory and they will be compiled to `.svelte` files.
338
+
339
+ ```md
340
+ <!-- HelloWorld.svx -->
341
+
342
+ <script lang="ts">
343
+ import { Block } from "@gradio/atoms";
344
+
345
+ export let title = "Hello World";
346
+ </script>
347
+
348
+ <Block label="Hello World">
349
+
350
+ # {title}
351
+
352
+ This is a markdown file.
353
+
354
+ </Block>
355
+ ```
356
+
357
+ We can then use the `HelloWorld.svx` file in our components:
358
+
359
+ ```svelte
360
+ <script lang="ts">
361
+ import HelloWorld from "./HelloWorld.svx";
362
+ </script>
363
+
364
+ <HelloWorld />
365
+ ```
366
+
367
+ ## Conclusion
368
+
369
+ You now know how to create delightful frontends for your components!
370
+
6 Custom Components/06_frequently-asked-questions.md ADDED
@@ -0,0 +1,75 @@
1
+
2
+ # Frequently Asked Questions
3
+
4
+ ## What do I need to install before using Custom Components?
5
+ Before using Custom Components, make sure you have Python 3.8+, Node.js v16.14+, npm 9+, and Gradio 4.0+ installed.
6
+
7
+ ## What templates can I use to create my custom component?
8
+ Run `gradio cc show` to see the list of built-in templates.
9
+ You can also start off from others' custom components!
10
+ Simply `git clone` their repository and make your modifications.
11
+
12
+ ## What is the development server?
13
+ When you run `gradio cc dev`, a development server will load and run a Gradio app of your choosing.
14
+ This is like running `python <app-file>.py`; however, the `gradio` command will hot-reload so you can instantly see your changes.
15
+
16
+ ## The development server didn't work for me
17
+
18
+ **1. Check your terminal and browser console**
19
+
20
+ Make sure there are no syntax errors or other obvious problems in your code. Exceptions triggered from python will be displayed in the terminal. Exceptions from javascript will be displayed in the browser console and/or the terminal.
21
+
22
+ **2. Are you developing on Windows?**
23
+
24
+ Chrome on Windows will block the locally compiled Svelte files for security reasons. We recommend developing your custom component in the Windows Subsystem for Linux (WSL) while the team looks at this issue.
25
+
26
+ **3. Inspect the window.__GRADIO_CC__ variable**
27
+
28
+ In the browser console, print the `window.__GRADIO_CC__` variable (just type it into the console). If it is an empty object, that means
29
+ that the CLI could not find your custom component source code. Typically, this happens when the custom component is installed in a different virtual environment than the one used to run the dev command. Please use the `--python-path` and `--gradio-path` CLI arguments to specify the path of the python and gradio executables for the environment your component is installed in. For example, if you are using a virtualenv located at `/Users/mary/venv`, pass in `/Users/mary/venv/bin/python` and `/Users/mary/venv/bin/gradio` respectively.
30
+
31
+ If the `window.__GRADIO_CC__` variable is not empty (see below for an example), then the dev server should be working correctly.
32
+
33
+ ![](https://gradio-builds.s3.amazonaws.com/demo-files/gradio_CC_DEV.png)
34
+
35
+ **4. Make sure you are using a virtual environment**
36
+ It is highly recommended you use a virtual environment to prevent conflicts with other python dependencies installed in your system.
37
+
38
+
39
+ ## Do I always need to start my component from scratch?
40
+ No! You can start off from an existing gradio component as a template; see the [five minute guide](./custom-components-in-five-minutes).
41
+ You can also start from an existing custom component if you'd like to tweak it further. Once you find the source code of a custom component you like, clone the code to your computer and run `gradio cc install`. Then you can run the development server to make changes. If you run into any issues, contact the author of the component by opening an issue in their repository. The [gallery](https://www.gradio.app/custom-components/gallery) is a good place to look for published components. For example, to start from the [PDF component](https://www.gradio.app/custom-components/gallery?id=freddyaboulton%2Fgradio_pdf), clone the space with `git clone https://huggingface.co/spaces/freddyaboulton/gradio_pdf`, `cd` into the `src` directory, and run `gradio cc install`.
42
+
43
+
44
+ ## Do I need to host my custom component on HuggingFace Spaces?
45
+ You can develop and build your custom component without hosting or connecting to HuggingFace.
46
+ If you would like to share your component with the gradio community, it is recommended to publish your package to PyPI and host a demo on HuggingFace so that anyone can install it or try it out.
47
+
48
+ ## What methods are mandatory for implementing a custom component in Gradio?
49
+
50
+ You must implement the `preprocess`, `postprocess`, `example_payload`, and `example_value` methods. If your component does not use a data model, you must also define the `api_info`, `flag`, and `read_from_flag` methods. Read more in the [backend guide](./backend).
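
As a plain-Python sketch (a real component inherits from Gradio's `Component` base class, omitted here so the example stands alone, and `SimpleText` is a hypothetical name), the four mandatory methods look like this:

```python
# Skeleton only -- a real custom component subclasses gradio's Component.
class SimpleText:
    def preprocess(self, payload):
        # Convert the frontend payload into the value your function receives.
        return str(payload)

    def postprocess(self, value):
        # Convert your function's return value into what the frontend expects.
        return str(value)

    def example_payload(self):
        # JSON-serializable example input for API usage.
        return "hello"

    def example_value(self):
        # Example value shown in the generated dev app.
        return "hello"
```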
51
+
52
+ ## What is the purpose of a `data_model` in Gradio custom components?
53
+
54
+ A `data_model` defines the expected data format for your component, simplifying the component development process and self-documenting your code. It streamlines API usage and example caching.
55
+
56
+ ## Why is it important to use `FileData` for components dealing with file uploads?
57
+
58
+ Utilizing `FileData` is crucial for components that expect file uploads. It ensures secure file handling, automatic caching, and streamlined client library functionality.
59
+
60
+ ## How can I add event triggers to my custom Gradio component?
61
+
62
+ You can define event triggers in the `EVENTS` class attribute by listing the desired event names, which automatically adds corresponding methods to your component.
63
+
64
+ ## Can I implement a custom Gradio component without defining a `data_model`?
65
+
66
+ Yes, it is possible to create custom components without a `data_model`, but you will have to manually implement the `api_info`, `flag`, and `read_from_flag` methods.
67
+
68
+ ## Are there sample custom components I can learn from?
69
+
70
+ We have prepared this [collection](https://huggingface.co/collections/gradio/custom-components-65497a761c5192d981710b12) of custom components on the HuggingFace Hub that you can use to get started!
71
+
72
+ ## How can I find custom components created by the Gradio community?
73
+
74
+ We're working on creating a gallery to make it really easy to discover new custom components.
75
+ In the meantime, you can search for HuggingFace Spaces that are tagged as a `gradio-custom-component` [here](https://huggingface.co/search/full-text?q=gradio-custom-component&type=space).
6 Custom Components/07_pdf-component-example.md ADDED
@@ -0,0 +1,687 @@
1
+
2
+ # Case Study: A Component to Display PDFs
3
+
4
+ Let's work through an example of building a custom gradio component for displaying PDF files.
5
+ This component will come in handy for showcasing [document question answering](https://huggingface.co/models?pipeline_tag=document-question-answering&sort=trending) models, which typically work on PDF input.
6
+ This is a sneak preview of what our finished component will look like:
7
+
8
+ ![demo](https://gradio-builds.s3.amazonaws.com/assets/PDFDisplay.png)
9
+
10
+ ## Step 0: Prerequisites
11
+ Make sure you have Gradio 4.0 (or later) installed, as well as Node.js v18+.
12
+ As of the time of publication, the latest release is 4.1.1.
13
+ Also, please read the [Five Minute Tour](./custom-components-in-five-minutes) of custom components and the [Key Concepts](./key-component-concepts) guide before starting.
14
+
15
+
16
+ ## Step 1: Creating the custom component
17
+
18
+ Navigate to a directory of your choosing and run the following command:
19
+
20
+ ```bash
21
+ gradio cc create PDF
22
+ ```
23
+
24
+
25
+ Tip: You should change the name of the component.
26
+ Some of the screenshots assume the component is called `PDF` but the concepts are the same!
27
+
28
+ This will create a subdirectory called `pdf` in your current working directory.
29
+ There are three main subdirectories in `pdf`: `frontend`, `backend`, and `demo`.
30
+ If you open `pdf` in your code editor, it will look like this:
31
+
32
+ ![directory structure](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/CodeStructure.png)
33
+
34
+ Tip: For this demo we are not templating off an existing gradio component. But you can see the list of available templates with `gradio cc show` and then pass the template name to the `--template` option, e.g. `gradio cc create <Name> --template <foo>`
35
+
36
+ ## Step 2: Frontend - modify javascript dependencies
37
+
38
+ We're going to use the [pdfjs](https://mozilla.github.io/pdf.js/) javascript library to display the pdfs in the frontend.
39
+ Let's start off by adding it to our frontend project's dependencies, as well as adding a couple of other projects we'll need.
40
+
41
+ From within the `frontend` directory, run `npm install @gradio/client @gradio/upload @gradio/icons @gradio/button` and `npm install --save-dev pdfjs-dist@3.11.174`.
42
+ Also, let's uninstall the `@zerodevx/svelte-json-view` dependency by running `npm uninstall @zerodevx/svelte-json-view`.
43
+
44
+ The complete `package.json` should look like this:
45
+
46
+ ```json
47
+ {
48
+ "name": "gradio_pdf",
49
+ "version": "0.2.0",
50
+ "description": "Gradio component for displaying PDFs",
51
+ "type": "module",
52
+ "author": "",
53
+ "license": "ISC",
54
+ "private": false,
55
+ "main_changeset": true,
56
+ "exports": {
57
+ ".": "./Index.svelte",
58
+ "./example": "./Example.svelte",
59
+ "./package.json": "./package.json"
60
+ },
61
+ "devDependencies": {
62
+ "pdfjs-dist": "3.11.174"
63
+ },
64
+ "dependencies": {
65
+ "@gradio/atoms": "0.2.0",
66
+ "@gradio/statustracker": "0.3.0",
67
+ "@gradio/utils": "0.2.0",
68
+ "@gradio/client": "0.7.1",
69
+ "@gradio/upload": "0.3.2",
70
+ "@gradio/icons": "0.2.0",
71
+ "@gradio/button": "0.2.3",
72
+ "pdfjs-dist": "3.11.174"
73
+ }
74
+ }
75
+ ```
76
+
77
+
78
+ Tip: Running `npm install` will install the latest version of the package available. You can install a specific version with `npm install package@<version>`. You can find all of the gradio javascript package documentation [here](https://www.gradio.app/main/docs/js). It is recommended you use the same versions as in this guide, as the API can change.
79
+
80
+ Navigate to `Index.svelte` and delete all mentions of `JsonView`:
81
+
82
+ ```ts
83
+ import { JsonView } from "@zerodevx/svelte-json-view";
84
+ ```
85
+
86
+ ```svelte
87
+ <JsonView json={value} />
88
+ ```
89
+
90
+ ## Step 3: Frontend - Launching the Dev Server
91
+
92
+ Run the `gradio cc dev` command to launch the development server.
93
+ This will open the demo in `demo/app.py` in an environment where changes to the `frontend` and `backend` directories will reflect instantaneously in the launched app.
94
+
95
+ After launching the dev server, you should see a link printed to your console that says `Frontend Server (Go here): ... `.
96
+
97
+ ![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/dev_server_terminal.png)
98
+
99
+ You should see the following:
100
+
101
+ ![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/frontend_start.png)
102
+
103
+
104
+ It's not impressive yet, but we're ready to start coding!
105
+
106
+ ## Step 4: Frontend - The basic skeleton
107
+
108
+ We're going to start off by first writing the skeleton of our frontend and then adding the pdf rendering logic.
109
+ Add the following imports and expose the following properties to the top of your file in the `<script>` tag.
110
+ You may get some warnings from your code editor that some props are not used.
111
+ That's ok.
112
+
113
+ ```ts
114
+ import { tick } from "svelte";
115
+ import type { Gradio } from "@gradio/utils";
116
+ import { Block, BlockLabel } from "@gradio/atoms";
117
+ import { File } from "@gradio/icons";
118
+ import { StatusTracker } from "@gradio/statustracker";
119
+ import type { LoadingStatus } from "@gradio/statustracker";
120
+ import type { FileData } from "@gradio/client";
121
+ import { Upload, ModifyUpload } from "@gradio/upload";
122
+
123
+ export let elem_id = "";
124
+ export let elem_classes: string[] = [];
125
+ export let visible = true;
126
+ export let value: FileData | null = null;
127
+ export let container = true;
128
+ export let scale: number | null = null;
129
+ export let root: string;
130
+ export let height: number | null = 500;
131
+ export let label: string;
132
+ export let proxy_url: string;
133
+ export let min_width: number | undefined = undefined;
134
+ export let loading_status: LoadingStatus;
135
+ export let gradio: Gradio<{
136
+ change: never;
137
+ upload: never;
138
+ }>;
139
+
140
+ let _value = value;
141
+ let old_value = _value;
142
+ ```
143
+
144
+
145
+ Tip: The `gradio` object passed in here contains some metadata about the application as well as some utility methods. One of these utilities is a dispatch method. We want to dispatch change and upload events whenever our PDF is changed or updated. This line provides type hints that these are the only events we will be dispatching.
146
+
147
+ We want our frontend component to let users upload a PDF document if there isn't one already loaded.
148
+ If it is loaded, we want to display it underneath a "clear" button that lets our users upload a new document.
149
+ We're going to use the `Upload` and `ModifyUpload` components that come with the `@gradio/upload` package to do this.
150
+ Underneath the `</script>` tag, delete all the current code and add the following:
151
+
152
+ ```svelte
153
+ <Block {visible} {elem_id} {elem_classes} {container} {scale} {min_width}>
154
+ {#if loading_status}
155
+ <StatusTracker
156
+ autoscroll={gradio.autoscroll}
157
+ i18n={gradio.i18n}
158
+ {...loading_status}
159
+ />
160
+ {/if}
161
+ <BlockLabel
162
+ show_label={label !== null}
163
+ Icon={File}
164
+ float={value === null}
165
+ label={label || "File"}
166
+ />
167
+ {#if _value}
168
+ <ModifyUpload i18n={gradio.i18n} absolute />
169
+ {:else}
170
+ <Upload
171
+ filetype={"application/pdf"}
172
+ file_count="single"
173
+ {root}
174
+ >
175
+ Upload your PDF
176
+ </Upload>
177
+ {/if}
178
+ </Block>
179
+ ```
180
+
181
+ You should see the following when you navigate to your app after saving your current changes:
182
+
183
+ ![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/frontend_1.png)
184
+
185
+ ## Step 5: Frontend - Nicer Upload Text
186
+
187
+ The `Upload your PDF` text looks a bit small and barebones.
188
+ Let's customize it!
189
+
190
+ Create a new file called `PdfUploadText.svelte` and copy the following code.
191
+ It creates a new div to display our "upload text" with some custom styling.
192
+
193
+ Tip: Notice that we're leveraging Gradio core's existing css variables here: `var(--size-60)` and `var(--body-text-color-subdued)`. This allows our component to work nicely in light mode and dark mode, as well as with Gradio's built-in themes.
194
+
195
+
196
+ ```svelte
197
+ <script lang="ts">
198
+ import { Upload as UploadIcon } from "@gradio/icons";
199
+ export let hovered = false;
200
+
201
+ </script>
202
+
203
+ <div class="wrap">
204
+ <span class="icon-wrap" class:hovered><UploadIcon /> </span>
205
+ Drop PDF
206
+ <span class="or">- or -</span>
207
+ Click to Upload
208
+ </div>
209
+
210
+ <style>
211
+ .wrap {
212
+ display: flex;
213
+ flex-direction: column;
214
+ justify-content: center;
215
+ align-items: center;
216
+ min-height: var(--size-60);
217
+ color: var(--block-label-text-color);
218
+ line-height: var(--line-md);
219
+ height: 100%;
220
+ padding-top: var(--size-3);
221
+ }
222
+
223
+ .or {
224
+ color: var(--body-text-color-subdued);
225
+ display: flex;
226
+ }
227
+
228
+ .icon-wrap {
229
+ width: 30px;
230
+ margin-bottom: var(--spacing-lg);
231
+ }
232
+
233
+ @media (--screen-md) {
234
+ .wrap {
235
+ font-size: var(--text-lg);
236
+ }
237
+ }
238
+
239
+ .hovered {
240
+ color: var(--color-accent);
241
+ }
242
+ </style>
243
+ ```
244
+
245
+ Now import `PdfUploadText.svelte` in your `<script>` and pass it to the `Upload` component!
246
+
247
+ ```svelte
248
+ import PdfUploadText from "./PdfUploadText.svelte";
249
+
250
+ ...
251
+
252
+ <Upload
253
+ filetype={"application/pdf"}
254
+ file_count="single"
255
+ {root}
256
+ >
257
+ <PdfUploadText />
258
+ </Upload>
259
+ ```
260
+
261
+ After saving your code, the frontend should now look like this:
262
+
263
+ ![](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/better_upload.png)
264
+
265
+ ## Step 6: PDF Rendering logic
266
+
267
+ This is the most advanced javascript part.
268
+ It took me a while to figure it out!
269
+ Do not worry if you have trouble; the important thing is to not be discouraged 💪
270
+ Ask for help in the gradio [discord](https://discord.gg/hugging-face-879548962464493619) if you need it.
271
+
272
+ With that out of the way, let's start off by importing `pdfjs` and loading the pdf worker code from a CDN.
273
+
274
+ ```ts
275
+ import pdfjsLib from "pdfjs-dist";
276
+ ...
277
+ pdfjsLib.GlobalWorkerOptions.workerSrc = "https://cdn.bootcss.com/pdf.js/3.11.174/pdf.worker.js";
278
+ ```
279
+
280
+ Also create the following variables:
281
+
282
+ ```ts
283
+ let pdfDoc;
284
+ let numPages = 1;
285
+ let currentPage = 1;
286
+ let canvasRef;
287
+ ```
288
+
289
+ Now, we will use `pdfjs` to render a given page of the PDF onto an HTML `canvas` element.
290
+ Add the following code to `Index.svelte`:
291
+
292
+ ```ts
293
+ async function get_doc(value: FileData) {
294
+ const loadingTask = pdfjsLib.getDocument(value.url);
295
+ pdfDoc = await loadingTask.promise;
296
+ numPages = pdfDoc.numPages;
297
+ render_page();
298
+ }
299
+
300
+ function render_page() {
301
+ // Render a specific page of the PDF onto the canvas
302
+ pdfDoc.getPage(currentPage).then(page => {
303
+ const ctx = canvasRef.getContext('2d')
304
+ ctx.clearRect(0, 0, canvasRef.width, canvasRef.height);
305
+ let viewport = page.getViewport({ scale: 1 });
306
+ let scale = height / viewport.height;
307
+ viewport = page.getViewport({ scale: scale });
308
+
309
+ const renderContext = {
310
+ canvasContext: ctx,
311
+ viewport,
312
+ };
313
+ canvasRef.width = viewport.width;
314
+ canvasRef.height = viewport.height;
315
+ page.render(renderContext);
316
+ });
317
+ }
318
+
319
+ // If the value changes, render the PDF of the currentPage
320
+ $: if(JSON.stringify(old_value) != JSON.stringify(_value)) {
321
+ if (_value){
322
+ get_doc(_value);
323
+ }
324
+ old_value = _value;
325
+ gradio.dispatch("change");
326
+ }
327
+ ```
328
+
329
+
330
+ Tip: The `$:` syntax in svelte is how you declare statements to be reactive. Whenever any of the inputs of the statement change, svelte will automatically re-run that statement.
331
+
332
+ Now place the `canvas` underneath the `ModifyUpload` component:
333
+
334
+ ```svelte
335
+ <div class="pdf-canvas" style="height: {height}px">
336
+ <canvas bind:this={canvasRef}></canvas>
337
+ </div>
338
+ ```
339
+
340
+ And add the following styles to the `<style>` tag:
341
+
342
+ ```svelte
343
+ <style>
344
+ .pdf-canvas {
345
+ display: flex;
346
+ justify-content: center;
347
+ align-items: center;
348
+ }
349
+ </style>
350
+ ```
351
+
352
+ ## Step 7: Handling The File Upload And Clear
353
+
354
+ Now for the fun part - actually rendering the PDF when the file is uploaded!
355
+ Add the following functions to the `<script>` tag:
356
+
357
+ ```ts
358
+ async function handle_clear() {
359
+ _value = null;
360
+ await tick();
361
+ gradio.dispatch("change");
362
+ }
363
+
364
+ async function handle_upload({detail}: CustomEvent<FileData>): Promise<void> {
365
+ value = detail;
366
+ await tick();
367
+ gradio.dispatch("change");
368
+ gradio.dispatch("upload");
369
+ }
370
+ ```
371
+
372
+
373
+ Tip: The `gradio.dispatch` method is what actually triggers the `change` or `upload` events in the backend. For every event defined in the component's backend (we will explain how to define them in Step 9), there must be at least one `gradio.dispatch("<event-name>")` call. These are called `gradio` events and they can be listened to from the entire Gradio application. You can also dispatch a built-in `svelte` event with the `dispatch` function; those events can only be listened to from the component's direct parent. Learn about svelte events from the [official documentation](https://learn.svelte.dev/tutorial/component-events).
374
+
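To make the event flow described in the tip above concrete, here is a small, framework-agnostic Python sketch of the dispatch pattern. This is illustrative only: `EventDispatcher` and its methods are made up for this example and are not part of Gradio.

```python
# A minimal sketch of the dispatch pattern: the frontend calls
# dispatch("<event-name>") and every listener registered for that
# event name is invoked.
class EventDispatcher:
    def __init__(self, allowed_events):
        # only events declared up front (like our "change" and "upload") are valid
        self.allowed_events = set(allowed_events)
        self.listeners = {name: [] for name in allowed_events}

    def on(self, name, fn):
        # register a listener for an event name
        self.listeners[name].append(fn)

    def dispatch(self, name):
        if name not in self.allowed_events:
            raise ValueError(f"unknown event: {name}")
        for fn in self.listeners[name]:
            fn()

events = EventDispatcher(["change", "upload"])
log = []
events.on("change", lambda: log.append("change"))
events.on("upload", lambda: log.append("upload"))

# Mirrors handle_upload above: an upload triggers both events
events.dispatch("change")
events.dispatch("upload")
print(log)  # ['change', 'upload']
```

In the real component, the listeners on the Python side are whatever event handlers the app developer attaches to the component's `change` and `upload` events.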
375
+ Now we will run these functions whenever the `Upload` component uploads a file and whenever the `ModifyUpload` component clears the current file. The `<Upload>` component dispatches a `load` event with a payload of type `FileData` corresponding to the uploaded file. The `on:load` syntax tells `Svelte` to automatically run this function in response to the event.
376
+
377
+ ```svelte
378
+ <ModifyUpload i18n={gradio.i18n} on:clear={handle_clear} absolute />
379
+
380
+ ...
381
+
382
+ <Upload
383
+ on:load={handle_upload}
384
+ filetype={"application/pdf"}
385
+ file_count="single"
386
+ {root}
387
+ >
388
+ <PdfUploadText/>
389
+ </Upload>
390
+ ```
391
+
392
+ Congratulations! You have a working pdf uploader!
393
+
394
+ ![upload-gif](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/pdf_component_gif_docs.gif)
395
+
396
+ ## Step 8: Adding buttons to navigate pages
397
+
398
+ If a user uploads a PDF document with multiple pages, they will only be able to see the first one.
399
+ Let's add some buttons to help them navigate between pages.
400
+ We will use the `BaseButton` from `@gradio/button` so that they look like regular Gradio buttons.
401
+
402
+ Import the `BaseButton` and add the following functions that will render the next and previous page of the PDF.
403
+
404
+ ```ts
405
+ import { BaseButton } from "@gradio/button";
406
+
407
+ ...
408
+
409
+ function next_page() {
410
+ if (currentPage >= numPages) {
411
+ return;
412
+ }
413
+ currentPage++;
414
+ render_page();
415
+ }
416
+
417
+ function prev_page() {
418
+ if (currentPage == 1) {
419
+ return;
420
+ }
421
+ currentPage--;
422
+ render_page();
423
+ }
424
+ ```
425
+
426
+ Now we will add them underneath the canvas in a separate `<div>`:
427
+
428
+ ```svelte
429
+ ...
430
+
431
+ <ModifyUpload i18n={gradio.i18n} on:clear={handle_clear} absolute />
432
+ <div class="pdf-canvas" style="height: {height}px">
433
+ <canvas bind:this={canvasRef}></canvas>
434
+ </div>
435
+ <div class="button-row">
436
+ <BaseButton on:click={prev_page}>
437
+ ⬅️
438
+ </BaseButton>
439
+ <span class="page-count"> {currentPage} / {numPages} </span>
440
+ <BaseButton on:click={next_page}>
441
+ ➡️
442
+ </BaseButton>
443
+ </div>
444
+
445
+ ...
446
+
447
+ <style>
448
+ .button-row {
449
+ display: flex;
450
+ flex-direction: row;
451
+ width: 100%;
452
+ justify-content: center;
453
+ align-items: center;
454
+ }
455
+
456
+ .page-count {
457
+ margin: 0 10px;
458
+ font-family: var(--font-mono);
459
+ }
460
+ ```
461
+
462
+ Congratulations! The frontend is almost complete 🎉
463
+
464
+ ![multipage-pdf-gif](https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/pdf_multipage.gif)
465
+
466
+ ## Step 8.5: The Example view
467
+
468
+ We're going to want users of our component to get a preview of the PDF if it's used as an `example` in a `gr.Interface` or `gr.Examples`.
469
+
470
+ To do so, we're going to add some of the pdf rendering logic from `Index.svelte` to `Example.svelte`.
471
+
472
+
473
+ ```svelte
474
+ <script lang="ts">
475
+ export let value: string;
476
+ export let type: "gallery" | "table";
477
+ export let selected = false;
478
+ import pdfjsLib from "pdfjs-dist";
479
+ pdfjsLib.GlobalWorkerOptions.workerSrc = "https://cdn.bootcss.com/pdf.js/3.11.174/pdf.worker.js";
480
+
481
+ let pdfDoc;
482
+ let canvasRef;
483
+
484
+ async function get_doc(url: string) {
485
+ const loadingTask = pdfjsLib.getDocument(url);
486
+ pdfDoc = await loadingTask.promise;
487
+ renderPage();
488
+ }
489
+
490
+ function renderPage() {
491
+ // Render a specific page of the PDF onto the canvas
492
+ pdfDoc.getPage(1).then(page => {
493
+ const ctx = canvasRef.getContext('2d')
494
+ ctx.clearRect(0, 0, canvasRef.width, canvasRef.height);
495
+
496
+ const viewport = page.getViewport({ scale: 0.2 });
497
+
498
+ const renderContext = {
499
+ canvasContext: ctx,
500
+ viewport
501
+ };
502
+ canvasRef.width = viewport.width;
503
+ canvasRef.height = viewport.height;
504
+ page.render(renderContext);
505
+ });
506
+ }
507
+
508
+ $: get_doc(value);
509
+ </script>
510
+
511
+ <div
512
+ class:table={type === "table"}
513
+ class:gallery={type === "gallery"}
514
+ class:selected
515
+ style="justify-content: center; align-items: center; display: flex; flex-direction: column;"
516
+ >
517
+ <canvas bind:this={canvasRef}></canvas>
518
+ </div>
519
+
520
+ <style>
521
+ .gallery {
522
+ padding: var(--size-1) var(--size-2);
523
+ }
524
+ </style>
525
+ ```
526
+
527
+
528
+ Tip: Exercise for the reader - reduce the code duplication between `Index.svelte` and `Example.svelte` 😊
529
+
530
+
531
+ You will not be able to render examples until we make some changes to the backend code in the next step!
532
+
533
+ ## Step 9: The backend
534
+
535
+ The backend changes needed are smaller than the frontend ones.
536
+ We're almost done!
537
+
538
+ What we're going to do is:
539
+ * Add `change` and `upload` events to our component.
540
+ * Add a `height` property to let users control the height of the PDF.
541
+ * Set the `data_model` of our component to be `FileData`. This is so that Gradio can automatically cache and safely serve any files that are processed by our component.
542
+ * Modify the `preprocess` method to return a string corresponding to the path of our uploaded PDF.
543
+ * Modify the `postprocess` to turn a path to a PDF created in an event handler to a `FileData`.
544
+
545
+ When all is said and done, your component's backend code should look like this:
546
+
547
+ ```python
548
+ from __future__ import annotations
549
+ from typing import Any, Callable, TYPE_CHECKING
550
+
551
+ from gradio.components.base import Component
552
+ from gradio.data_classes import FileData
553
+ from gradio import processing_utils
554
+ if TYPE_CHECKING:
555
+ from gradio.components import Timer
556
+
557
+ class PDF(Component):
558
+
559
+ EVENTS = ["change", "upload"]
560
+
561
+ data_model = FileData
562
+
563
+ def __init__(self, value: Any = None, *,
564
+ height: int | None = None,
565
+ label: str | None = None, info: str | None = None,
566
+ show_label: bool | None = None,
567
+ container: bool = True,
568
+ scale: int | None = None,
569
+ min_width: int | None = None,
570
+ interactive: bool | None = None,
571
+ visible: bool = True,
572
+ elem_id: str | None = None,
573
+ elem_classes: list[str] | str | None = None,
574
+ render: bool = True,
575
+ load_fn: Callable[..., Any] | None = None,
576
+ every: Timer | float | None = None):
577
+ super().__init__(value, label=label, info=info,
578
+ show_label=show_label, container=container,
579
+ scale=scale, min_width=min_width,
580
+ interactive=interactive, visible=visible,
581
+ elem_id=elem_id, elem_classes=elem_classes,
582
+ render=render, load_fn=load_fn, every=every)
583
+ self.height = height
584
+
585
+ def preprocess(self, payload: FileData) -> str:
586
+ return payload.path
587
+
588
+ def postprocess(self, value: str | None) -> FileData | None:
589
+ if not value:
590
+ return None
591
+ return FileData(path=value)
592
+
593
+ def example_payload(self):
594
+ return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"
595
+
596
+ def example_value(self):
597
+ return "https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/fw9.pdf"
598
+ ```
599
+
600
+ ## Step 10: Add a demo and publish!
601
+
602
+ To test our backend code, let's add a more complex demo that performs Document Question Answering with Hugging Face transformers.
603
+
604
+ In our `demo` directory, create a `requirements.txt` file with the following packages
605
+
606
+ ```
607
+ torch
608
+ transformers
609
+ pdf2image
610
+ pytesseract
611
+ ```
612
+
613
+
614
+ Tip: Remember to install these yourself and restart the dev server! You may need to install extra non-python dependencies for `pdf2image`. See [here](https://pypi.org/project/pdf2image/). Feel free to write your own demo if you have trouble.
615
+
616
+
617
+ ```python
618
+ import gradio as gr
619
+ from gradio_pdf import PDF
620
+ from pdf2image import convert_from_path
621
+ from transformers import pipeline
622
+ from pathlib import Path
623
+
624
+ dir_ = Path(__file__).parent
625
+
626
+ p = pipeline(
627
+ "document-question-answering",
628
+ model="impira/layoutlm-document-qa",
629
+ )
630
+
631
+ def qa(question: str, doc: str) -> str:
632
+ img = convert_from_path(doc)[0]
633
+ output = p(img, question)
634
+ return sorted(output, key=lambda x: x["score"], reverse=True)[0]['answer']
635
+
636
+
637
+ demo = gr.Interface(
638
+ qa,
639
+ [gr.Textbox(label="Question"), PDF(label="Document")],
640
+ gr.Textbox(),
641
+ )
642
+
643
+ demo.launch()
644
+ ```
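The last line of `qa` picks the highest-scoring candidate returned by the pipeline. With plain data, the selection logic works like this (the `output` list here is hand-written for illustration, not real pipeline output):

```python
# The document-QA pipeline returns a list of candidate answers, each with a
# confidence score; sorting by score in descending order and taking the first
# element gives the best answer.
output = [
    {"score": 0.12, "answer": "a cat"},
    {"score": 0.85, "answer": "a dog"},
    {"score": 0.03, "answer": "a bird"},
]
best = sorted(output, key=lambda x: x["score"], reverse=True)[0]["answer"]
print(best)  # a dog
```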
645
+
646
+ See our demo in action below!
647
+
648
+ <video autoplay muted loop>
649
+ <source src="https://gradio-builds.s3.amazonaws.com/assets/pdf-guide/PDFDemo.mov" type="video/mp4" />
650
+ </video>
651
+
652
+ Finally, let's build our component with `gradio cc build` and publish it with the `gradio cc publish` command!
653
+ This will guide you through the process of uploading your component to [PyPi](https://pypi.org/) and [HuggingFace Spaces](https://huggingface.co/spaces).
654
+
655
+
656
+ Tip: You may need to add the following lines to the `Dockerfile` of your HuggingFace Space.
657
+
658
+ ```Dockerfile
659
+ RUN mkdir -p /tmp/cache/
660
+ RUN chmod a+rwx -R /tmp/cache/
661
+ RUN apt-get update && apt-get install -y poppler-utils tesseract-ocr
662
+
663
+ ENV TRANSFORMERS_CACHE=/tmp/cache/
664
+ ```
665
+
666
+ ## Conclusion
667
+
668
+ In order to use our new component in **any** gradio 4.0 app, simply install it with pip, e.g. `pip install gradio-pdf`. Then you can use it like the built-in `gr.File()` component (except that it will only accept and display PDF files).
669
+
670
+ Here is a simple demo with the Blocks api:
671
+
672
+ ```python
673
+ import gradio as gr
674
+ from gradio_pdf import PDF
675
+
676
+ with gr.Blocks() as demo:
677
+ pdf = PDF(label="Upload a PDF", interactive=True)
678
+ name = gr.Textbox()
679
+ pdf.upload(lambda f: f, pdf, name)
680
+
681
+ demo.launch()
682
+ ```
683
+
684
+
685
+ I hope you enjoyed this tutorial!
686
+ The complete source code for our component is [here](https://huggingface.co/spaces/freddyaboulton/gradio_pdf/tree/main/src).
687
+ Please don't hesitate to reach out to the gradio community on the [HuggingFace Discord](https://discord.gg/hugging-face-879548962464493619) if you get stuck.
6 Custom Components/08_multimodal-chatbot-part1.md ADDED
@@ -0,0 +1,359 @@
1
+
2
+ # Build a Custom Multimodal Chatbot - Part 1
3
+
4
+ This is the first in a two part series where we build a custom Multimodal Chatbot component.
5
+ In part 1, we will modify the Gradio Chatbot component to display text and media files (video, audio, image) in the same message.
6
+ In part 2, we will build a custom Textbox component that will be able to send multimodal messages (text and media files) to the chatbot.
7
+
8
+ You can follow along with the author of this post as he implements the chatbot component in the following YouTube video!
9
+
10
+ <iframe width="560" height="315" src="https://www.youtube.com/embed/IVJkOHTBPn0?si=bs-sBv43X-RVA8ly" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
11
+
12
+ Here's a preview of what our multimodal chatbot component will look like:
13
+
14
+ ![MultiModal Chatbot](https://gradio-builds.s3.amazonaws.com/assets/MultimodalChatbot.png)
15
+
16
+
17
+ ## Part 1 - Creating our project
18
+
19
+ For this demo we will be tweaking the existing Gradio `Chatbot` component to display text and media files in the same message.
20
+ Let's create a new custom component directory by templating off of the `Chatbot` component source code.
21
+
22
+ ```bash
23
+ gradio cc create MultimodalChatbot --template Chatbot
24
+ ```
25
+
26
+ And we're ready to go!
27
+
28
+ Tip: Make sure to modify the `Author` key in the `pyproject.toml` file.
29
+
30
+ ## Part 2a - The backend data_model
31
+
32
+ Open up the `multimodalchatbot.py` file in your favorite code editor and let's get started modifying the backend of our component.
33
+
34
+ The first thing we will do is create the `data_model` of our component.
35
+ The `data_model` is the data format that your python component will receive and send to the javascript client running the UI.
36
+ You can read more about the `data_model` in the [backend guide](./backend).
37
+
38
+ For our component, each chatbot message will consist of two keys: a `text` key that displays the text message and an optional list of media files that can be displayed underneath the text.
39
+
40
+ Import the `FileData` and `GradioModel` classes from `gradio.data_classes` and modify the existing `ChatbotData` class to look like the following:
41
+
42
+ ```python
43
+ class FileMessage(GradioModel):
44
+ file: FileData
45
+ alt_text: Optional[str] = None
46
+
47
+
48
+ class MultimodalMessage(GradioModel):
49
+ text: Optional[str] = None
50
+ files: Optional[List[FileMessage]] = None
51
+
52
+
53
+ class ChatbotData(GradioRootModel):
54
+ root: List[Tuple[Optional[MultimodalMessage], Optional[MultimodalMessage]]]
55
+
56
+
57
+ class MultimodalChatbot(Component):
58
+ ...
59
+ data_model = ChatbotData
60
+ ```
61
+
62
+
63
+ Tip: The `data_model`s are implemented using `Pydantic V2`. Read the documentation [here](https://docs.pydantic.dev/latest/).
64
+
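Concretely, a single conversation turn in this `data_model` corresponds to JSON like the following. This is a hand-written sketch of the shape implied by the classes above; the field values are illustrative.

```python
# Sketch of the payload shape implied by ChatbotData: a list of
# [user_message, bot_message] pairs, where each message carries a "text"
# string and an optional "files" list of {"file": ..., "alt_text": ...} entries.
turn = [
    {   # user message with one attached file
        "text": "Hello, what is in this image?",
        "files": [{"file": {"path": "cute_dog.jpg"}, "alt_text": None}],
    },
    {   # bot reply with no files
        "text": "It is a very cute dog",
        "files": [],
    },
]
conversation = [turn]  # this list is what ChatbotData stores in `root`
print(len(conversation[0][0]["files"]))  # 1
```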
65
+ We've done the hardest part already!
66
+
67
+ ## Part 2b - The pre and postprocess methods
68
+
69
+ For the `preprocess` method, we will keep it simple and pass a list of `MultimodalMessage`s to the python functions that use this component as input.
70
+ This will let users of our component access the chatbot data with `.text` and `.files` attributes.
71
+ This is a design choice that you can modify in your implementation!
72
+ We can return the list of messages with the `root` property of the `ChatbotData` like so:
73
+
74
+ ```python
75
+ def preprocess(
76
+ self,
77
+ payload: ChatbotData | None,
78
+ ) -> List[MultimodalMessage] | None:
79
+ if payload is None:
80
+ return payload
81
+ return payload.root
82
+ ```
83
+
84
+
85
+ Tip: Learn about the reasoning behind the `preprocess` and `postprocess` methods in the [key concepts guide](./key-component-concepts)
86
+
87
+ In the `postprocess` method we will coerce each message returned by the python function to be a `MultimodalMessage` class.
88
+ We will also clean up any indentation in the `text` field so that it can be properly displayed as markdown in the frontend.
89
+
90
+ We can leave the `postprocess` method as is and modify the `_postprocess_chat_messages`
91
+
92
+ ```python
93
+ def _postprocess_chat_messages(
94
+ self, chat_message: MultimodalMessage | dict | None
95
+ ) -> MultimodalMessage | None:
96
+ if chat_message is None:
97
+ return None
98
+ if isinstance(chat_message, dict):
99
+ chat_message = MultimodalMessage(**chat_message)
100
+ chat_message.text = inspect.cleandoc(chat_message.text or "")
101
+ for file_ in chat_message.files or []:
102
+ file_.file.mime_type = client_utils.get_mimetype(file_.file.path)
103
+ return chat_message
104
+ ```
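The `inspect.cleandoc` call above is what strips the shared leading indentation from multi-line replies so that they render correctly as markdown. For example:

```python
import inspect

# A reply written as an indented triple-quoted string inside a function body
# carries leading whitespace on every line; markdown renderers would treat
# that as a code block. cleandoc removes the common indentation.
reply = """
    Here is a list:

    - first item
    - second item
"""

cleaned = inspect.cleandoc(reply)
print(cleaned)
```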
105
+
106
+ Before we wrap up with the backend code, let's modify the `example_value` and `example_payload` method to return a valid dictionary representation of the `ChatbotData`:
107
+
108
+ ```python
109
+ def example_value(self) -> Any:
110
+ return [[{"text": "Hello!", "files": []}, None]]
111
+
112
+ def example_payload(self) -> Any:
113
+ return [[{"text": "Hello!", "files": []}, None]]
114
+ ```
115
+
116
+ Congrats - the backend is complete!
117
+
118
+ ## Part 3a - The Index.svelte file
119
+
120
+ The frontend for the `Chatbot` component is divided into two parts - the `Index.svelte` file and the `shared/Chatbot.svelte` file.
121
+ The `Index.svelte` file applies some processing to the data received from the server and then delegates the rendering of the conversation to the `shared/Chatbot.svelte` file.
122
+ First we will modify the `Index.svelte` file to apply processing to the new data type the backend will return.
123
+
124
+ Let's begin by porting our custom types from our python `data_model` to typescript.
125
+ Open `frontend/shared/utils.ts` and add the following type definitions at the top of the file:
126
+
127
+ ```ts
128
+ export type FileMessage = {
129
+ file: FileData;
130
+ alt_text?: string;
131
+ };
132
+
133
+
134
+ export type MultimodalMessage = {
135
+ text: string;
136
+ files?: FileMessage[];
137
+ }
138
+ ```
139
+
140
+ Now let's import them in `Index.svelte` and modify the type annotations for `value` and `_value`.
141
+
142
+ ```ts
143
+ import type { FileMessage, MultimodalMessage } from "./shared/utils";
144
+
145
+ export let value: [
146
+ MultimodalMessage | null,
147
+ MultimodalMessage | null
148
+ ][] = [];
149
+
150
+ let _value: [
151
+ MultimodalMessage | null,
152
+ MultimodalMessage | null
153
+ ][];
154
+ ```
155
+
156
+ We need to normalize each message to make sure each file has a proper URL to fetch its contents from.
157
+ We also need to format any embedded file links in the `text` key.
158
+ Let's add a `process_message` utility function and apply it whenever the `value` changes.
159
+
160
+ ```ts
161
+ function process_message(msg: MultimodalMessage | null): MultimodalMessage | null {
162
+ if (msg === null) {
163
+ return msg;
164
+ }
165
+ msg.text = redirect_src_url(msg.text);
166
+ msg.files = msg.files.map(normalize_messages);
167
+ return msg;
168
+ }
169
+
170
+ $: _value = value
171
+ ? value.map(([user_msg, bot_msg]) => [
172
+ process_message(user_msg),
173
+ process_message(bot_msg)
174
+ ])
175
+ : [];
176
+ ```
177
+
178
+ ## Part 3b - the Chatbot.svelte file
179
+
180
+ Let's begin similarly to the `Index.svelte` file and let's first modify the type annotations.
181
+ Import `MultimodalMessage` at the top of the `<script>` section and use it to type the `value` and `old_value` variables.
182
+
183
+ ```ts
184
+ import type { MultimodalMessage } from "./utils";
185
+
186
+ export let value:
187
+ | [
188
+ MultimodalMessage | null,
189
+ MultimodalMessage | null
190
+ ][]
191
+ | null;
192
+ let old_value:
193
+ | [
194
+ MultimodalMessage | null,
195
+ MultimodalMessage | null
196
+ ][]
197
+ | null = null;
198
+ ```
199
+
200
+ We also need to modify the `handle_select` and `handle_like` functions:
201
+
202
+ ```ts
203
+ function handle_select(
204
+ i: number,
205
+ j: number,
206
+ message: MultimodalMessage | null
207
+ ): void {
208
+ dispatch("select", {
209
+ index: [i, j],
210
+ value: message
211
+ });
212
+ }
213
+
214
+ function handle_like(
215
+ i: number,
216
+ j: number,
217
+ message: MultimodalMessage | null,
218
+ liked: boolean
219
+ ): void {
220
+ dispatch("like", {
221
+ index: [i, j],
222
+ value: message,
223
+ liked: liked
224
+ });
225
+ }
226
+ ```
227
+
228
+ Now for the fun part, actually rendering the text and files in the same message!
229
+
230
+ You should see some code like the following that determines whether a file or a markdown message should be displayed depending on the type of the message:
231
+
232
+ ```svelte
233
+ {#if typeof message === "string"}
234
+ <Markdown
235
+ {message}
236
+ {latex_delimiters}
237
+ {sanitize_html}
238
+ {render_markdown}
239
+ {line_breaks}
240
+ on:load={scroll}
241
+ />
242
+ {:else if message !== null && message.file?.mime_type?.includes("audio")}
243
+ <audio
244
+ data-testid="chatbot-audio"
245
+ controls
246
+ preload="metadata"
247
+ ...
248
+ ```
249
+
250
+ We will modify this code to always display the text message and then loop through the files and display all of them that are present:
251
+
252
+ ```svelte
253
+ <Markdown
254
+ message={message.text}
255
+ {latex_delimiters}
256
+ {sanitize_html}
257
+ {render_markdown}
258
+ {line_breaks}
259
+ on:load={scroll}
260
+ />
261
+ {#each message.files as file, k}
262
+ {#if file !== null && file.file.mime_type?.includes("audio")}
263
+ <audio
264
+ data-testid="chatbot-audio"
265
+ controls
266
+ preload="metadata"
267
+ src={file.file?.url}
268
+ title={file.alt_text}
269
+ on:play
270
+ on:pause
271
+ on:ended
272
+ />
273
+ {:else if message !== null && file.file?.mime_type?.includes("video")}
274
+ <video
275
+ data-testid="chatbot-video"
276
+ controls
277
+ src={file.file?.url}
278
+ title={file.alt_text}
279
+ preload="auto"
280
+ on:play
281
+ on:pause
282
+ on:ended
283
+ >
284
+ <track kind="captions" />
285
+ </video>
286
+ {:else if message !== null && file.file?.mime_type?.includes("image")}
287
+ <img
288
+ data-testid="chatbot-image"
289
+ src={file.file?.url}
290
+ alt={file.alt_text}
291
+ />
292
+ {:else if message !== null && file.file?.url !== null}
293
+ <a
294
+ data-testid="chatbot-file"
295
+ href={file.file?.url}
296
+ target="_blank"
297
+ download={window.__is_colab__
298
+ ? null
299
+ : file.file?.orig_name || file.file?.path}
300
+ >
301
+ {file.file?.orig_name || file.file?.path}
302
+ </a>
303
+ {:else if pending_message && j === 1}
304
+ <Pending {layout} />
305
+ {/if}
306
+ {/each}
307
+ ```
308
+
309
+ We did it! 🎉
310
+
311
+ ## Part 4 - The demo
312
+
313
+ For this tutorial, let's keep the demo simple and just display a static conversation between a hypothetical user and a bot.
314
+ This demo will show how both the user and the bot can send files.
315
+ In part 2 of this tutorial series we will build a fully functional chatbot demo!
316
+
317
+ The demo code will look like the following:
318
+
319
+ ```python
320
+ import gradio as gr
321
+ from gradio_multimodalchatbot import MultimodalChatbot
322
+ from gradio.data_classes import FileData
323
+
324
+ user_msg1 = {"text": "Hello, what is in this image?",
325
+ "files": [{"file": FileData(path="https://gradio-builds.s3.amazonaws.com/diffusion_image/cute_dog.jpg")}]
326
+ }
327
+ bot_msg1 = {"text": "It is a very cute dog",
328
+ "files": []}
329
+
330
+ user_msg2 = {"text": "Describe this audio clip please.",
331
+ "files": [{"file": FileData(path="cantina.wav")}]}
332
+ bot_msg2 = {"text": "It is the cantina song from Star Wars",
333
+ "files": []}
334
+
335
+ user_msg3 = {"text": "Give me a video clip please.",
336
+ "files": []}
337
+ bot_msg3 = {"text": "Here is a video clip of the world",
338
+ "files": [{"file": FileData(path="world.mp4")},
339
+ {"file": FileData(path="cantina.wav")}]}
340
+
341
+ conversation = [[user_msg1, bot_msg1], [user_msg2, bot_msg2], [user_msg3, bot_msg3]]
342
+
343
+ with gr.Blocks() as demo:
344
+ MultimodalChatbot(value=conversation, height=800)
345
+
346
+
347
+ demo.launch()
348
+ ```
349
+
350
+
351
+ Tip: Change the filepaths so that they correspond to files on your machine. Also, if you are running in development mode, make sure the files are located in the top level of your custom component directory.
352
+
353
+ ## Part 5 - Deploying and Conclusion
354
+
355
+ Let's build and deploy our demo with `gradio cc build` and `gradio cc deploy`!
356
+
357
+ You can check out our component deployed to [HuggingFace Spaces](https://huggingface.co/spaces/freddyaboulton/gradio_multimodalchatbot) and all of the source code is available [here](https://huggingface.co/spaces/freddyaboulton/gradio_multimodalchatbot/tree/main/src).
358
+
359
+ See you in the next installment of this series!
6 Custom Components/09_documenting-custom-components.md ADDED
@@ -0,0 +1,275 @@
 
1
+
2
+ # Documenting Custom Components
3
+
4
+ In 4.15, we added a new `gradio cc docs` command to the Gradio CLI to generate rich documentation for your custom component. This command will generate documentation for users automatically, but to get the most out of it, you need to do a few things.
5
+
6
+ ## How do I use it?
7
+
8
+ The documentation will be generated when running `gradio cc build`. You can pass the `--no-generate-docs` argument to turn off this behaviour.
9
+
10
+ There is also a standalone `docs` command that allows for greater customisation. If you are running this command manually it should be run _after_ the `version` in your `pyproject.toml` has been bumped but before building the component.
11
+
12
+ All arguments are optional.
13
+
14
+ ```bash
15
+ gradio cc docs
16
+ path # The directory of the custom component.
17
+ --demo-dir # Path to the demo directory.
18
+ --demo-name # Name of the demo file
19
+ --space-url # URL of the Hugging Face Space to link to
20
+ --generate-space # create a documentation space.
21
+ --no-generate-space # do not create a documentation space
22
+ --readme-path # Path to the README.md file.
23
+ --generate-readme # create a README.md file
24
+ --no-generate-readme # do not create a README.md file
25
+ --suppress-demo-check # suppress validation checks and warnings
26
+ ```
27
+
28
+ ## What gets generated?
29
+
30
+ The `gradio cc docs` command will generate an interactive Gradio app and a static README file with various features. You can see an example here:
31
+
32
+ - [Gradio app deployed on Hugging Face Spaces]()
33
+ - [README.md rendered by GitHub]()
34
+
35
+ The README.md and space both have the following features:
36
+
37
+ - A description.
38
+ - Installation instructions.
39
+ - A fully functioning code snippet.
40
+ - Optional links to PyPi, GitHub, and Hugging Face Spaces.
41
+ - API documentation including:
42
+ - An argument table for component initialisation showing types, defaults, and descriptions.
43
+ - A description of how the component affects the user's predict function.
44
+ - A table of events and their descriptions.
45
+ - Any additional interfaces or classes that may be used during initialisation or in the pre- or post- processors.
46
+
47
+ Additionally, the Gradio app includes:
48
+
49
+ - A live demo.
50
+ - A richer, interactive version of the parameter tables.
51
+ - Nicer styling!
52
+
53
+ ## What do I need to do?
54
+
55
+ The documentation generator uses existing standards to extract the necessary information, namely Type Hints and Docstrings. There are no Gradio-specific APIs for documentation, so following best practices will generally yield the best results.
56
+
57
+ If you already use type hints and docstrings in your component source code, you don't need to do much to benefit from this feature, but there are some details that you should be aware of.
58
+
59
+ ### Python version
60
+
61
+ To get the best documentation experience, you need to use Python `3.10` or greater when generating documentation. This is because some introspection features used to generate the documentation were only added in `3.10`.
62
+
63
+ ### Type hints
64
+
65
+ Python type hints are used extensively to provide helpful information for users.
66
+
67
+ <details>
68
+ <summary> What are type hints?</summary>
69
+
70
+
71
+ If you need to become more familiar with type hints in Python, they are a simple way to express what Python types are expected for arguments and return values of functions and methods. They provide a helpful in-editor experience, aid in maintenance, and integrate with various other tools. These types can be simple primitives, like `list`, `str`, or `bool`; they could be more compound types like `list[str]`, `str | None` or `tuple[str, float | int]`; or they can be more complex types using utility classes like [`TypedDict`](https://peps.python.org/pep-0589/#abstract).
72
+
73
+ [Read more about type hints in Python.](https://realpython.com/lessons/type-hinting/)
74
+
75
+
76
+ </details>
77
+
78
+ #### What do I need to add hints to?
79
+
80
+ You do not need to add type hints to every part of your code. For the documentation to work correctly, you will need to add type hints to the following component methods:
81
+
82
+ - `__init__` parameters should be typed.
83
+ - `postprocess` parameters and return value should be typed.
84
+ - `preprocess` parameters and return value should be typed.
85
+
86
+ If you are using `gradio cc create`, these types should already exist, but you may need to tweak them based on any changes you make.
87
+
88
+ ##### `__init__`
89
+
90
+ Here, you only need to type the parameters. If you have cloned a template with `gradio cc create`, these should already be in place. You will only need to add new hints for anything you have added or changed:
91
+
92
+ ```py
93
+ def __init__(
94
+ self,
95
+ value: str | None = None,
96
+ *,
97
+ sources: Literal["upload", "microphone"] = "upload",
98
+ every: Timer | float | None = None,
99
+ ...
100
+ ):
101
+ ...
102
+ ```
103
+
104
+ ##### `preprocess` and `postprocess`
105
+
106
+ The `preprocess` and `postprocess` methods determine the value passed to the user function and the value that needs to be returned.
107
+
108
+ Even if the design of your component is primarily as an input or an output, it is worth adding type hints to both the input parameters and the return values because Gradio has no way of limiting how components can be used.
109
+
110
+ In this case, we specifically care about:
111
+
112
+ - The return type of `preprocess`.
113
+ - The input type of `postprocess`.
114
+
115
+ ```py
116
+ def preprocess(
117
+ self, payload: FileData | None # input is optional
118
+ ) -> tuple[int, str] | str | None:
119
+
120
+ # user function input is the preprocess return ▲
121
+ # user function output is the postprocess input ▼
122
+
123
+ def postprocess(
124
+ self, value: tuple[int, str] | None
125
+ ) -> FileData | bytes | None: # return is optional
126
+ ...
127
+ ```
128
+
129
+ ### Docstrings
130
+
131
+ Docstrings are also used extensively to extract more meaningful, human-readable descriptions of certain parts of the API.
132
+
133
+ <details>
134
+ <summary> What are docstrings?</summary>
135
+
136
+
137
+ If you need to become more familiar with docstrings in Python, they are a way to annotate parts of your code with human-readable descriptions and explanations. They offer a rich in-editor experience like type hints, but unlike type hints, they don't have any specific syntax requirements. They are simple strings and can take almost any form. The only requirement is where they appear. Docstrings should be "a string literal that occurs as the first statement in a module, function, class, or method definition".
138
+
139
+ [Read more about Python docstrings.](https://peps.python.org/pep-0257/#what-is-a-docstring)
140
+
141
+ </details>
142
+
143
+ While docstrings don't have any syntax requirements, we need a particular structure for documentation purposes.
144
+
145
+ As with type hints, the specific information we care about is as follows:
146
+
147
+ - `__init__` parameter docstrings.
148
+ - `preprocess` return docstrings.
149
+ - `postprocess` input parameter docstrings.
150
+
151
+ Everything else is optional.
152
+
153
+ Docstrings should always take this format to be picked up by the documentation generator:
154
+
155
+ #### Classes
156
+
157
+ ```py
158
+ """
159
+ A description of the class.
160
+
161
+ This can span multiple lines and can _contain_ *markdown*.
162
+ """
163
+ ```
164
+
165
+ #### Methods and functions
166
+
167
+ Markdown in these descriptions will not be converted into formatted text.
168
+
169
+ ```py
170
+ """
171
+ Parameters:
172
+ param_one: A description for this parameter.
173
+ param_two: A description for this parameter.
174
+ Returns:
175
+ A description for this return value.
176
+ """
177
+ ```
178
+
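To see why the structure matters, here is a toy parser for that `Parameters:`/`Returns:` layout (purely illustrative — this is not the parser that `gradio cc docs` actually uses):

```python
def parse_docstring(doc: str) -> dict:
    """Split a docstring in the format above into parameter and return descriptions."""
    params, returns, section = {}, [], None
    for line in doc.strip().splitlines():
        stripped = line.strip()
        if stripped == "Parameters:":
            section = "params"
        elif stripped == "Returns:":
            section = "returns"
        elif section == "params" and ":" in stripped:
            name, desc = stripped.split(":", 1)
            params[name.strip()] = desc.strip()
        elif section == "returns" and stripped:
            returns.append(stripped)
    return {"parameters": params, "returns": " ".join(returns)}

doc = """
Parameters:
    param_one: A description for this parameter.
Returns:
    A description for this return value.
"""
print(parse_docstring(doc))
```

A docstring that deviates from the section headers and `name: description` lines would not be split up cleanly, which is why the format needs to be followed exactly.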
179
+ ### Events
180
+
181
+ In custom components, events are expressed as a list stored on the `events` field of the component class. While we do not need types for events, we _do_ need a human-readable description so users can understand the behaviour of the event.
182
+
183
+ To facilitate this, we must create the event in a specific way.
184
+
185
+ There are two ways to add events to a custom component.
186
+
187
+ #### Built-in events
188
+
189
+ Gradio comes with a variety of built-in events that may be enough for your component. If you are using built-in events, you do not need to do anything as they already have descriptions we can extract:
190
+
191
+ ```py
192
+ from gradio.events import Events
193
+
194
+ class ParamViewer(Component):
195
+ ...
196
+
197
+ EVENTS = [
198
+ Events.change,
199
+ Events.upload,
200
+ ]
201
+ ```
202
+
203
+ #### Custom events
204
+
205
+ You can define a custom event if the built-in events are unsuitable for your use case. This is a straightforward process, but you must create the event in this way for docstrings to work correctly:
206
+
207
+ ```py
208
+ from gradio.events import Events, EventListener
209
+
210
+ class ParamViewer(Component):
211
+ ...
212
+
213
+ EVENTS = [
214
+ Events.change,
215
+ EventListener(
216
+ "bingbong",
217
+ doc="This listener is triggered when the user does a bingbong."
218
+ )
219
+ ]
220
+ ```
221
+
222
+ ### Demo
223
+
224
+ The `demo/app.py`, often used for developing the component, generates the live demo and code snippet. The only strict rule here is that the `demo.launch()` command must be contained within an `if __name__ == "__main__"` conditional as below:
225
+
226
+ ```py
227
+ if __name__ == "__main__":
228
+ demo.launch()
229
+ ```
230
+
231
+ The documentation generator will scan for such a clause and error if absent. If you are _not_ launching the demo inside the `demo/app.py`, then you can pass `--suppress-demo-check` to turn off this check.
232
+
233
+ #### Demo recommendations
234
+
235
+ Although there are no additional rules, there are some best practices you should bear in mind to get the best experience from the documentation generator.
236
+
237
+ These are only guidelines, and every situation is unique, but they are sound principles to remember.
238
+
239
+ ##### Keep the demo compact
240
+
241
+ Compact demos look better and make it easier for users to understand what the demo does. Try to remove as many extraneous UI elements as possible to focus the users' attention on the core use case.
242
+
243
+ Sometimes, it might make sense to have a `demo/app.py` just for the docs and an additional, more complex app for your testing purposes. You can also create other spaces, showcasing more complex examples and linking to them from the main class docstring or the `pyproject.toml` description.
244
+
245
+ ##### Keep the code concise
246
+
247
+ The 'getting started' snippet utilises the demo code, which should be as short as possible to keep users engaged and avoid confusion.
248
+
249
+ It isn't the job of the sample snippet to demonstrate the whole API; this snippet should be the shortest path to success for a new user. It should be easy to type or copy-paste and easy to understand. Explanatory comments should be brief and to the point.
250
+
251
+ ##### Avoid external dependencies
252
+
253
+ As mentioned above, users should be able to copy-paste a snippet and have a fully working app. Try to avoid third-party library dependencies to facilitate this.
254
+
255
+ You should carefully consider any examples; avoiding examples that require additional files or that make assumptions about the environment is generally a good idea.
256
+
257
+ ##### Ensure the `demo` directory is self-contained
258
+
259
+ Only the `demo` directory will be uploaded to Hugging Face Spaces in certain instances, as the component will be installed via PyPi if possible. It is essential that this directory is self-contained and that any files needed for the correct running of the demo are present.
260
+
261
+ ### Additional URLs
262
+
263
+ The documentation generator will generate a few buttons, providing helpful information and links to users. They are obtained automatically in some cases, but some need to be explicitly included in the `pyproject.toml`.
264
+
265
+ - PyPi Version and link - This is generated automatically.
266
+ - GitHub Repository - This is populated via the `pyproject.toml`'s `project.urls.repository`.
267
+ - Hugging Face Space - This is populated via the `pyproject.toml`'s `project.urls.space`.
268
+
269
+ An example `pyproject.toml` urls section might look like this:
270
+
271
+ ```toml
272
+ [project.urls]
273
+ repository = "https://github.com/user/repo-name"
274
+ space = "https://huggingface.co/spaces/user/space-name"
275
+ ```
7 Tabular Data Science and Plots/01_connecting-to-a-database.md ADDED
@@ -0,0 +1,154 @@
1
+
2
+ # Connecting to a Database
3
+
4
+ Related spaces: https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard
5
+ Tags: TABULAR, PLOTS
6
+
7
+ ## Introduction
8
+
9
+ This guide explains how you can use Gradio to connect your app to a database. We will be
10
+ connecting to a PostgreSQL database hosted on AWS but gradio is completely agnostic to the type of
11
+ database you are connecting to and where it's hosted. So as long as you can write python code to connect
12
+ to your data, you can display it in a web UI with gradio 💪
13
+
14
+ ## Overview
15
+
16
+ We will be analyzing bike share data from Chicago. The data is hosted on kaggle [here](https://www.kaggle.com/datasets/evangower/cyclistic-bike-share?select=202203-divvy-tripdata.csv).
17
+ Our goal is to create a dashboard that will enable our business stakeholders to answer the following questions:
18
+
19
+ 1. Are electric bikes more popular than regular bikes?
20
+ 2. What are the top 5 most popular departure bike stations?
21
+
22
+ At the end of this guide, we will have a functioning application that looks like this:
23
+
24
+ <gradio-app space="gradio/chicago-bikeshare-dashboard"> </gradio-app>
25
+
26
+ ## Step 1 - Creating your database
27
+
28
+ We will be storing our data on a PostgreSQL database hosted on Amazon's RDS service. Create an AWS account if you don't already have one
29
+ and create a PostgreSQL database on the free tier.
30
+
31
+ **Important**: If you plan to host this demo on HuggingFace Spaces, make sure the database is on port **8080**. Spaces will
32
+ block all outgoing connections unless they are made to port 80, 443, or 8080 as noted [here](https://huggingface.co/docs/hub/spaces-overview#networking).
33
+ RDS will not let you create a PostgreSQL instance on ports 80 or 443.
34
+
35
+ Once your database is created, download the dataset from Kaggle and upload it to your database.
36
+ For the sake of this demo, we will only upload March 2022 data.
37
+
38
+ ## Step 2.a - Write your ETL code
39
+
40
+ We will be querying our database for the total count of rides split by the type of bicycle (electric, standard, or docked).
41
+ We will also query for the total count of rides that depart from each station and take the top 5.
42
+
43
+ We will then take the result of our queries and visualize them with matplotlib.
44
+
45
+ We will use the pandas [read_sql](https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html)
46
+ method to connect to the database. This requires the `psycopg2` library to be installed.
47
+
48
+ In order to connect to our database, we will specify the database username, password, and host as environment variables.
49
+ This will make our app more secure by avoiding storing sensitive information as plain text in our application files.
50
+
51
+ ```python
52
+ import os
53
+ import pandas as pd
54
+ import matplotlib.pyplot as plt
55
+
56
+ DB_USER = os.getenv("DB_USER")
57
+ DB_PASSWORD = os.getenv("DB_PASSWORD")
58
+ DB_HOST = os.getenv("DB_HOST")
59
+ PORT = 8080
60
+ DB_NAME = "bikeshare"
61
+
62
+ connection_string = f"postgresql://{DB_USER}:{DB_PASSWORD}@{DB_HOST}?port={PORT}&dbname={DB_NAME}"
63
+
64
+ def get_count_ride_type():
65
+ df = pd.read_sql(
66
+ """
67
+ SELECT COUNT(ride_id) as n, rideable_type
68
+ FROM rides
69
+ GROUP BY rideable_type
70
+ ORDER BY n DESC
71
+ """,
72
+ con=connection_string
73
+ )
74
+ fig_m, ax = plt.subplots()
75
+ ax.bar(x=df['rideable_type'], height=df['n'])
76
+ ax.set_title("Number of rides by bicycle type")
77
+ ax.set_ylabel("Number of Rides")
78
+ ax.set_xlabel("Bicycle Type")
79
+ return fig_m
80
+
81
+
82
+ def get_most_popular_stations():
83
+
84
+ df = pd.read_sql(
85
+ """
86
+ SELECT COUNT(ride_id) as n, MAX(start_station_name) as station
87
+ FROM RIDES
88
+ WHERE start_station_name is NOT NULL
89
+ GROUP BY start_station_id
90
+ ORDER BY n DESC
91
+ LIMIT 5
92
+ """,
93
+ con=connection_string
94
+ )
95
+ fig_m, ax = plt.subplots()
96
+ ax.bar(x=df['station'], height=df['n'])
97
+ ax.set_title("Most popular stations")
98
+ ax.set_ylabel("Number of Rides")
99
+ ax.set_xlabel("Station Name")
100
+ ax.set_xticklabels(
101
+ df['station'], rotation=45, ha="right", rotation_mode="anchor"
102
+ )
103
+ ax.tick_params(axis="x", labelsize=8)
104
+ fig_m.tight_layout()
105
+ return fig_m
106
+ ```
107
+
108
+ If you were to run our script locally, you could pass in your credentials as environment variables like so
109
+
110
+ ```bash
111
+ DB_USER='username' DB_PASSWORD='password' DB_HOST='host' python app.py
112
+ ```
113
+
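A small, hypothetical helper (not part of the guide's `app.py`) shows how you might fail fast when a credential is missing, using the same connection-string format as above:

```python
import os

def build_connection_string() -> str:
    """Build the libpq-style URI, raising a clear error for missing credentials."""
    required = ["DB_USER", "DB_PASSWORD", "DB_HOST"]
    missing = [name for name in required if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    user, password, host = (os.environ[name] for name in required)
    return f"postgresql://{user}:{password}@{host}?port=8080&dbname=bikeshare"

# Example values, for demonstration only -- in practice these come from the shell
os.environ.update({"DB_USER": "username", "DB_PASSWORD": "password", "DB_HOST": "host"})
print(build_connection_string())
```

Failing at startup with a named list of missing variables is much easier to debug than a cryptic connection error from the database driver.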
114
+ ## Step 2.b - Write your gradio app
115
+
116
+ We will display our matplotlib plots in two separate `gr.Plot` components displayed side by side using `gr.Row()`.
117
+ Because we have wrapped our function to fetch the data in a `demo.load()` event trigger,
118
+ our demo will fetch the latest data **dynamically** from the database each time the web page loads. 🪄
119
+
120
+ ```python
121
+ import gradio as gr
122
+
123
+ with gr.Blocks() as demo:
124
+ with gr.Row():
125
+ bike_type = gr.Plot()
126
+ station = gr.Plot()
127
+
128
+ demo.load(get_count_ride_type, inputs=None, outputs=bike_type)
129
+ demo.load(get_most_popular_stations, inputs=None, outputs=station)
130
+
131
+ demo.launch()
132
+ ```
133
+
134
+ ## Step 3 - Deployment
135
+
136
+ If you run the code above, your app will start running locally.
137
+ You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
138
+
139
+ But what if you want a permanent deployment solution?
140
+ Let's deploy our Gradio app to the free HuggingFace Spaces platform.
141
+
142
+ If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
143
+ You will have to add the `DB_USER`, `DB_PASSWORD`, and `DB_HOST` variables as "Repo Secrets". You can do this in the "Settings" tab.
144
+
145
+ ![secrets](https://github.com/gradio-app/gradio/blob/main/guides/assets/secrets.png?raw=true)
146
+
147
+ ## Conclusion
148
+
149
+ Congratulations! You now know how to connect your gradio app to a database hosted on the cloud! ☁️
150
+
151
+ Our dashboard is now running on [Spaces](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard).
152
+ The complete code is [here](https://huggingface.co/spaces/gradio/chicago-bikeshare-dashboard/blob/main/app.py).
153
+
154
+ As you can see, gradio gives you the power to connect to your data wherever it lives and display however you want! 🔥
7 Tabular Data Science and Plots/creating-a-dashboard-from-bigquery-data.md ADDED
@@ -0,0 +1,123 @@
1
+
2
+ # Creating a Real-Time Dashboard from BigQuery Data
3
+
4
+ Tags: TABULAR, DASHBOARD, PLOTS
5
+
6
+ [Google BigQuery](https://cloud.google.com/bigquery) is a cloud-based service for processing very large data sets. It is a serverless and highly scalable data warehousing solution that enables users to analyze data [using SQL-like queries](https://www.oreilly.com/library/view/google-bigquery-the/9781492044451/ch01.html).
7
+
8
+ In this tutorial, we will show you how to query a BigQuery dataset in Python and display the data in a dashboard that updates in real time using `gradio`. The dashboard will look like this:
9
+
10
+ <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/bigquery-dashboard.gif">
11
+
12
+ We'll cover the following steps in this Guide:
13
+
14
+ 1. Setting up your BigQuery credentials
15
+ 2. Using the BigQuery client
16
+ 3. Building the real-time dashboard (in just _7 lines of Python_)
17
+
18
+ We'll be working with the [New York Times' COVID dataset](https://www.nytimes.com/interactive/2021/us/covid-cases.html) that is available as a public dataset on BigQuery. The dataset, named `covid19_nyt.us_counties`, contains the latest information about the number of confirmed cases and deaths from COVID across US counties.
19
+
20
+ **Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.
21
+
22
+ ## Setting up your BigQuery Credentials
23
+
24
+ To use Gradio with BigQuery, you will need to obtain your BigQuery credentials and use them with the [BigQuery Python client](https://pypi.org/project/google-cloud-bigquery/). If you already have BigQuery credentials (as a `.json` file), you can skip this section. If not, you can do this for free in just a couple of minutes.
25
+
26
+ 1. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)
27
+
28
+ 2. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
29
+
30
+ 3. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "BigQuery API", click on it, and click the "Enable" button. If you see the "Manage" button, then the BigQuery API is already enabled, and you're all set.
31
+
32
+ 4. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
33
+
34
+ 5. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. Also grant the service account permissions by giving it a role such as "BigQuery User", which will allow you to run queries.
35
+
36
+ 6. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:
37
+
38
+ ```json
39
+ {
40
+ "type": "service_account",
41
+ "project_id": "your project",
42
+ "private_key_id": "your private key id",
43
+ "private_key": "private key",
44
+ "client_email": "email",
45
+ "client_id": "client id",
46
+ "auth_uri": "https://accounts.google.com/o/oauth2/auth",
47
+ "token_uri": "https://accounts.google.com/o/oauth2/token",
48
+ "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
49
+ "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
50
+ }
51
+ ```
52
+
53
+ ## Using the BigQuery Client
54
+
55
+ Once you have the credentials, you will need to use the BigQuery Python client to authenticate using your credentials. To do this, you will need to install the BigQuery Python client by running the following command in the terminal:
56
+
57
+ ```bash
58
+ pip install google-cloud-bigquery[pandas]
59
+ ```
60
+
61
+ You'll notice that we've installed the pandas add-on, which will be helpful for processing the BigQuery dataset as a pandas dataframe. Once the client is installed, you can authenticate using your credentials by running the following code:
62
+
63
+ ```py
64
+ from google.cloud import bigquery
65
+
66
+ client = bigquery.Client.from_service_account_json("path/to/key.json")
67
+ ```
68
+
69
+ With your credentials authenticated, you can now use the BigQuery Python client to interact with your BigQuery datasets.
70
+
71
+ Here is an example of a function which queries the `covid19_nyt.us_counties` dataset in BigQuery to show the top 20 counties with the most confirmed cases as of the current day:
72
+
73
+ ```py
74
+ import numpy as np
75
+
76
+ QUERY = (
77
+ 'SELECT * FROM `bigquery-public-data.covid19_nyt.us_counties` '
78
+ 'ORDER BY date DESC,confirmed_cases DESC '
79
+ 'LIMIT 20')
80
+
81
+ def run_query():
82
+ query_job = client.query(QUERY)
83
+ query_result = query_job.result()
84
+ df = query_result.to_dataframe()
85
+ # Select a subset of columns
86
+ df = df[["confirmed_cases", "deaths", "county", "state_name"]]
87
+ # Convert numeric columns to standard numpy types
88
+ df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
89
+ return df
90
+ ```
91
+
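The column selection and type conversion at the end of `run_query` can be exercised locally on a stand-in dataframe, without touching BigQuery (the sample rows below are invented for illustration):

```python
import numpy as np
import pandas as pd

# Stand-in for query_result.to_dataframe(): a fake result with an extra column,
# so we can run the same post-processing without a BigQuery client.
raw = pd.DataFrame({
    "date": ["2023-01-01", "2023-01-01"],
    "confirmed_cases": [1000, 900],
    "deaths": [10, 9],
    "county": ["Los Angeles", "Cook"],
    "state_name": ["California", "Illinois"],
})

# Select a subset of columns, then convert numeric columns to standard numpy types
df = raw[["confirmed_cases", "deaths", "county", "state_name"]]
df = df.astype({"deaths": np.int64, "confirmed_cases": np.int64})
print(df.dtypes.to_dict())
```

The `astype` step matters because BigQuery's dataframe conversion can return nullable extension dtypes that some downstream components don't expect.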
92
+ ## Building the Real-Time Dashboard
93
+
94
+ Once you have a function to query the data, you can use the `gr.DataFrame` component from the Gradio library to display the results in a tabular format. This is a useful way to inspect the data and make sure that it has been queried correctly.
95
+
96
+ Here is an example of how to use the `gr.DataFrame` component to display the results. By passing in the `run_query` function to `gr.DataFrame`, we instruct Gradio to run the function as soon as the page loads and show the results. In addition, we also pass in the keyword `every` to tell the dashboard to refresh every hour (60\*60 seconds).
97
+
98
+ ```py
99
+ import gradio as gr
100
+
101
+ with gr.Blocks() as demo:
102
+ gr.DataFrame(run_query, every=gr.Timer(60*60))
103
+
104
+ demo.launch()
105
+ ```
106
+
107
+ Perhaps you'd like to add a visualization to our dashboard. You can use the `gr.ScatterPlot()` component to visualize the data in a scatter plot. This allows you to see the relationship between different variables such as case count and case deaths in the dataset and can be useful for exploring the data and gaining insights. Again, we can do this in real-time
108
+ by passing in the `every` parameter.
109
+
110
+ Here is a complete example showing how to use the `gr.ScatterPlot` to visualize the data, in addition to displaying it with the `gr.DataFrame`:
111
+
112
+ ```py
113
+ import gradio as gr
114
+
115
+ with gr.Blocks() as demo:
116
+ gr.Markdown("# 💉 Covid Dashboard (Updated Hourly)")
117
+ with gr.Row():
118
+ gr.DataFrame(run_query, every=gr.Timer(60*60))
119
+ gr.ScatterPlot(run_query, every=gr.Timer(60*60), x="confirmed_cases",
120
+ y="deaths", tooltip="county", width=500, height=500)
121
+
122
+ demo.queue().launch() # Run the demo with queuing enabled
123
+ ```
7 Tabular Data Science and Plots/creating-a-dashboard-from-supabase-data.md ADDED
@@ -0,0 +1,122 @@
1
+
2
+ # Create a Dashboard from Supabase Data
3
+
4
+ Tags: TABULAR, DASHBOARD, PLOTS
5
+
6
+ [Supabase](https://supabase.com/) is a cloud-based open-source backend that provides a PostgreSQL database, authentication, and other useful features for building web and mobile applications. In this tutorial, you will learn how to read data from Supabase and plot it in **real-time** on a Gradio Dashboard.
7
+
8
+ **Prerequisites:** To start, you will need a free Supabase account, which you can sign up for here: [https://app.supabase.com/](https://app.supabase.com/)
9
+
10
+ In this end-to-end guide, you will learn how to:
11
+
12
+ - Create tables in Supabase
13
+ - Write data to Supabase using the Supabase Python Client
14
+ - Visualize the data in a real-time dashboard using Gradio
15
+
16
+ If you already have data on Supabase that you'd like to visualize in a dashboard, you can skip the first two sections and go directly to [visualizing the data](#visualize-the-data-in-a-real-time-gradio-dashboard)!
17
+
18
+ ## Create a table in Supabase
19
+
20
+ First of all, we need some data to visualize. Following this [excellent guide](https://supabase.com/blog/loading-data-supabase-python), we'll create fake commerce data and put it in Supabase.
21
+
22
+ 1\. Start by creating a new project in Supabase. Once you're logged in, click the "New Project" button
23
+
24
+ 2\. Give your project a name and database password. You can also choose a pricing plan (for our purposes, the Free Tier is sufficient!)
25
+
26
+ 3\. You'll be presented with your API keys while the database spins up (can take up to 2 minutes).
27
+
28
+ 4\. Click on "Table Editor" (the table icon) in the left pane to create a new table. We'll create a single table called `Product`, with the following schema:
29
+
30
+ <center>
31
+ <table>
32
+ <tr><td>product_id</td><td>int8</td></tr>
33
+ <tr><td>inventory_count</td><td>int8</td></tr>
34
+ <tr><td>price</td><td>float8</td></tr>
35
+ <tr><td>product_name</td><td>varchar</td></tr>
36
+ </table>
37
+ </center>
38
+
39
+ 5\. Click Save to save the table schema.
40
+
41
+ Our table is now ready!
42
+
43
+ ## Write data to Supabase
44
+
45
+ The next step is to write data to a Supabase dataset. We will use the Supabase Python library to do this.
46
+
47
+ 6\. Install `supabase` by running the following command in your terminal:
48
+
49
+ ```bash
50
+ pip install supabase
51
+ ```
52
+
53
+ 7\. Get your project URL and API key. Click the Settings (gear icon) on the left pane and click 'API'. The URL is listed in the Project URL box, while the API key is listed in Project API keys (with the tags `service_role`, `secret`)
54
+
55
+ 8\. Now, run the following Python script to write some fake data to the table (note you have to put the values of `SUPABASE_URL` and `SUPABASE_SECRET_KEY` from step 7):
56
+
57
+ ```python
58
+ import supabase
59
+
60
+ # Initialize the Supabase client
61
+ client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')
62
+
63
+ # Define the data to write
64
+ import random
65
+
66
+ main_list = []
67
+ for i in range(10):
68
+ value = {'product_id': i,
69
+ 'product_name': f"Item {i}",
70
+ 'inventory_count': random.randint(1, 100),
71
+ 'price': random.random()*100
72
+ }
73
+ main_list.append(value)
74
+
75
+ # Write the data to the table
76
+ data = client.table('Product').insert(main_list).execute()
77
+ ```
78
+
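If you'd like to sanity-check the generated records before writing them, the loop above can be run on its own (the Supabase client calls are removed, and a seed is added here only to make the sketch deterministic):

```python
import random

random.seed(0)  # deterministic for this sketch; the guide's script is unseeded

main_list = []
for i in range(10):
    main_list.append({
        "product_id": i,
        "product_name": f"Item {i}",
        "inventory_count": random.randint(1, 100),  # integer in [1, 100]
        "price": random.random() * 100,             # float in [0, 100)
    })

print(len(main_list), sorted(main_list[0]))
```

Each dictionary's keys must match the column names of the `Product` table exactly, or the insert will fail.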
79
+ Return to your Supabase dashboard and refresh the page; you should now see 10 rows populated in the `Product` table!
80
+
81
+ ## Visualize the Data in a Real-Time Gradio Dashboard
82
+
83
+ Finally, we will read the data from the Supabase dataset using the same `supabase` Python library and create a real-time dashboard using `gradio`.
84
+
85
+ Note: We repeat certain steps in this section (like creating the Supabase client) in case you did not go through the previous sections. As described in Step 7, you will need the project URL and API Key for your database.
86
+
87
+ 9\. Write a function that loads the data from the `Product` table and returns it as a pandas Dataframe:
88
+
89
+ ```python
90
+ import supabase
91
+ import pandas as pd
92
+
93
+ client = supabase.create_client('SUPABASE_URL', 'SUPABASE_SECRET_KEY')
94
+
95
+ def read_data():
96
+ response = client.table('Product').select("*").execute()
97
+ df = pd.DataFrame(response.data)
98
+ return df
99
+ ```
100
+
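Since `response.data` is simply a list of dictionaries, the conversion inside `read_data` can be tried locally with simulated data (the rows below are invented for illustration):

```python
import pandas as pd

# Simulated response.data, as the Supabase client would return it
fake_response_data = [
    {"product_id": 0, "product_name": "Item 0", "inventory_count": 55, "price": 42.0},
    {"product_id": 1, "product_name": "Item 1", "inventory_count": 12, "price": 7.5},
]

# Same conversion as in read_data(): list of dicts -> DataFrame
df = pd.DataFrame(fake_response_data)
print(df.shape)
```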
101
+ 10\. Create a small Gradio dashboard with two bar plots that plot the prices and inventories of all of the items, updating in real time every minute:
102
+
103
+ ```python
104
+ import gradio as gr
105
+
106
+ with gr.Blocks() as dashboard:
107
+ with gr.Row():
108
+ gr.BarPlot(read_data, x="product_id", y="price", title="Prices", every=gr.Timer(60))
109
+ gr.BarPlot(read_data, x="product_id", y="inventory_count", title="Inventory", every=gr.Timer(60))
110
+
111
+ dashboard.queue().launch()
112
+ ```
113
+
114
+ Notice that by passing in a function to `gr.BarPlot()`, we have the BarPlot query the database as soon as the web app loads (and then again every 60 seconds because of the `every` parameter). Your final dashboard should look something like this:
115
+
116
+ <gradio-app space="abidlabs/supabase"></gradio-app>
117
+
118
+ ## Conclusion
119
+
120
+ That's it! In this tutorial, you learned how to write data to a Supabase dataset, and then read that data and plot the results as bar plots. If you update the data in the Supabase database, you'll notice that the Gradio dashboard will update within a minute.
121
+
122
+ Try adding more plots and visualizations to this example (or with a different dataset) to build a more complex dashboard!
7 Tabular Data Science and Plots/creating-a-realtime-dashboard-from-google-sheets.md ADDED
@@ -0,0 +1,143 @@
1
+
2
+ # Creating a Real-Time Dashboard from Google Sheets
3
+
4
+ Tags: TABULAR, DASHBOARD, PLOTS
5
+
6
+ [Google Sheets](https://www.google.com/sheets/about/) are an easy way to store tabular data in the form of spreadsheets. With Gradio and pandas, it's easy to read data from public or private Google Sheets and then display the data or plot it. In this blog post, we'll build a small _real-time_ dashboard, one that updates when the data in the Google Sheets updates.
7
+
8
+ Building the dashboard itself will just be 9 lines of Python code using Gradio, and our final dashboard will look like this:
9
+
10
+ <gradio-app space="gradio/line-plot"></gradio-app>
11
+
12
+ **Prerequisites**: This Guide uses [Gradio Blocks](/guides/quickstart/#blocks-more-flexibility-and-control), so make sure you are familiar with the Blocks class.
13
+
14
+ The process is a little different depending on if you are working with a publicly accessible or a private Google Sheet. We'll cover both, so let's get started!
15
+
16
+ ## Public Google Sheets
17
+
18
+ Building a dashboard from a public Google Sheet is very easy, thanks to the [`pandas` library](https://pandas.pydata.org/):
19
+
20
+ 1\. Get the URL of the Google Sheet that you want to use. To do this, simply go to the Google Sheet, click on the "Share" button in the top-right corner, and then click on the "Get shareable link" button. This will give you a URL that looks something like this:
21
+
22
+ ```html
23
+ https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
24
+ ```
25
+
26
+ 2\. Now, let's modify this URL and then use it to read the data from the Google Sheet into a pandas DataFrame. (In the code below, replace the `URL` variable with the URL of your public Google Sheet):
27
+
28
+ ```python
29
+ import pandas as pd
30
+
31
+ URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
32
+ csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')
33
+
34
+ def get_data():
35
+ return pd.read_csv(csv_url)
36
+ ```
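As a sanity check, the string replacement above keeps the spreadsheet ID intact and only swaps the viewer path for the CSV export endpoint — you can verify this without any network access:

```python
# Rewrite a Google Sheets "edit" URL into its CSV export equivalent
URL = "https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0"
csv_url = URL.replace('/edit#gid=', '/export?format=csv&gid=')

print(csv_url)
# https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/export?format=csv&gid=0
```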
37
+
38
+ 3\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) you would like the component to refresh. Here's the Gradio code:
39
+
40
+ ```python
41
+ import gradio as gr
42
+
43
+ with gr.Blocks() as demo:
44
+ gr.Markdown("# 📈 Real-Time Line Plot")
45
+ with gr.Row():
46
+ with gr.Column():
47
+ gr.DataFrame(get_data, every=gr.Timer(5))
48
+ with gr.Column():
49
+ gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
50
+
51
+ demo.queue().launch() # Run the demo with queuing enabled
52
+ ```
53
+
54
+ And that's it! You have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
55
+
56
+ ## Private Google Sheets
57
+
58
+ For private Google Sheets, the process requires a little more work, but not that much! The key difference is that now, you must authenticate yourself to authorize access to the private Google Sheets.
59
+
60
+ ### Authentication
61
+
62
+ To authenticate yourself, obtain credentials from Google Cloud. Here's [how to set up Google Cloud credentials](https://developers.google.com/workspace/guides/create-credentials):
63
+
64
+ 1\. First, log in to your Google Cloud account and go to the Google Cloud Console (https://console.cloud.google.com/)
65
+
66
+ 2\. In the Cloud Console, click on the hamburger menu in the top-left corner and select "APIs & Services" from the menu. If you do not have an existing project, you will need to create one.
67
+
68
+ 3\. Then, click the "+ Enabled APIs & services" button, which allows you to enable specific services for your project. Search for "Google Sheets API", click on it, and click the "Enable" button. If you see the "Manage" button, then Google Sheets is already enabled, and you're all set.
69
+
70
+ 4\. In the APIs & Services menu, click on the "Credentials" tab and then click on the "Create credentials" button.
71
+
72
+ 5\. In the "Create credentials" dialog, select "Service account key" as the type of credentials to create, and give it a name. **Note down the email of the service account.**
73
+
74
+ 6\. After selecting the service account, select the "JSON" key type and then click on the "Create" button. This will download the JSON key file containing your credentials to your computer. It will look something like this:
75
+
76
+ ```json
77
+ {
78
+ "type": "service_account",
79
+ "project_id": "your project",
80
+ "private_key_id": "your private key id",
81
+ "private_key": "private key",
82
+ "client_email": "email",
83
+ "client_id": "client id",
84
+ "auth_uri": "https://accounts.google.com/o/oauth2/auth",
85
+ "token_uri": "https://accounts.google.com/o/oauth2/token",
86
+ "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
87
+ "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/email_id"
88
+ }
89
+ ```
90
+
91
+ ### Querying
92
+
93
+ Once you have the credentials `.json` file, you can use the following steps to query your Google Sheet:
94
+
95
+ 1\. Click on the "Share" button in the top-right corner of the Google Sheet. Share the Google Sheet with the email address of the service account from Step 5 of the authentication subsection (this step is important!). Then click on the "Get shareable link" button. This will give you a URL that looks something like this:
96
+
97
+ ```html
98
+ https://docs.google.com/spreadsheets/d/1UoKzzRzOCt-FXLLqDKLbryEKEgllGAQUEJ5qtmmQwpU/edit#gid=0
99
+ ```
100
+
101
+ 2\. Install the [`gspread` library](https://docs.gspread.org/en/v5.7.0/), which makes it easy to work with the [Google Sheets API](https://developers.google.com/sheets/api/guides/concepts) in Python by running in the terminal: `pip install gspread`
102
+
103
+ 3\. Write a function to load the data from the Google Sheet, like this (replace the `URL` variable with the URL of your private Google Sheet):
104
+
105
+ ```python
106
+ import gspread
107
+ import pandas as pd
108
+
109
+ # Authenticate with Google and get the sheet
110
+ URL = 'https://docs.google.com/spreadsheets/d/1_91Vps76SKOdDQ8cFxZQdgjTJiz23375sAT7vPvaj4k/edit#gid=0'
111
+
112
+ gc = gspread.service_account("path/to/key.json")
113
+ sh = gc.open_by_url(URL)
114
+ worksheet = sh.sheet1
115
+
116
+ def get_data():
117
+ values = worksheet.get_all_values()
118
+ df = pd.DataFrame(values[1:], columns=values[0])
119
+ return df
120
+
121
+ ```
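The `values[1:], columns=values[0]` idiom above simply peels the first row off as the column headers. Here's the same split on a hypothetical worksheet (note that `get_all_values()` returns every cell as a string):

```python
# A toy stand-in for worksheet.get_all_values(): a list of rows, header row first
values = [
    ["Date", "Sales"],
    ["2023-01-01", "10"],
    ["2023-01-02", "12"],
]

header, rows = values[0], values[1:]
print(header)     # ['Date', 'Sales']
print(len(rows))  # 2
```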
122
+
123
+ 4\. The data query is a function, which means that it's easy to display it in real time using the `gr.DataFrame` component, or plot it in real time using the `gr.LinePlot` component (of course, depending on the data, a different plot may be appropriate). To do this, we just pass the function into the respective components, and set the `every` parameter based on how frequently (in seconds) we would like the component to refresh. Here's the Gradio code:
124
+
125
+ ```python
126
+ import gradio as gr
127
+
128
+ with gr.Blocks() as demo:
129
+ gr.Markdown("# 📈 Real-Time Line Plot")
130
+ with gr.Row():
131
+ with gr.Column():
132
+ gr.DataFrame(get_data, every=gr.Timer(5))
133
+ with gr.Column():
134
+ gr.LinePlot(get_data, every=gr.Timer(5), x="Date", y="Sales", y_title="Sales ($ millions)", overlay_point=True, width=500, height=500)
135
+
136
+ demo.queue().launch() # Run the demo with queuing enabled
137
+ ```
138
+
139
+ You now have a dashboard that refreshes every 5 seconds, pulling the data from your Google Sheet.
140
+
141
+ ## Conclusion
142
+
143
+ And that's all there is to it! With just a few lines of code, you can use `gradio` and other libraries to read data from a public or private Google Sheet and then display and plot the data in a real-time dashboard.
7 Tabular Data Science and Plots/plot-component-for-maps.md ADDED
@@ -0,0 +1,111 @@
1
+
2
+ # How to Use the Plot Component for Maps
3
+
4
+ Tags: PLOTS, MAPS
5
+
6
+ ## Introduction
7
+
8
+ This guide explains how you can use Gradio to plot geographical data on a map using the `gradio.Plot` component. The Gradio `Plot` component works with Matplotlib, Bokeh and Plotly. Plotly is what we will be working with in this guide. Plotly allows developers to easily create all sorts of maps with their geographical data. Take a look [here](https://plotly.com/python/maps/) for some examples.
9
+
10
+ ## Overview
11
+
12
+ We will be using the New York City Airbnb dataset, which is hosted on Kaggle [here](https://www.kaggle.com/datasets/dgomonov/new-york-city-airbnb-open-data). I've uploaded it to the Hugging Face Hub as a dataset [here](https://huggingface.co/datasets/gradio/NYC-Airbnb-Open-Data) for easier use and download. Using this data, we will plot Airbnb locations on a map output and allow filtering based on price and location. Below is the demo that we will be building. ⚡️
13
+
14
+ $demo_map_airbnb
15
+
16
+ ## Step 1 - Loading CSV data 💾
17
+
18
+ Let's start by loading the Airbnb NYC data from the Hugging Face Hub.
19
+
20
+ ```python
21
+ from datasets import load_dataset
22
+
23
+ dataset = load_dataset("gradio/NYC-Airbnb-Open-Data", split="train")
24
+ df = dataset.to_pandas()
25
+
26
+ def filter_map(min_price, max_price, boroughs):
27
+ new_df = df[(df['neighbourhood_group'].isin(boroughs)) &
28
+ (df['price'] > min_price) & (df['price'] < max_price)]
29
+ names = new_df["name"].tolist()
30
+ prices = new_df["price"].tolist()
31
+ text_list = [(names[i], prices[i]) for i in range(0, len(names))]
32
+ ```
33
+
34
+ In the code above, we first load the CSV data into a pandas dataframe. Next, we define a function that we will use as the prediction function for the Gradio app. This function accepts the minimum and maximum price as well as the list of boroughs, and uses these passed-in values (`min_price`, `max_price`, and `boroughs`) to filter the dataframe and create `new_df`. Finally, we create a `text_list` of the names and prices of each Airbnb to use as labels on the map.
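The boolean mask in `filter_map` is just a combined membership and range check per row. The same logic, written over a few hypothetical listings in plain Python, makes the filter explicit:

```python
min_price, max_price = 50, 200
boroughs = ["Queens", "Brooklyn"]

# Hypothetical stand-ins for rows of the Airbnb dataframe
listings = [
    {"name": "Cozy loft", "neighbourhood_group": "Queens", "price": 120},
    {"name": "Midtown studio", "neighbourhood_group": "Manhattan", "price": 150},
    {"name": "Shared room", "neighbourhood_group": "Brooklyn", "price": 30},
]

# Keep rows whose borough is selected and whose price is strictly inside the range
filtered = [
    row for row in listings
    if row["neighbourhood_group"] in boroughs and min_price < row["price"] < max_price
]
text_list = [(row["name"], row["price"]) for row in filtered]
print(text_list)  # [('Cozy loft', 120)]
```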
35
+
36
+ ## Step 2 - Map Figure 🌐
37
+
38
+ Plotly makes it easy to work with maps. Let's take a look below how we can create a map figure.
39
+
40
+ ```python
41
+ import plotly.graph_objects as go
42
+
43
+ fig = go.Figure(go.Scattermapbox(
44
+ customdata=text_list,
45
+ lat=new_df['latitude'].tolist(),
46
+ lon=new_df['longitude'].tolist(),
47
+ mode='markers',
48
+ marker=go.scattermapbox.Marker(
49
+ size=6
50
+ ),
51
+ hoverinfo="text",
52
+ hovertemplate='<b>Name</b>: %{customdata[0]}<br><b>Price</b>: $%{customdata[1]}'
53
+ ))
54
+
55
+ fig.update_layout(
56
+ mapbox_style="open-street-map",
57
+ hovermode='closest',
58
+ mapbox=dict(
59
+ bearing=0,
60
+ center=go.layout.mapbox.Center(
61
+ lat=40.67,
62
+ lon=-73.90
63
+ ),
64
+ pitch=0,
65
+ zoom=9
66
+ ),
67
+ )
68
+ ```
69
+
70
+ Above, we create a scatter plot on Mapbox by passing it our list of latitudes and longitudes to plot markers. We also pass in our custom data of names and prices for additional info to appear on every marker we hover over. Next, we use `update_layout` to specify other map settings such as zoom and centering.
71
+
72
+ More info [here](https://plotly.com/python/scattermapbox/) on scatter plots using Mapbox and Plotly.
73
+
74
+ ## Step 3 - Gradio App ⚡️
75
+
76
+ We will use two `gr.Number` components and a `gr.CheckboxGroup` to allow users of our app to specify price ranges and borough locations. We will then use the `gr.Plot` component as an output for our Plotly + Mapbox map we created earlier.
77
+
78
+ ```python
79
+ with gr.Blocks() as demo:
80
+ with gr.Column():
81
+ with gr.Row():
82
+ min_price = gr.Number(value=250, label="Minimum Price")
83
+ max_price = gr.Number(value=1000, label="Maximum Price")
84
+ boroughs = gr.CheckboxGroup(choices=["Queens", "Brooklyn", "Manhattan", "Bronx", "Staten Island"], value=["Queens", "Brooklyn"], label="Select Boroughs:")
85
+ btn = gr.Button(value="Update Filter")
86
+ map = gr.Plot()
87
+ demo.load(filter_map, [min_price, max_price, boroughs], map)
88
+ btn.click(filter_map, [min_price, max_price, boroughs], map)
89
+ ```
90
+
91
+ We lay out these components using `gr.Column` and `gr.Row`, and we also add event triggers for when the demo first loads and when our "Update Filter" button is clicked, in order to trigger the map to update with our new filters.
92
+
93
+ This is what the full demo code looks like:
94
+
95
+ $code_map_airbnb
96
+
97
+ ## Step 4 - Deployment 🤗
98
+
99
+ If you run the code above, your app will start running locally.
100
+ You can even get a temporary shareable link by passing the `share=True` parameter to `launch`.
101
+
102
+ But what if you want a permanent deployment solution?
103
+ Let's deploy our Gradio app to the free Hugging Face Spaces platform.
104
+
105
+ If you haven't used Spaces before, follow the previous guide [here](/using_hugging_face_integrations).
106
+
107
+ ## Conclusion 🎉
108
+
109
+ And you're all done! That's all the code you need to build a map demo.
110
+
111
+ Here's a link to the [map demo](https://huggingface.co/spaces/gradio/map_airbnb) and the [complete code](https://huggingface.co/spaces/gradio/map_airbnb/blob/main/run.py) (on Hugging Face Spaces).
7 Tabular Data Science and Plots/styling-the-gradio-dataframe.md ADDED
@@ -0,0 +1,168 @@
1
+
2
+ # How to Style the Gradio Dataframe
3
+
4
+ Tags: DATAFRAME, STYLE, COLOR
5
+
6
+ ## Introduction
7
+
8
+ Data visualization is a crucial aspect of data analysis and machine learning. The Gradio `DataFrame` component is a popular way to display tabular data (particularly data in the form of a `pandas` `DataFrame` object) within a web application.
9
+
10
+ This post will explore the recent enhancements in Gradio that allow users to integrate the styling options of pandas, e.g. adding colors to the DataFrame component, or setting the display precision of numbers.
11
+
12
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)
13
+
14
+ Let's dive in!
15
+
16
+ **Prerequisites**: We'll be using the `gradio.Blocks` class in our examples.
17
+ You can [read the Guide to Blocks first](https://gradio.app/blocks-and-event-listeners) if you are not already familiar with it. Also, please make sure you are using the **latest version** of Gradio: `pip install --upgrade gradio`.
18
+
19
+
20
+ ## Overview
21
+
22
+ The Gradio `DataFrame` component now supports values of type `Styler` from the `pandas` library. This allows us to reuse the rich existing API and documentation of the `Styler` class instead of inventing a new style format on our own. Here's a complete example of how it looks:
23
+
24
+ ```python
25
+ import pandas as pd
26
+ import gradio as gr
27
+
28
+ # Creating a sample dataframe
29
+ df = pd.DataFrame({
30
+ "A" : [14, 4, 5, 4, 1],
31
+ "B" : [5, 2, 54, 3, 2],
32
+ "C" : [20, 20, 7, 3, 8],
33
+ "D" : [14, 3, 6, 2, 6],
34
+ "E" : [23, 45, 64, 32, 23]
35
+ })
36
+
37
+ # Applying style to highlight the maximum value in each column
38
+ styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
39
+
40
+ # Displaying the styled dataframe in Gradio
41
+ with gr.Blocks() as demo:
42
+ gr.DataFrame(styler)
43
+
44
+ demo.launch()
45
+ ```
46
+
47
+ The `Styler` class can be used to apply conditional formatting and styling to dataframes, making them more visually appealing and interpretable. You can highlight certain values, apply gradients, or even use custom CSS to style the DataFrame. The `Styler` object is applied to a DataFrame and returns a new object with the relevant styling properties, which can then be previewed directly or rendered dynamically in a Gradio interface.
48
+
49
+ To read more about the Styler object, read the official `pandas` documentation at: https://pandas.pydata.org/docs/user_guide/style.html
50
+
51
+ Below, we'll explore a few examples:
52
+
53
+ ## Highlighting Cells
54
+
55
+ Ok, so let's revisit the previous example. We start by creating a `pd.DataFrame` object and then highlight the highest value in each column with a light green color:
56
+
57
+ ```python
58
+ import pandas as pd
59
+
60
+ # Creating a sample dataframe
61
+ df = pd.DataFrame({
62
+ "A" : [14, 4, 5, 4, 1],
63
+ "B" : [5, 2, 54, 3, 2],
64
+ "C" : [20, 20, 7, 3, 8],
65
+ "D" : [14, 3, 6, 2, 6],
66
+ "E" : [23, 45, 64, 32, 23]
67
+ })
68
+
69
+ # Applying style to highlight the maximum value in each column
70
+ styler = df.style.highlight_max(color = 'lightgreen', axis = 0)
71
+ ```
72
+
73
+ Now, we simply pass this object into the Gradio `DataFrame` component, and we can visualize our colorful table of data in 4 lines of Python:
74
+
75
+ ```python
76
+ import gradio as gr
77
+
78
+ with gr.Blocks() as demo:
79
+ gr.Dataframe(styler)
80
+
81
+ demo.launch()
82
+ ```
83
+
84
+ Here's how it looks:
85
+
86
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-highlight.png)
87
+
88
+ ## Font Colors
89
+
90
+ Apart from highlighting cells, you might want to color specific text within the cells. Here's how you can change text colors for certain columns:
91
+
92
+ ```python
93
+ import pandas as pd
94
+ import gradio as gr
95
+
96
+ # Creating a sample dataframe
97
+ df = pd.DataFrame({
98
+ "A" : [14, 4, 5, 4, 1],
99
+ "B" : [5, 2, 54, 3, 2],
100
+ "C" : [20, 20, 7, 3, 8],
101
+ "D" : [14, 3, 6, 2, 6],
102
+ "E" : [23, 45, 64, 32, 23]
103
+ })
104
+
105
+ # Function to apply text color
106
+ def highlight_cols(x):
107
+ df = x.copy()
108
+ df.loc[:, :] = 'color: purple'
109
+ df[['B', 'C', 'E']] = 'color: green'
110
+ return df
111
+
112
+ # Applying the style function
113
+ s = df.style.apply(highlight_cols, axis = None)
114
+
115
+ # Displaying the styled dataframe in Gradio
116
+ with gr.Blocks() as demo:
117
+ gr.DataFrame(s)
118
+
119
+ demo.launch()
120
+ ```
121
+
122
+ In this script, we define a custom function `highlight_cols` that changes the text color to purple for all cells, but overrides this with green for columns `B`, `C`, and `E`. Here's how it looks:
123
+
124
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-color.png)
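Because we pass `axis=None`, `Styler.apply` hands the whole DataFrame to the style function and expects back a same-shaped frame of CSS strings. You can check that contract directly without rendering anything — here's a slight variant of the function that builds the string frame explicitly (column names mirror the example above):

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 2], "B": [3, 4], "C": [5, 6], "D": [7, 8], "E": [9, 10]
})

def highlight_cols(x):
    # Start with purple everywhere, then override the selected columns with green
    styles = pd.DataFrame('color: purple', index=x.index, columns=x.columns)
    styles[['B', 'C', 'E']] = 'color: green'
    return styles

out = highlight_cols(df)
print(out.shape == df.shape)  # True: same shape as the data
print(out.loc[0, "A"])        # color: purple
print(out.loc[0, "B"])        # color: green
```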
125
+
126
+ ## Display Precision
127
+
128
+ Sometimes, the data you are dealing with might have long floating-point numbers, and you may want to display only a fixed number of decimals for simplicity. The pandas `Styler` object allows you to format the precision of the numbers displayed. Here's how you can do this:
129
+
130
+ ```python
131
+ import pandas as pd
132
+ import gradio as gr
133
+
134
+ # Creating a sample dataframe with floating numbers
135
+ df = pd.DataFrame({
136
+ "A" : [14.12345, 4.23456, 5.34567, 4.45678, 1.56789],
137
+ "B" : [5.67891, 2.78912, 54.89123, 3.91234, 2.12345],
138
+ # ... other columns
139
+ })
140
+
141
+ # Setting the precision of numbers to 2 decimal places
142
+ s = df.style.format("{:.2f}")
143
+
144
+ # Displaying the styled dataframe in Gradio
145
+ with gr.Blocks() as demo:
146
+ gr.DataFrame(s)
147
+
148
+ demo.launch()
149
+ ```
150
+
151
+ In this script, the `format` method of the `Styler` object is used to set the precision of numbers to two decimal places. Much cleaner now:
152
+
153
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/df-precision.png)
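`Styler.format` accepts standard Python format-spec strings applied per cell, so you can preview exactly what any value will render as using plain string formatting:

```python
spec = "{:.2f}"  # two decimal places, the same spec passed to df.style.format above

print(spec.format(14.12345))  # 14.12
print(spec.format(54.89123))  # 54.89
print(spec.format(2.0))       # 2.00
```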
154
+
155
+
156
+ ## Note about Interactivity
157
+
158
+ One thing to keep in mind is that the gradio `DataFrame` component only accepts `Styler` objects when it is non-interactive (i.e. in "static" mode). If the `DataFrame` component is interactive, then the styling information is ignored and the raw table values are shown instead.
159
+
160
+ The `DataFrame` component is by default non-interactive, unless it is used as an input to an event, in which case you can force the component to be non-interactive by setting the `interactive` prop like this:
161
+
162
+ ```python
163
+ c = gr.DataFrame(styler, interactive=False)
164
+ ```
165
+
166
+ ## Conclusion 🎉
167
+
168
+ This is just a taste of what's possible using the `gradio.DataFrame` component with the `Styler` class from `pandas`. Try it out and let us know what you think!
7 Tabular Data Science and Plots/using-gradio-for-tabular-workflows.md ADDED
@@ -0,0 +1,104 @@
1
+
2
+ # Using Gradio for Tabular Data Science Workflows
3
+
4
+ Related spaces: https://huggingface.co/spaces/scikit-learn/gradio-skops-integration, https://huggingface.co/spaces/scikit-learn/tabular-playground, https://huggingface.co/spaces/merve/gradio-analysis-dashboard
5
+
6
+ ## Introduction
7
+
8
+ Tabular data science is the most widely used domain of machine learning, with problems ranging from customer segmentation to churn prediction. Throughout various stages of the tabular data science workflow, communicating your work to stakeholders or clients can be cumbersome, which prevents data scientists from focusing on what matters, such as data analysis and model building. Data scientists can end up spending hours building a dashboard that takes in a dataframe and returns plots, or returns a prediction or a plot of clusters in a dataset. In this guide, we'll go through how to use `gradio` to improve your data science workflows. We will also talk about how to use `gradio` and [skops](https://skops.readthedocs.io/en/stable/) to build interfaces with only one line of code!
9
+
10
+ ### Prerequisites
11
+
12
+ Make sure you have the `gradio` Python package already [installed](/getting_started).
13
+
14
+ ## Let's Create a Simple Interface!
15
+
16
+ We will take a look at how we can create a simple UI that predicts failures based on product information.
17
+
18
+ ```python
19
+ import gradio as gr
20
+ import pandas as pd
21
+ import joblib
22
+ import datasets
23
+
24
+
25
+ inputs = [gr.Dataframe(row_count = (2, "dynamic"), col_count=(4,"dynamic"), label="Input Data", interactive=1)]
26
+
27
+ outputs = [gr.Dataframe(row_count = (2, "dynamic"), col_count=(1, "fixed"), label="Predictions", headers=["Failures"])]
28
+
29
+ model = joblib.load("model.pkl")
30
+
31
+ # we will give our dataframe as example
32
+ df = datasets.load_dataset("merve/supersoaker-failures")
33
+ df = df["train"].to_pandas()
34
+
35
+ def infer(input_dataframe):
36
+ return pd.DataFrame(model.predict(input_dataframe))
37
+
38
+ gr.Interface(fn = infer, inputs = inputs, outputs = outputs, examples = [[df.head(2)]]).launch()
39
+ ```
40
+
41
+ Let's break down the above code.
42
+
43
+ - `fn`: the inference function that takes input dataframe and returns predictions.
44
+ - `inputs`: the component we take our input with. We define our input as a dataframe with 2 rows and 4 columns, which initially will look like an empty dataframe with the aforementioned shape. When `row_count` is set to `dynamic`, the dataset you input doesn't have to conform to the component's pre-defined shape.
45
+ - `outputs`: the dataframe component that stores outputs. This UI can take single or multiple samples to infer, and returns 0 or 1 for each sample in one column, so we set `row_count` to 2 and `col_count` to 1 above. `headers` is a list of header names for the dataframe.
46
+ - `examples`: you can pass the input either by dragging and dropping a CSV file or through a pandas DataFrame in `examples`; its headers will be automatically read by the interface.
47
+
48
+ We will now create an example for a minimal data visualization dashboard. You can find a more comprehensive version in the related Spaces.
49
+
50
+ <gradio-app space="gradio/tabular-playground"></gradio-app>
51
+
52
+ ```python
53
+ import gradio as gr
54
+ import pandas as pd
55
+ import datasets
56
+ import seaborn as sns
57
+ import matplotlib.pyplot as plt
58
+
59
+ df = datasets.load_dataset("merve/supersoaker-failures")
60
+ df = df["train"].to_pandas()
61
+ df.dropna(axis=0, inplace=True)
62
+
63
+ def plot(df):
64
+ plt.scatter(df.measurement_13, df.measurement_15, c = df.loading,alpha=0.5)
65
+ plt.savefig("scatter.png")
66
+ df['failure'].value_counts().plot(kind='bar')
67
+ plt.savefig("bar.png")
68
+ sns.heatmap(df.select_dtypes(include="number").corr())
69
+ plt.savefig("corr.png")
70
+ plots = ["corr.png","scatter.png", "bar.png"]
71
+ return plots
72
+
73
+ inputs = [gr.Dataframe(label="Supersoaker Production Data")]
74
+ outputs = [gr.Gallery(label="Profiling Dashboard", columns=(1,3))]
75
+
76
+ gr.Interface(plot, inputs=inputs, outputs=outputs, examples=[df.head(100)], title="Supersoaker Failures Analysis Dashboard").launch()
77
+ ```
78
+
79
+ <gradio-app space="gradio/gradio-analysis-dashboard-minimal"></gradio-app>
80
+
81
+ We will use the same dataset we used to train our model, but we will make a dashboard to visualize it this time.
82
+
83
+ - `fn`: The function that will create plots based on data.
84
+ - `inputs`: We use the same `Dataframe` component we used above.
85
+ - `outputs`: The `Gallery` component is used to keep our visualizations.
86
+ - `examples`: We will have the dataset itself as the example.
87
+
88
+ ## Easily load tabular data interfaces with one line of code using skops
89
+
90
+ `skops` is a library built on top of `huggingface_hub` and `sklearn`. With the recent `gradio` integration of `skops`, you can build tabular data interfaces with one line of code!
91
+
92
+ ```python
93
+ import gradio as gr
94
+
95
+ # title and description are optional
96
+ title = "Supersoaker Defective Product Prediction"
97
+ description = "This model predicts Supersoaker production line failures. Drag and drop any slice from dataset or edit values as you wish in below dataframe component."
98
+
99
+ gr.load("huggingface/scikit-learn/tabular-playground", title=title, description=description).launch()
100
+ ```
101
+
102
+ <gradio-app space="gradio/gradio-skops-integration"></gradio-app>
103
+
104
+ `sklearn` models pushed to Hugging Face Hub using `skops` include a `config.json` file that contains an example input with column names and the task being solved (which can be either `tabular-classification` or `tabular-regression`). From the task type, `gradio` constructs the `Interface` and consumes the column names and the example input to build it. You can [refer to skops documentation on hosting models on Hub](https://skops.readthedocs.io/en/latest/auto_examples/plot_hf_hub.html#sphx-glr-auto-examples-plot-hf-hub-py) to learn how to push your models to the Hub using `skops`.
8 Gradio Clients and Lite/01_getting-started-with-the-python-client.md ADDED
@@ -0,0 +1,352 @@
1
+
2
+ # Getting Started with the Gradio Python client
3
+
4
+ Tags: CLIENT, API, SPACES
5
+
6
+ The Gradio Python client makes it very easy to use any Gradio app as an API. As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.
7
+
8
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/whisper-screenshot.jpg)
9
+
10
+ Using the `gradio_client` library, we can easily use the Gradio app as an API to transcribe audio files programmatically.
11
+
12
+ Here's the entire code to do it:
13
+
14
+ ```python
15
+ from gradio_client import Client, file
16
+
17
+ client = Client("abidlabs/whisper")
18
+
19
+ client.predict(
20
+ audio=file("audio_sample.wav")
21
+ )
22
+
23
+ >> "This is a test of the whisper speech recognition model."
24
+ ```
25
+
26
+ The Gradio client works with any hosted Gradio app! Although the Client is mostly used with apps hosted on [Hugging Face Spaces](https://hf.space), your app can be hosted anywhere, such as your own server.
27
+
28
+ **Prerequisites**: To use the Gradio client, you do _not_ need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.
29
+
30
+ ## Installation
31
+
32
+ If you already have a recent version of `gradio`, then the `gradio_client` is included as a dependency. But note that this documentation reflects the latest version of the `gradio_client`, so upgrade if you're not sure!
33
+
34
+ The lightweight `gradio_client` package can be installed from pip (or pip3) and is tested to work with **Python versions 3.9 or higher**:
35
+
36
+ ```bash
37
+ $ pip install --upgrade gradio_client
38
+ ```
39
+
40
+ ## Connecting to a Gradio App on Hugging Face Spaces
41
+
42
+ Start by instantiating a `Client` object and connecting it to a Gradio app that is running on Hugging Face Spaces.
43
+
44
+ ```python
45
+ from gradio_client import Client
46
+
47
+ client = Client("abidlabs/en2fr") # a Space that translates from English to French
48
+ ```
49
+
50
+ You can also connect to private Spaces by passing in your HF token with the `hf_token` parameter. You can get your HF token here: https://huggingface.co/settings/tokens
51
+
52
+ ```python
53
+ from gradio_client import Client
54
+
55
+ client = Client("abidlabs/my-private-space", hf_token="...")
56
+ ```
57
+
58
+
59
+ ## Duplicating a Space for private use
60
+
61
+ While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,
62
+ and then use it to make as many requests as you'd like!
63
+
64
+ The `gradio_client` includes a class method: `Client.duplicate()` to make this process simple (you'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens) or be logged in using the Hugging Face CLI):
65
+
66
+ ```python
67
+ import os
68
+ from gradio_client import Client, file
69
+
70
+ HF_TOKEN = os.environ.get("HF_TOKEN")
71
+
72
+ client = Client.duplicate("abidlabs/whisper", hf_token=HF_TOKEN)
73
+ client.predict(file("audio_sample.wav"))
74
+
75
+ >> "This is a test of the whisper speech recognition model."
76
+ ```
77
+
78
+ If you have previously duplicated a Space, re-running `duplicate()` will _not_ create a new Space. Instead, the Client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate()` method multiple times.
79
+
80
+ **Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 1 hour of inactivity. You can also set the hardware using the `hardware` parameter of `duplicate()`.
81
+
82
+ ## Connecting a general Gradio app
83
+
84
+ If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:
85
+
86
+ ```python
87
+ from gradio_client import Client
88
+
89
+ client = Client("https://bec81a83-5b5c-471e.gradio.live")
90
+ ```
91
+
92
+ ## Connecting to a Gradio app with auth
93
+
94
+ If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as a tuple to the `auth` argument of the `Client` class:
95
+
96
+ ```python
97
+ from gradio_client import Client
98
+
99
+ Client(
100
+ space_name,
101
+ auth=[username, password]
102
+ )
103
+ ```
104
+
105
+
106
+ ## Inspecting the API endpoints
107
+
108
+ Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client.view_api()` method. For the Whisper Space, we see the following:
109
+
110
+ ```bash
111
+ Client.predict() Usage Info
112
+ ---------------------------
113
+ Named API endpoints: 1
114
+
115
+ - predict(audio, api_name="/predict") -> output
116
+ Parameters:
117
+ - [Audio] audio: filepath (required)
118
+ Returns:
119
+ - [Textbox] output: str
120
+ ```
121
+
122
+ We see that this Space has 1 API endpoint, and the listing shows us how to use it to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter `audio` of type `filepath` (a path or URL to an audio file).
123
+
124
+ We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available.
125
+
126
+ ## The "View API" Page
127
+
128
+ As an alternative to running the `.view_api()` method, you can click on the "Use via API" link in the footer of the Gradio app, which shows us the same information, along with example usage.
129
+
130
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)
131
+
132
+ The View API page also includes an "API Recorder" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the Python Client.
133
+
134
+ ## Making a prediction
135
+
136
+ The simplest way to make a prediction is simply to call the `.predict()` function with the appropriate arguments:
137
+
138
+ ```python
139
+ from gradio_client import Client
140
+
141
+ client = Client("abidlabs/en2fr")
142
+ client.predict("Hello", api_name="/predict")
143
+
144
+ >> Bonjour
145
+ ```
146
+
147
+ If there are multiple parameters, then you should pass them as separate arguments to `.predict()`, like this:
148
+
149
+ ```python
150
+ from gradio_client import Client
151
+
152
+ client = Client("gradio/calculator")
153
+ client.predict(4, "add", 5)
154
+
155
+ >> 9.0
156
+ ```
157
+
158
+ It is recommended to provide keyword arguments instead of positional arguments:
159
+
160
+
161
+ ```python
162
+ from gradio_client import Client
163
+
164
+ client = Client("gradio/calculator")
165
+ client.predict(num1=4, operation="add", num2=5)
166
+
167
+ >> 9.0
168
+ ```
169
+
170
+ This allows you to take advantage of default arguments. For example, this Space includes the default value for the Slider component so you do not need to provide it when accessing it with the client.
171
+
172
+ ```python
173
+ from gradio_client import Client
174
+
175
+ client = Client("abidlabs/image_generator")
176
+ client.predict(text="an astronaut riding a camel")
177
+ ```
178
+
179
+ The default value is the initial value of the corresponding Gradio component. If the component does not have an initial value, but if the corresponding argument in the predict function has a default value of `None`, then that parameter is also optional in the client. Of course, if you'd like to override it, you can include it as well:
180
+
181
+ ```python
182
+ from gradio_client import Client
183
+
184
+ client = Client("abidlabs/image_generator")
185
+ client.predict(text="an astronaut riding a camel", steps=25)
186
+ ```
187
+
188
+ For providing files or URLs as inputs, you should pass in the filepath or URL to the file enclosed within `gradio_client.file()`. This takes care of uploading the file to the Gradio server and ensures that the file is preprocessed correctly:
189
+
190
+ ```python
191
+ from gradio_client import Client, file
192
+
193
+ client = Client("abidlabs/whisper")
194
+ client.predict(
195
+ audio=file("https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3")
196
+ )
197
+
198
+ >> "My thought I have nobody by a beauty and will as you poured. Mr. Rochester is serve in that so don't find simpus, and devoted abode, to at might in a r—"
199
+ ```
200
+
201
+ ## Running jobs asynchronously
202
+
203
+ One should note that `.predict()` is a _blocking_ operation, as it waits for the operation to complete before returning the prediction.
204
+
205
+ In many cases, you may be better off letting the job run in the background until you need the results of the prediction. You can do this by creating a `Job` instance using the `.submit()` method, and then later calling `.result()` on the job to get the result. For example:
206
+
207
+ ```python
208
+ from gradio_client import Client
209
+
210
+ client = Client("abidlabs/en2fr")
211
+ job = client.submit("Hello", api_name="/predict") # This is not blocking
212
+
213
+ # Do something else
214
+
215
+ job.result() # This is blocking
216
+
217
+ >> Bonjour
218
+ ```
219
+
220
+ ## Adding callbacks
221
+
222
+ Alternatively, one can add one or more callbacks to perform actions after the job has completed running, like this:
223
+
224
+ ```python
225
+ from gradio_client import Client
226
+
227
+ def print_result(x):
228
+ print(f"The translated result is: {x}")
229
+
230
+ client = Client("abidlabs/en2fr")
231
+
232
+ job = client.submit("Hello", api_name="/predict", result_callbacks=[print_result])
233
+
234
+ # Do something else
235
+
236
+ >> The translated result is: Bonjour
237
+
238
+ ```
239
+
240
+ ## Status
241
+
242
+ The `Job` object also allows you to get the status of the running job by calling the `.status()` method. This returns a `StatusUpdate` object with the following attributes: `code` (the status code, one of a set of defined strings representing the status. See the `utils.Status` class), `rank` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (the time that the status was generated).
243
+
244
+ ```py
245
+ from gradio_client import Client
246
+
247
+ client = Client(src="gradio/calculator")
248
+ job = client.submit(5, "add", 4, api_name="/predict")
249
+ job.status()
250
+
251
+ >> <Status.STARTING: 'STARTING'>
252
+ ```
253
+
254
+ _Note_: The `Job` class also has a `.done()` instance method which returns a boolean indicating whether the job has completed.
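Since a `Job` mirrors the interface of Python's `concurrent.futures.Future`, the `.done()` polling loop follows the familiar pattern. Here is a self-contained sketch of that pattern, using a thread-pool future as a stand-in for a real `Job` so it runs without contacting a Space:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_translate(text):
    # Stand-in for a remote prediction that takes a moment to finish.
    time.sleep(0.2)
    return "Bonjour"

with ThreadPoolExecutor() as pool:
    job = pool.submit(slow_translate, "Hello")  # analogous to client.submit(...)
    while not job.done():                       # same check as Job.done()
        time.sleep(0.05)
    result = job.result()

print(result)  # Bonjour
```

With a real `Client`, `client.submit(...)` returns the `Job`, and the same loop applies.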
255
+
256
+ ## Cancelling Jobs
257
+
258
+ The `Job` class also has a `.cancel()` instance method that cancels jobs that have been queued but not started. For example, if you run:
259
+
260
+ ```py
261
+ client = Client("abidlabs/whisper")
262
+ job1 = client.submit(file("audio_sample1.wav"))
263
+ job2 = client.submit(file("audio_sample2.wav"))
264
+ job1.cancel() # will return False, assuming the job has started
265
+ job2.cancel() # will return True, indicating that the job has been canceled
266
+ ```
267
+
268
+ If the first job has started processing, then it will not be canceled. If the second job
269
+ has not yet started, it will be successfully canceled and removed from the queue.
270
+
271
+ ## Generator Endpoints
272
+
273
+ Some Gradio API endpoints do not return a single value, rather they return a series of values. You can get the series of values that have been returned at any time from such a generator endpoint by running `job.outputs()`:
274
+
275
+ ```py
276
+ from gradio_client import Client
+ import time
277
+
278
+ client = Client(src="gradio/count_generator")
279
+ job = client.submit(3, api_name="/count")
280
+ while not job.done():
281
+ time.sleep(0.1)
282
+ job.outputs()
283
+
284
+ >> ['0', '1', '2']
285
+ ```
286
+
287
+ Note that running `job.result()` on a generator endpoint only gives you the _first_ value returned by the endpoint.
288
+
289
+ The `Job` object is also iterable, which means you can use it to display the results of a generator function as they are returned from the endpoint. Here's the equivalent example using the `Job` as a generator:
290
+
291
+ ```py
292
+ from gradio_client import Client
293
+
294
+ client = Client(src="gradio/count_generator")
295
+ job = client.submit(3, api_name="/count")
296
+
297
+ for o in job:
298
+ print(o)
299
+
300
+ >> 0
301
+ >> 1
302
+ >> 2
303
+ ```
304
+
305
+ You can also cancel jobs that have iterative outputs, in which case the job will finish as soon as the current iteration finishes running.
306
+
307
+ ```py
308
+ from gradio_client import Client
309
+ import time
310
+
311
+ client = Client("abidlabs/test-yield")
312
+ job = client.submit("abcdef")
313
+ time.sleep(3)
314
+ job.cancel() # job cancels after 2 iterations
315
+ ```
316
+
317
+ ## Demos with Session State
318
+
319
+ Gradio demos can include [session state](https://www.gradio.app/guides/state-in-blocks), which provides a way for demos to persist information from user interactions within a page session.
320
+
321
+ For example, consider the following demo, which maintains a list of words that a user has submitted in a `gr.State` component. When a user submits a new word, it is added to the state, and the number of previous occurrences of that word is displayed:
322
+
323
+ ```python
324
+ import gradio as gr
325
+
326
+ def count(word, list_of_words):
327
+ return list_of_words.count(word), list_of_words + [word]
328
+
329
+ with gr.Blocks() as demo:
330
+ words = gr.State([])
331
+ textbox = gr.Textbox()
332
+ number = gr.Number()
333
+ textbox.submit(count, inputs=[textbox, words], outputs=[number, words])
334
+
335
+ demo.launch()
336
+ ```
337
+
338
+ If you were to connect to this Gradio app using the Python Client, you would notice that the API information only shows a single input and output:
339
+
340
+ ```bash
341
+ Client.predict() Usage Info
342
+ ---------------------------
343
+ Named API endpoints: 1
344
+
345
+ - predict(word, api_name="/count") -> value_31
346
+ Parameters:
347
+ - [Textbox] word: str (required)
348
+ Returns:
349
+ - [Number] value_31: float
350
+ ```
351
+
352
+ That is because the Python client handles state automatically for you -- as you make a series of requests, the returned state from one request is stored internally and automatically supplied for the subsequent request. If you'd like to reset the state, you can do that by calling `Client.reset_session()`.
8 Gradio Clients and Lite/02_getting-started-with-the-js-client.md ADDED
@@ -0,0 +1,328 @@
1
+
2
+ # Getting Started with the Gradio JavaScript Client
3
+
4
+ Tags: CLIENT, API, SPACES
5
+
6
+ The Gradio JavaScript Client makes it very easy to use any Gradio app as an API. As an example, consider this [Hugging Face Space that transcribes audio files](https://huggingface.co/spaces/abidlabs/whisper) that are recorded from the microphone.
7
+
8
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/whisper-screenshot.jpg)
9
+
10
+ Using the `@gradio/client` library, we can easily use the Gradio app as an API to transcribe audio files programmatically.
11
+
12
+ Here's the entire code to do it:
13
+
14
+ ```js
15
+ import { Client } from "@gradio/client";
16
+
17
+ const response = await fetch(
18
+ "https://github.com/audio-samples/audio-samples.github.io/raw/master/samples/wav/ted_speakers/SalmanKhan/sample-1.wav"
19
+ );
20
+ const audio_file = await response.blob();
21
+
22
+ const app = await Client.connect("abidlabs/whisper");
23
+ const transcription = await app.predict("/predict", [audio_file]);
24
+
25
+ console.log(transcription.data);
26
+ // [ "I said the same phrase 30 times." ]
27
+ ```
28
+
29
+ The Gradio Client works with any hosted Gradio app, whether it be an image generator, a text summarizer, a stateful chatbot, a tax calculator, or anything else! The Gradio Client is mostly used with apps hosted on [Hugging Face Spaces](https://hf.space), but your app can be hosted anywhere, such as your own server.
30
+
31
+ **Prerequisites**: To use the Gradio client, you do _not_ need to know the `gradio` library in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.
32
+
33
+ ## Installation via npm
34
+
35
+ Install the `@gradio/client` package to interact with Gradio APIs from Node.js (version >= 18.0.0) or in browser-based projects. Use npm or any compatible package manager:
36
+
37
+ ```bash
38
+ npm i @gradio/client
39
+ ```
40
+
41
+ This command adds `@gradio/client` to your project dependencies, allowing you to import it in your JavaScript or TypeScript files.
42
+
43
+ ## Installation via CDN
44
+
45
+ For quick addition to your web project, you can use the jsDelivr CDN to load the latest version of @gradio/client directly into your HTML:
46
+
47
+ ```html
48
+ <script src="https://cdn.jsdelivr.net/npm/@gradio/client/dist/index.min.js"></script>
49
+ ```
50
+
51
+ Be sure to add this to the `<head>` of your HTML. This will install the latest version, but we advise hardcoding the version in production. You can find all available versions [here](https://www.jsdelivr.com/package/npm/@gradio/client). This approach is ideal for experimental or prototyping purposes, though it has some limitations.
52
+
53
+ ## Connecting to a running Gradio App
54
+
55
+ Start by instantiating a `Client` instance and connecting it to a Gradio app that is running on Hugging Face Spaces or anywhere else on the web.
56
+
57
+ ## Connecting to a Hugging Face Space
58
+
59
+ ```js
60
+ import { Client } from "@gradio/client";
61
+
62
+ const app = await Client.connect("abidlabs/en2fr"); // a Space that translates from English to French
63
+ ```
64
+
65
+ You can also connect to private Spaces by passing in your HF token with the `hf_token` property of the options parameter. You can get your HF token here: https://huggingface.co/settings/tokens
66
+
67
+ ```js
68
+ import { Client } from "@gradio/client";
69
+
70
+ const app = await Client.connect("abidlabs/my-private-space", { hf_token: "hf_..." });
71
+ ```
72
+
73
+ ## Duplicating a Space for private use
74
+
75
+ While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space, and then use it to make as many requests as you'd like! You'll need to pass in your [Hugging Face token](https://huggingface.co/settings/tokens).
76
+
77
+ `Client.duplicate` is almost identical to `Client.connect`; the only difference is under the hood:
78
+
79
+ ```js
80
+ import { Client } from "@gradio/client";
81
+
82
+ const response = await fetch(
83
+ "https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
84
+ );
85
+ const audio_file = await response.blob();
86
+
87
+ const app = await Client.duplicate("abidlabs/whisper", { hf_token: "hf_..." });
88
+ const transcription = await app.predict("/predict", [audio_file]);
89
+ ```
90
+
91
+ If you have previously duplicated a Space, re-running `Client.duplicate` will _not_ create a new Space. Instead, the client will attach to the previously-created Space. So it is safe to re-run the `Client.duplicate` method multiple times with the same space.
92
+
93
+ **Note:** if the original Space uses GPUs, your private Space will as well, and your Hugging Face account will get billed based on the price of the GPU. To minimize charges, your Space will automatically go to sleep after 5 minutes of inactivity. You can also set the hardware using the `hardware` and `timeout` properties of `duplicate`'s options object like this:
94
+
95
+ ```js
96
+ import { Client } from "@gradio/client";
97
+
98
+ const app = await Client.duplicate("abidlabs/whisper", {
99
+ hf_token: "hf_...",
100
+ timeout: 60,
101
+ hardware: "a10g-small"
102
+ });
103
+ ```
104
+
105
+ ## Connecting a general Gradio app
106
+
107
+ If your app is running somewhere else, just provide the full URL instead, including the "http://" or "https://". Here's an example of making predictions to a Gradio app that is running on a share URL:
108
+
109
+ ```js
110
+ import { Client } from "@gradio/client";
111
+
112
+ const app = await Client.connect("https://bec81a83-5b5c-471e.gradio.live");
113
+ ```
114
+
115
+ ## Connecting to a Gradio app with auth
116
+
117
+ If the Gradio application you are connecting to [requires a username and password](/guides/sharing-your-app#authentication), then provide them as an array in the `auth` property of the options object:
118
+
119
+ ```js
120
+ import { Client } from "@gradio/client";
121
+
122
+ Client.connect(
123
+ space_name,
124
+ { auth: [username, password] }
125
+ )
126
+ ```
127
+
128
+
129
+ ## Inspecting the API endpoints
130
+
131
+ Once you have connected to a Gradio app, you can view the APIs that are available to you by calling the `Client`'s `view_api` method.
132
+
133
+ For the Whisper Space, we can do this:
134
+
135
+ ```js
136
+ import { Client } from "@gradio/client";
137
+
138
+ const app = await Client.connect("abidlabs/whisper");
139
+
140
+ const app_info = await app.view_api();
141
+
142
+ console.log(app_info);
143
+ ```
144
+
145
+ And we will see the following:
146
+
147
+ ```json
148
+ {
149
+ "named_endpoints": {
150
+ "/predict": {
151
+ "parameters": [
152
+ {
153
+ "label": "text",
154
+ "component": "Textbox",
155
+ "type": "string"
156
+ }
157
+ ],
158
+ "returns": [
159
+ {
160
+ "label": "output",
161
+ "component": "Textbox",
162
+ "type": "string"
163
+ }
164
+ ]
165
+ }
166
+ },
167
+ "unnamed_endpoints": {}
168
+ }
169
+ ```
170
+
171
+ This shows us that we have 1 API endpoint in this Space, and how to use it to make a prediction: we should call the `.predict()` method (which we will explore below), providing a parameter of type `string`.
172
+
173
+ We should also provide the `api_name='/predict'` argument to the `predict()` method. Although this isn't necessary if a Gradio app has only 1 named endpoint, it does allow us to call different endpoints in a single app if they are available. If an app has unnamed API endpoints, these can also be displayed by running `.view_api(all_endpoints=True)`.
174
+
175
+ ## The "View API" Page
176
+
177
+ As an alternative to running the `.view_api()` method, you can click on the "Use via API" link in the footer of the Gradio app, which shows us the same information, along with example usage.
178
+
179
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/view-api.png)
180
+
181
+ The View API page also includes an "API Recorder" that lets you interact with the Gradio UI normally and converts your interactions into the corresponding code to run with the JS Client.
182
+
183
+
184
+ ## Making a prediction
185
+
186
+ The simplest way to make a prediction is simply to call the `.predict()` method with the appropriate arguments:
187
+
188
+ ```js
189
+ import { Client } from "@gradio/client";
190
+
191
+ const app = await Client.connect("abidlabs/en2fr");
192
+ const result = await app.predict("/predict", ["Hello"]);
193
+ ```
194
+
195
+ If there are multiple parameters, then you should pass them as an array to `.predict()`, like this:
196
+
197
+ ```js
198
+ import { Client } from "@gradio/client";
199
+
200
+ const app = await Client.connect("gradio/calculator");
201
+ const result = await app.predict("/predict", [4, "add", 5]);
202
+ ```
203
+
204
+ For certain inputs, such as images, you should pass in a `Buffer`, `Blob` or `File`, depending on what is most convenient. In Node.js, this would be a `Buffer` or `Blob`; in a browser environment, a `Blob` or `File`.
205
+
206
+ ```js
207
+ import { Client } from "@gradio/client";
208
+
209
+ const response = await fetch(
210
+ "https://audio-samples.github.io/samples/mp3/blizzard_unconditional/sample-0.mp3"
211
+ );
212
+ const audio_file = await response.blob();
213
+
214
+ const app = await Client.connect("abidlabs/whisper");
215
+ const result = await app.predict("/predict", [audio_file]);
216
+ ```
217
+
218
+ ## Using events
219
+
220
+ If the API you are working with can return results over time, or you wish to access information about the status of a job, you can use the iterable interface for more flexibility. This is especially useful for iterative or generator endpoints that produce a series of values over time as discrete responses.
221
+
222
+ ```js
223
+ import { Client } from "@gradio/client";
224
+
225
+ function log_result(payload) {
226
+ const {
227
+ data: [translation]
228
+ } = payload;
229
+
230
+ console.log(`The translated result is: ${translation}`);
231
+ }
232
+
233
+ const app = await Client.connect("abidlabs/en2fr");
234
+ const job = app.submit("/predict", ["Hello"]);
235
+
236
+ for await (const message of job) {
237
+ log_result(message);
238
+ }
239
+ ```
240
+
241
+ ## Status
242
+
243
+ The event interface also allows you to get the status of the running job by instantiating the client with the `events` option, passing `status` and `data` as an array:
244
+
245
+
246
+ ```ts
247
+ import { Client } from "@gradio/client";
248
+
249
+ const app = await Client.connect("abidlabs/en2fr", {
250
+ events: ["status", "data"]
251
+ });
252
+ ```
253
+
254
+ This ensures that status messages are also reported to the client.
255
+
256
+ `status`es are returned as an object with the following attributes: `status` (a human-readable status of the current job, `"pending" | "generating" | "complete" | "error"`), `code` (the detailed Gradio code for the job), `position` (the current position of this job in the queue), `queue_size` (the total queue size), `eta` (estimated time this job will complete), `success` (a boolean representing whether the job completed successfully), and `time` (a `Date` object detailing the time that the status was generated).
257
+
258
+ ```js
259
+ import { Client } from "@gradio/client";
260
+
261
+ function log_status(status) {
262
+ console.log(
263
+ `The current status for this job is: ${JSON.stringify(status, null, 2)}.`
264
+ );
265
+ }
266
+
267
+ const app = await Client.connect("abidlabs/en2fr", {
268
+ events: ["status", "data"]
269
+ });
270
+ const job = app.submit("/predict", ["Hello"]);
271
+
272
+ for await (const message of job) {
273
+ if (message.type === "status") {
274
+ log_status(message);
275
+ }
276
+ }
277
+ ```
278
+
279
+ ## Cancelling Jobs
280
+
281
+ The job instance also has a `.cancel()` method that cancels jobs that have been queued but not started. For example, if you run:
282
+
283
+ ```js
284
+ import { Client } from "@gradio/client";
285
+
286
+ const app = await Client.connect("abidlabs/en2fr");
287
+ const job_one = app.submit("/predict", ["Hello"]);
288
+ const job_two = app.submit("/predict", ["Friends"]);
289
+
290
+ job_one.cancel();
291
+ job_two.cancel();
292
+ ```
293
+
294
+ If the first job has started processing, then it will not be canceled but the client will no longer listen for updates (throwing away the job). If the second job has not yet started, it will be successfully canceled and removed from the queue.
295
+
296
+ ## Generator Endpoints
297
+
298
+ Some Gradio API endpoints do not return a single value, rather they return a series of values. You can listen for these values in real time using the iterable interface:
299
+
300
+ ```js
301
+ import { Client } from "@gradio/client";
302
+
303
+ const app = await Client.connect("gradio/count_generator");
304
+ const job = app.submit(0, [9]);
305
+
306
+ for await (const message of job) {
307
+ console.log(message.data);
308
+ }
309
+ ```
310
+
311
+ This will log out the values as they are generated by the endpoint.
312
+
313
+ You can also cancel jobs that have iterative outputs, in which case the job will finish immediately.
314
+
315
+ ```js
316
+ import { Client } from "@gradio/client";
317
+
318
+ const app = await Client.connect("gradio/count_generator");
319
+ const job = app.submit(0, [9]);
320
+
321
+ for await (const message of job) {
322
+ console.log(message.data);
323
+ }
324
+
325
+ setTimeout(() => {
326
+ job.cancel();
327
+ }, 3000);
328
+ ```
8 Gradio Clients and Lite/03_querying-gradio-apps-with-curl.md ADDED
@@ -0,0 +1,304 @@
1
+
2
+ # Querying Gradio Apps with Curl
3
+
4
+ Tags: CURL, API, SPACES
5
+
6
+ It is possible to use any Gradio app as an API using cURL, the command-line tool that is pre-installed on many operating systems. This is particularly useful if you are trying to query a Gradio app from an environment other than Python or Javascript (since specialized Gradio clients exist for both [Python](/guides/getting-started-with-the-python-client) and [Javascript](/guides/getting-started-with-the-js-client)).
7
+
8
+ As an example, consider this Gradio demo that translates text from English to French: https://abidlabs-en2fr.hf.space/.
9
+
10
+ Using `curl`, we can translate text programmatically.
11
+
12
+ Here's the code to do it:
13
+
14
+ ```bash
15
+ $ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
16
+ "data": ["Hello, my friend."]
17
+ }'
18
+
19
+ >> {"event_id": $EVENT_ID}
20
+ ```
21
+
22
+ ```bash
23
+ $ curl -N https://abidlabs-en2fr.hf.space/call/predict/$EVENT_ID
24
+
25
+ >> event: complete
26
+ >> data: ["Bonjour, mon ami."]
27
+ ```
28
+
29
+
30
+ Note: making a prediction and getting a result requires two `curl` requests: a `POST` and a `GET`. The `POST` request returns an `EVENT_ID` and prints it to the console, which is used in the second `GET` request to fetch the results. You can combine these into a single command using `awk` and `read` to parse the results of the first command and pipe into the second, like this:
31
+
32
+ ```bash
33
+ $ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
34
+ "data": ["Hello, my friend."]
35
+ }' \
36
+ | awk -F'"' '{ print $4}' \
37
+ | read EVENT_ID; curl -N https://abidlabs-en2fr.hf.space/call/predict/$EVENT_ID
38
+
39
+ >> event: complete
40
+ >> data: ["Bonjour, mon ami."]
41
+ ```
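The reply of the `GET` request is a small server-sent-events stream (`event:` and `data:` lines). If you save it to a variable or file, the result line can be pulled out with ordinary text tools. Here is a sketch against a hardcoded sample reply, so it runs without a server:

```shell
REPLY='event: complete
data: ["Bonjour, mon ami."]'

# Keep only the payload of the "data:" line
RESULT=$(printf '%s\n' "$REPLY" | awk '/^data:/ { sub(/^data: /, ""); print }')
echo "$RESULT"
```

In a real call, `$REPLY` would be the output of the second `curl` request above.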
42
+
43
+ In the rest of this Guide, we'll explain these two steps in more detail and provide additional examples of querying Gradio apps with `curl`.
44
+
45
+
46
+ **Prerequisites**: For this Guide, you do _not_ need to know how to build Gradio apps in great detail. However, it is helpful to have general familiarity with Gradio's concepts of input and output components.
47
+
48
+ ## Installation
49
+
50
+ You generally don't need to install cURL, as it comes pre-installed on many operating systems. Run:
51
+
52
+ ```bash
53
+ curl --version
54
+ ```
55
+
56
+ to confirm that `curl` is installed. If it is not already installed, you can install it by visiting https://curl.se/download.html.
57
+
58
+
59
+ ## Step 0: Get the URL for your Gradio App
60
+
61
+ To query a Gradio app, you'll need its full URL. This is usually just the URL that the Gradio app is hosted on, for example: https://bec81a83-5b5c-471e.gradio.live
62
+
63
+
64
+ **Hugging Face Spaces**
65
+
66
+ However, if you are querying a Gradio app on Hugging Face Spaces, you will need to use the URL of the embedded Gradio app, not the URL of the Space webpage. For example:
67
+
68
+ ```bash
69
+ ❌ Space URL: https://huggingface.co/spaces/abidlabs/en2fr
70
+ ✅ Gradio app URL: https://abidlabs-en2fr.hf.space/
71
+ ```
72
+
73
+ You can get the Gradio app URL by clicking the "view API" link at the bottom of the page. Or, you can right-click on the page and then click on "View Frame Source" or the equivalent in your browser to view the URL of the embedded Gradio app.
74
+
75
+ While you can use any public Space as an API, you may get rate limited by Hugging Face if you make too many requests. For unlimited usage of a Space, simply duplicate the Space to create a private Space,
76
+ and then use it to make as many requests as you'd like!
77
+
78
+ Note: to query private Spaces, you will need to pass in your Hugging Face (HF) token. You can get your HF token here: https://huggingface.co/settings/tokens. In this case, you will need to include an additional header in both of your `curl` calls that we'll discuss below:
79
+
80
+ ```bash
81
+ -H "Authorization: Bearer $HF_TOKEN"
82
+ ```
83
+
84
+ Now, we are ready to make the two `curl` requests.
85
+
86
+ ## Step 1: Make a Prediction (POST)
87
+
88
+ The first of the two `curl` requests is a `POST` request that submits the input payload to the Gradio app.
89
+
90
+ The syntax of the `POST` request is as follows:
91
+
92
+ ```bash
93
+ $ curl -X POST $URL/call/$API_NAME -H "Content-Type: application/json" -d '{
94
+ "data": $PAYLOAD
95
+ }'
96
+ ```
97
+
98
+ Here:
99
+
100
+ * `$URL` is the URL of the Gradio app as obtained in Step 0
101
+ * `$API_NAME` is the name of the API endpoint for the event that you are running. You can get the API endpoint names by clicking the "view API" link at the bottom of the page.
102
+ * `$PAYLOAD` is a valid JSON data list containing the input payload, one element for each input component.
103
+
104
+ When you make this `POST` request successfully, you will get an event id that is printed to the terminal in this format:
105
+
106
+ ```bash
107
+ >> {"event_id": $EVENT_ID}
108
+ ```
109
+
110
+ This `EVENT_ID` will be needed in the subsequent `curl` request to fetch the results of the prediction.
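As a sketch of how you might capture the event id in a shell variable: the `extract_event_id` helper below is a hypothetical convenience (not part of Gradio's API), and it assumes `event_id` is the first quoted key in the response, as shown above.

```shell
# Hypothetical helper: pull the event_id out of the POST response.
# Assumes the response looks like {"event_id": "..."} with event_id first.
extract_event_id() {
  awk -F'"' '{print $4}'
}

# Sketch of usage against a live app (not run here):
# EVENT_ID=$(curl -s -X POST "$URL/call/predict" \
#   -H "Content-Type: application/json" \
#   -d '{"data": ["Hello, my friend."]}' | extract_event_id)
```

This avoids a dependency on `jq`; if you have `jq` installed, `jq -r .event_id` is a more robust alternative.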
111
+
112
+ Here are some examples of how to make the `POST` request:
113
+
114
+ **Basic Example**
115
+
116
+ Revisiting the example at the beginning of the page, here is how to make the `POST` request for a simple Gradio application that takes in a single input text component:
117
+
118
+ ```bash
119
+ $ curl -X POST https://abidlabs-en2fr.hf.space/call/predict -H "Content-Type: application/json" -d '{
120
+ "data": ["Hello, my friend."]
121
+ }'
122
+ ```
123
+
124
+ **Multiple Input Components**
125
+
126
+ This [Gradio demo](https://huggingface.co/spaces/gradio/hello_world_3) accepts three inputs: a string corresponding to the `gr.Textbox`, a boolean value corresponding to the `gr.Checkbox`, and a numerical value corresponding to the `gr.Slider`. Here is the `POST` request:
127
+
128
+ ```bash
129
+ curl -X POST https://gradio-hello-world-3.hf.space/call/predict -H "Content-Type: application/json" -d '{
130
+ "data": ["Hello", true, 5]
131
+ }'
132
+ ```
133
+
134
+ **Private Spaces**
135
+
136
+ As mentioned earlier, if you are making a request to a private Space, you will need to pass in a [Hugging Face token](https://huggingface.co/settings/tokens) that has read access to the Space. The request will look like this:
137
+
138
+ ```bash
139
+ $ curl -X POST https://private-space.hf.space/call/predict -H "Content-Type: application/json" -H "Authorization: Bearer $HF_TOKEN" -d '{
140
+ "data": ["Hello, my friend."]
141
+ }'
142
+ ```
143
+
144
+ **Files**
145
+
146
+ If you are using `curl` to query a Gradio application that requires file inputs, the files *need* to be provided as URLs, and each URL needs to be enclosed in a dictionary in this format:
147
+
148
+ ```bash
149
+ {"path": $URL}
150
+ ```
151
+
152
+ Here is an example `POST` request:
153
+
154
+ ```bash
155
+ $ curl -X POST https://gradio-image-mod.hf.space/call/predict -H "Content-Type: application/json" -d '{
156
+ "data": [{"path": "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png"}]
157
+ }'
158
+ ```
159
+
160
+
161
+ **Stateful Demos**
162
+
163
+ If your Gradio demo [persists user state](/guides/interface-state) across multiple interactions (e.g. a chatbot), you can pass in a `session_hash` alongside the `data`. Requests with the same `session_hash` are assumed to be part of the same user session. Here's how that might look:
164
+
165
+ ```bash
166
+ # These two requests will share a session
167
+
168
+ curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
169
+ "data": ["Are you sentient?"],
170
+ "session_hash": "randomsequence1234"
171
+ }'
172
+
173
+ curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
174
+ "data": ["Really?"],
175
+ "session_hash": "randomsequence1234"
176
+ }'
177
+
178
+ # This request will be treated as a new session
179
+
180
+ curl -X POST https://gradio-chatinterface-random-response.hf.space/call/chat -H "Content-Type: application/json" -d '{
181
+ "data": ["Are you sentient?"],
182
+ "session_hash": "newsequence5678"
183
+ }'
184
+ ```
185
+
186
+
187
+
188
+ ## Step 2: GET the result
189
+
190
+ Once you have received the `EVENT_ID` corresponding to your prediction, you can stream the results. Gradio stores these results in a least-recently-used cache in the Gradio app. By default, the cache can store 2,000 results (across all users and endpoints of the app).
191
+
192
+ To stream the results for your prediction, make a `GET` request with the following syntax:
193
+
194
+ ```bash
195
+ $ curl -N $URL/call/$API_NAME/$EVENT_ID
196
+ ```
197
+
198
+
199
+ Tip: If you are fetching results from a private Space, include a header with your HF token like this: `-H "Authorization: Bearer $HF_TOKEN"` in the `GET` request.
200
+
201
+ This should produce a stream of responses in this format:
202
+
203
+ ```bash
204
+ event: ...
205
+ data: ...
206
+ event: ...
207
+ data: ...
208
+ ...
209
+ ```
210
+
211
+ Here: `event` can be one of the following:
212
+ * `generating`: indicating an intermediate result
213
+ * `complete`: indicating that the prediction is complete; the accompanying `data` contains the final result
214
+ * `error`: indicating that the prediction was not completed successfully
215
+ * `heartbeat`: sent every 15 seconds to keep the request alive
216
+
217
+ The `data` is in the same format as the input payload: a valid JSON list containing the output result, one element for each output component.
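If you only care about the final result, you can filter the stream in the shell. The `get_final_data` helper below is a sketch (not part of Gradio): it prints the `data` line that follows the `complete` event.

```shell
# Hypothetical helper: print only the data payload of the "complete" event.
get_final_data() {
  awk '/^event: complete/ {found=1; next} found && /^data: / {sub(/^data: /, ""); print; exit}'
}

# Sketch of usage (not run here):
# curl -N "$URL/call/$API_NAME/$EVENT_ID" | get_final_data
```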
218
+
219
+ Here are some examples of what results you should expect if a request is completed successfully:
220
+
221
+ **Basic Example**
222
+
223
+ Revisiting the example at the beginning of the page, we would expect the result to look like this:
224
+
225
+ ```bash
226
+ event: complete
227
+ data: ["Bonjour, mon ami."]
228
+ ```
229
+
230
+ **Multiple Outputs**
231
+
232
+ If your endpoint returns multiple values, they will appear as elements of the `data` list:
233
+
234
+ ```bash
235
+ event: complete
236
+ data: ["Good morning Hello. It is 5 degrees today", -15.0]
237
+ ```
238
+
239
+ **Streaming Example**
240
+
241
+ If your Gradio app [streams a sequence of values](/guides/streaming-outputs), then they will be streamed directly to your terminal, like this:
242
+
243
+ ```bash
244
+ event: generating
245
+ data: ["Hello, w!"]
246
+ event: generating
247
+ data: ["Hello, wo!"]
248
+ event: generating
249
+ data: ["Hello, wor!"]
250
+ event: generating
251
+ data: ["Hello, worl!"]
252
+ event: generating
253
+ data: ["Hello, world!"]
254
+ event: complete
255
+ data: ["Hello, world!"]
256
+ ```
257
+
258
+ **File Example**
259
+
260
+ If your Gradio app returns a file, the file will be represented as a dictionary in this format (including potentially some additional keys):
261
+
262
+ ```python
263
+ {
264
+ "orig_name": "example.jpg",
265
+ "path": "/path/in/server.jpg",
266
+ "url": "https://example.com/example.jpg",
267
+ "meta": {"_type": "gradio.FileData"}
268
+ }
269
+ ```
270
+
271
+ In your terminal, it may appear like this:
272
+
273
+ ```bash
274
+ event: complete
275
+ data: [{"path": "/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "url": "https://gradio-image-mod.hf.space/c/file=/tmp/gradio/359933dc8d6cfe1b022f35e2c639e6e42c97a003/image.webp", "size": null, "orig_name": "image.webp", "mime_type": null, "is_stream": false, "meta": {"_type": "gradio.FileData"}}]
276
+ ```
277
+
278
+ ## Authentication
279
+
280
+ What if your Gradio application has [authentication enabled](/guides/sharing-your-app#authentication)? In that case, you'll need to make an additional `POST` request with cURL to authenticate yourself before you make any queries. Here are the complete steps:
281
+
282
+ First, login with a `POST` request supplying a valid username and password:
283
+
284
+ ```bash
285
+ curl -X POST $URL/login \
286
+ -d "username=$USERNAME&password=$PASSWORD" \
287
+ -c cookies.txt
288
+ ```
289
+
290
+ If the credentials are correct, you'll get `{"success":true}` in response and the cookies will be saved in `cookies.txt`.
291
+
292
+ Next, you'll need to include these cookies when you make the original `POST` request, like this:
293
+
294
+ ```bash
295
+ $ curl -X POST $URL/call/$API_NAME -b cookies.txt -H "Content-Type: application/json" -d '{
296
+ "data": $PAYLOAD
297
+ }'
298
+ ```
299
+
300
+ Finally, you'll need to `GET` the results, again supplying the cookies from the file:
301
+
302
+ ```bash
303
+ curl -N $URL/call/$API_NAME/$EVENT_ID -b cookies.txt
304
+ ```
8 Gradio Clients and Lite/04_gradio-and-llm-agents.md ADDED
@@ -0,0 +1,140 @@
1
+
2
+ # Gradio & LLM Agents 🤝
3
+
4
+ Large Language Models (LLMs) are very impressive, but they can be made even more powerful if we give them the skills to accomplish specialized tasks.
5
+
6
+ The [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library can turn any [Gradio](https://github.com/gradio-app/gradio) application into a [tool](https://python.langchain.com/en/latest/modules/agents/tools.html) that an [agent](https://docs.langchain.com/docs/components/agents/agent) can use to complete its task. For example, an LLM could use a Gradio tool to transcribe a voice recording it finds online and then summarize it for you. Or it could use a different Gradio tool to apply OCR to a document on your Google Drive and then answer questions about it.
7
+
8
+ This guide will show how you can use `gradio_tools` to grant your LLM Agent access to cutting-edge Gradio applications hosted across the web. Although `gradio_tools` is compatible with more than one agent framework, we will focus on [Langchain Agents](https://docs.langchain.com/docs/components/agents/) in this guide.
9
+
10
+ ## Some background
11
+
12
+ ### What are agents?
13
+
14
+ A [LangChain agent](https://docs.langchain.com/docs/components/agents/agent) is a Large Language Model (LLM) that takes user input and reports an output based on using one of many tools at its disposal.
15
+
16
+ ### What is Gradio?
17
+
18
+ [Gradio](https://github.com/gradio-app/gradio) is the de facto standard framework for building machine learning web applications and sharing them with the world - all with just Python! 🐍
19
+
20
+ ## gradio_tools - An end-to-end example
21
+
22
+ To get started with `gradio_tools`, all you need to do is import and initialize your tools and pass them to the langchain agent!
23
+
24
+ In the following example, we import the `StableDiffusionPromptGeneratorTool` to create a good prompt for stable diffusion, the
25
+ `StableDiffusionTool` to create an image with our improved prompt, the `ImageCaptioningTool` to caption the generated image, and
26
+ the `TextToVideoTool` to create a video from a prompt.
27
+
28
+ We then tell our agent to create an image of a dog riding a skateboard, but to please improve our prompt ahead of time. We also ask
29
+ it to caption the generated image and create a video for it. The agent can decide which tool to use without us explicitly telling it.
30
+
31
+ ```python
32
+ import os
33
+
34
+ if not os.getenv("OPENAI_API_KEY"):
35
+ raise ValueError("OPENAI_API_KEY must be set")
36
+
37
+ from langchain.agents import initialize_agent
38
+ from langchain.llms import OpenAI
39
+ from gradio_tools import (StableDiffusionTool, ImageCaptioningTool, StableDiffusionPromptGeneratorTool,
40
+ TextToVideoTool)
41
+
42
+ from langchain.memory import ConversationBufferMemory
43
+
44
+ llm = OpenAI(temperature=0)
45
+ memory = ConversationBufferMemory(memory_key="chat_history")
46
+ tools = [StableDiffusionTool().langchain, ImageCaptioningTool().langchain,
47
+ StableDiffusionPromptGeneratorTool().langchain, TextToVideoTool().langchain]
48
+
49
+
50
+ agent = initialize_agent(tools, llm, memory=memory, agent="conversational-react-description", verbose=True)
51
+ output = agent.run(input=("Please create a photo of a dog riding a skateboard "
52
+ "but improve my prompt prior to using an image generator."
53
+ "Please caption the generated image and create a video for it using the improved prompt."))
54
+ ```
55
+
56
+ You'll note that we are using some pre-built tools that come with `gradio_tools`. Please see this [doc](https://github.com/freddyaboulton/gradio-tools#gradio-tools-gradio--llm-agents) for a complete list of the tools that come with `gradio_tools`.
57
+ If you would like to use a tool that's not currently in `gradio_tools`, it is very easy to add your own. That's what the next section will cover.
58
+
59
+ ## gradio_tools - creating your own tool
60
+
61
+ The core abstraction is the `GradioTool`, which lets you define a new tool for your LLM as long as you implement a standard interface:
62
+
63
+ ```python
64
+ class GradioTool(BaseTool):
65
+
66
+ def __init__(self, name: str, description: str, src: str) -> None:
+ ...
67
+
68
+ @abstractmethod
69
+ def create_job(self, query: str) -> Job:
70
+ pass
71
+
72
+ @abstractmethod
73
+ def postprocess(self, output: Tuple[Any] | Any) -> str:
74
+ pass
75
+ ```
76
+
77
+ The requirements are:
78
+
79
+ 1. The name for your tool
80
+ 2. The description for your tool. This is crucial! Agents decide which tool to use based on their descriptions. Be precise and be sure to include examples of what the input and the output of the tool should look like.
81
+ 3. The URL or Space id, e.g. `freddyaboulton/calculator`, of the Gradio application. Based on this value, `gradio_tools` will create a [gradio client](https://github.com/gradio-app/gradio/blob/main/client/python/README.md) instance to query the upstream application via API. Be sure to click the link and learn more about the gradio client library if you are not familiar with it.
82
+ 4. `create_job` - Given a string, this method should parse that string and return a job from the client. Most of the time, this is as simple as passing the string to the `submit` function of the client. More info on creating jobs [here](https://github.com/gradio-app/gradio/blob/main/client/python/README.md#making-a-prediction)
83
+ 5. `postprocess` - Given the result of the job, convert it to a string the LLM can display to the user.
84
+ 6. _Optional_ - Some libraries, e.g. [MiniChain](https://github.com/srush/MiniChain/tree/main), may need some info about the underlying gradio input and output types used by the tool. By default, this will return `gr.Textbox()`, but
85
+ if you'd like to provide more accurate info, implement the `_block_input(self, gr)` and `_block_output(self, gr)` methods of the tool. The `gr` variable is the gradio module (the result of `import gradio as gr`). It will be
86
+ automatically imported by the `GradioTool` parent class and passed to the `_block_input` and `_block_output` methods.
87
+
88
+ And that's it!
89
+
90
+ Once you have created your tool, open a pull request to the `gradio_tools` repo! We welcome all contributions.
91
+
92
+ ## Example tool - Stable Diffusion
93
+
94
+ Here is the code for the StableDiffusion tool as an example:
95
+
96
+ ```python
97
+ from gradio_tools import GradioTool
98
+ import os
99
+
100
+ class StableDiffusionTool(GradioTool):
101
+ """Tool for calling stable diffusion from llm"""
102
+
103
+ def __init__(
104
+ self,
105
+ name="StableDiffusion",
106
+ description=(
107
+ "An image generator. Use this to generate images based on "
108
+ "text input. Input should be a description of what the image should "
109
+ "look like. The output will be a path to an image file."
110
+ ),
111
+ src="gradio-client-demos/stable-diffusion",
112
+ hf_token=None,
113
+ ) -> None:
114
+ super().__init__(name, description, src, hf_token)
115
+
116
+ def create_job(self, query: str) -> Job:
117
+ return self.client.submit(query, "", 9, fn_index=1)
118
+
119
+ def postprocess(self, output: str) -> str:
120
+ return [os.path.join(output, i) for i in os.listdir(output) if not i.endswith("json")][0]
121
+
122
+ def _block_input(self, gr) -> "gr.components.Component":
123
+ return gr.Textbox()
124
+
125
+ def _block_output(self, gr) -> "gr.components.Component":
126
+ return gr.Image()
127
+ ```
128
+
129
+ Some notes on this implementation:
130
+
131
+ 1. All instances of `GradioTool` have an attribute called `client` that is a pointer to the underlying [gradio client](https://github.com/gradio-app/gradio/tree/main/client/python#gradio_client-use-a-gradio-app-as-an-api----in-3-lines-of-python). That is what you should use
132
+ in the `create_job` method.
133
+ 2. `create_job` just passes the query string to the `submit` function of the client with some other parameters hardcoded, i.e. the negative prompt string and the guidance scale. We could modify our tool to also accept these values from the input string in a subsequent version.
134
+ 3. The `postprocess` method simply returns the first image from the gallery of images created by the stable diffusion space. We use the `os` module to get the full path of the image.
135
+
136
+ ## Conclusion
137
+
138
+ You now know how to extend the abilities of your LLM with the thousands of Gradio Spaces running in the wild!
139
+ Again, we welcome any contributions to the [gradio_tools](https://github.com/freddyaboulton/gradio-tools) library.
140
+ We're excited to see the tools you all build!
8 Gradio Clients and Lite/05_gradio-lite.md ADDED
@@ -0,0 +1,236 @@
1
+
2
+ # Gradio-Lite: Serverless Gradio Running Entirely in Your Browser
3
+
4
+ Tags: SERVERLESS, BROWSER, PYODIDE
5
+
6
+ Gradio is a popular Python library for creating interactive machine learning apps. Traditionally, Gradio applications have relied on server-side infrastructure to run, which can be a hurdle for developers who need to host their applications.
7
+
8
+ Enter Gradio-lite (`@gradio/lite`): a library that leverages [Pyodide](https://pyodide.org/en/stable/) to bring Gradio directly to your browser. In this blog post, we'll explore what `@gradio/lite` is, go over example code, and discuss the benefits it offers for running Gradio applications.
9
+
10
+ ## What is `@gradio/lite`?
11
+
12
+ `@gradio/lite` is a JavaScript library that enables you to run Gradio applications directly within your web browser. It achieves this by utilizing Pyodide, a Python runtime for WebAssembly, which allows Python code to be executed in the browser environment. With `@gradio/lite`, you can **write regular Python code for your Gradio applications**, and they will **run seamlessly in the browser** without the need for server-side infrastructure.
13
+
14
+ ## Getting Started
15
+
16
+ Let's build a "Hello World" Gradio app in `@gradio/lite`.
17
+
18
+
19
+ ### 1. Import JS and CSS
20
+
21
+ Start by creating a new HTML file, if you don't have one already. Import the JavaScript and CSS corresponding to the `@gradio/lite` package by adding the following code:
22
+
23
+
24
+ ```html
25
+ <html>
26
+ <head>
27
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
28
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
29
+ </head>
30
+ </html>
31
+ ```
32
+
33
+ Note that you should generally use the latest version of `@gradio/lite` that is available. You can see the [versions available here](https://www.jsdelivr.com/package/npm/@gradio/lite?tab=files).
34
+
35
+ ### 2. Create the `<gradio-lite>` tags
36
+
37
+ Somewhere in the body of your HTML page (wherever you'd like the Gradio app to be rendered), create opening and closing `<gradio-lite>` tags.
38
+
39
+ ```html
40
+ <html>
41
+ <head>
42
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
43
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
44
+ </head>
45
+ <body>
46
+ <gradio-lite>
47
+ </gradio-lite>
48
+ </body>
49
+ </html>
50
+ ```
51
+
52
+ Note: you can add the `theme` attribute to the `<gradio-lite>` tag to force the theme to be dark or light (by default, it respects the system theme). E.g.
53
+
54
+ ```html
55
+ <gradio-lite theme="dark">
56
+ ...
57
+ </gradio-lite>
58
+ ```
59
+
60
+ ### 3. Write your Gradio app inside of the tags
61
+
62
+ Now, write your Gradio app as you would normally, in Python! Keep in mind that since this is Python, whitespace and indentations matter.
63
+
64
+ ```html
65
+ <html>
66
+ <head>
67
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
68
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
69
+ </head>
70
+ <body>
71
+ <gradio-lite>
72
+ import gradio as gr
73
+
74
+ def greet(name):
75
+ return "Hello, " + name + "!"
76
+
77
+ gr.Interface(greet, "textbox", "textbox").launch()
78
+ </gradio-lite>
79
+ </body>
80
+ </html>
81
+ ```
82
+
83
+ And that's it! You should now be able to open your HTML page in the browser and see the Gradio app rendered! Note that it may take a little while for the Gradio app to load initially since Pyodide can take a while to install in your browser.
84
+
85
+ **Note on debugging**: to see any errors in your Gradio-lite application, open the inspector in your web browser. All errors (including Python errors) will be printed there.
86
+
87
+ ## More Examples: Adding Additional Files and Requirements
88
+
89
+ What if you want to create a Gradio app that spans multiple files? Or that has custom Python requirements? Both are possible with `@gradio/lite`!
90
+
91
+ ### Multiple Files
92
+
93
+ Adding multiple files within a `@gradio/lite` app is very straightforward: use the `<gradio-file>` tag. You can have as many `<gradio-file>` tags as you want, but each one needs to have a `name` attribute and the entry point to your Gradio app should have the `entrypoint` attribute.
94
+
95
+ Here's an example:
96
+
97
+ ```html
98
+ <gradio-lite>
99
+
100
+ <gradio-file name="app.py" entrypoint>
101
+ import gradio as gr
102
+ from utils import add
103
+
104
+ demo = gr.Interface(fn=add, inputs=["number", "number"], outputs="number")
105
+
106
+ demo.launch()
107
+ </gradio-file>
108
+
109
+ <gradio-file name="utils.py" >
110
+ def add(a, b):
111
+ return a + b
112
+ </gradio-file>
113
+
114
+ </gradio-lite>
115
+
116
+ ```
117
+
118
+ ### Additional Requirements
119
+
120
+ If your Gradio app has additional requirements, it is usually possible to [install them in the browser using micropip](https://pyodide.org/en/stable/usage/loading-packages.html#loading-packages). We've created a wrapper to make this particularly convenient: simply list your requirements in the same syntax as a `requirements.txt` and enclose them with `<gradio-requirements>` tags.
121
+
122
+ Here, we install `transformers_js_py` to run a text classification model directly in the browser!
123
+
124
+ ```html
125
+ <gradio-lite>
126
+
127
+ <gradio-requirements>
128
+ transformers_js_py
129
+ </gradio-requirements>
130
+
131
+ <gradio-file name="app.py" entrypoint>
132
+ from transformers_js import import_transformers_js
133
+ import gradio as gr
134
+
135
+ transformers = await import_transformers_js()
136
+ pipeline = transformers.pipeline
137
+ pipe = await pipeline('sentiment-analysis')
138
+
139
+ async def classify(text):
140
+ return await pipe(text)
141
+
142
+ demo = gr.Interface(classify, "textbox", "json")
143
+ demo.launch()
144
+ </gradio-file>
145
+
146
+ </gradio-lite>
147
+
148
+ ```
149
+
150
+ **Try it out**: You can see this example running in [this Hugging Face Static Space](https://huggingface.co/spaces/abidlabs/gradio-lite-classify), which lets you host static (serverless) web applications for free. Visit the page and you'll be able to run a machine learning model without internet access!
151
+
152
+ ### SharedWorker mode
153
+
154
+ By default, Gradio-Lite executes Python code in a [Web Worker](https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API) with [Pyodide](https://pyodide.org/) runtime, and each Gradio-Lite app has its own worker.
155
+ It has some benefits such as environment isolation.
156
+
157
+ However, when there are many Gradio-Lite apps in the same page, it may cause performance issues such as high memory usage because each app has its own worker and Pyodide runtime.
158
+ In such cases, you can use the **SharedWorker mode** to share a single Pyodide runtime in a [SharedWorker](https://developer.mozilla.org/en-US/docs/Web/API/SharedWorker) among multiple Gradio-Lite apps. To enable the SharedWorker mode, set the `shared-worker` attribute to the `<gradio-lite>` tag.
159
+
160
+ ```html
161
+ <!-- These two Gradio-Lite apps share a single worker -->
162
+
163
+ <gradio-lite shared-worker>
164
+ import gradio as gr
165
+ # ...
166
+ </gradio-lite>
167
+
168
+ <gradio-lite shared-worker>
169
+ import gradio as gr
170
+ # ...
171
+ </gradio-lite>
172
+ ```
173
+
174
+ When using the SharedWorker mode, you should be aware of the following points:
175
+ * The apps share the same Python environment, which means that they can access the same modules and objects. If, for example, one app makes changes to some modules, the changes will be visible to other apps.
176
+ * The file system is shared among the apps, while each app's files are mounted in each home directory, so each app can access the files of other apps.
177
+
178
+ ### Code and Demo Playground
179
+
180
+ If you'd like to see the code side-by-side with the demo, just pass in the `playground` attribute to the gradio-lite element. This will create an interactive playground that allows you to change the code and update the demo! If you're using playground, you can also set the `layout` attribute to either 'vertical' or 'horizontal', which will determine whether the code editor and preview are side-by-side or on top of each other (by default it's responsive to the width of the page).
181
+
182
+ ```html
183
+ <gradio-lite playground layout="horizontal">
184
+ import gradio as gr
185
+
186
+ gr.Interface(fn=lambda x: x,
187
+ inputs=gr.Textbox(),
188
+ outputs=gr.Textbox()
189
+ ).launch()
190
+ </gradio-lite>
191
+ ```
192
+
193
+ ## Benefits of Using `@gradio/lite`
194
+
195
+ ### 1. Serverless Deployment
196
+ The primary advantage of `@gradio/lite` is that it eliminates the need for server infrastructure. This simplifies deployment, reduces server-related costs, and makes it easier to share your Gradio applications with others.
197
+
198
+ ### 2. Low Latency
199
+ By running in the browser, `@gradio/lite` offers low-latency interactions for users. There's no need for data to travel to and from a server, resulting in faster responses and a smoother user experience.
200
+
201
+ ### 3. Privacy and Security
202
+ Since all processing occurs within the user's browser, `@gradio/lite` enhances privacy and security. User data remains on their device, providing peace of mind regarding data handling.
203
+
204
+ ### Limitations
205
+
206
+ * Currently, the biggest limitation in using `@gradio/lite` is that your Gradio apps will generally take more time (usually 5-15 seconds) to load initially in the browser. This is because the browser needs to load the Pyodide runtime before it can render Python code.
207
+
208
+ * Not every Python package is supported by Pyodide. While `gradio` and many other popular packages (including `numpy`, `scikit-learn`, and `transformers-js`) can be installed in Pyodide, if your app has many dependencies, it's worth checking whether the dependencies are included in Pyodide, or can be [installed with `micropip`](https://micropip.pyodide.org/en/v0.2.2/project/api.html#micropip.install).
209
+
210
+ ## Try it out!
211
+
212
+ You can immediately try out `@gradio/lite` by copying and pasting this code in a local `index.html` file and opening it with your browser:
213
+
214
+ ```html
215
+ <html>
216
+ <head>
217
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
218
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
219
+ </head>
220
+ <body>
221
+ <gradio-lite>
222
+ import gradio as gr
223
+
224
+ def greet(name):
225
+ return "Hello, " + name + "!"
226
+
227
+ gr.Interface(greet, "textbox", "textbox").launch()
228
+ </gradio-lite>
229
+ </body>
230
+ </html>
231
+ ```
232
+
233
+
234
+ We've also created a playground on the Gradio website that allows you to interactively edit code and see the results immediately!
235
+
236
+ Playground: https://www.gradio.app/playground
8 Gradio Clients and Lite/06_gradio-lite-and-transformers-js.md ADDED
@@ -0,0 +1,197 @@
1
+
2
+ # Building Serverless Machine Learning Apps with Gradio-Lite and Transformers.js
3
+
4
+ Tags: SERVERLESS, BROWSER, PYODIDE, TRANSFORMERS
5
+
6
+ Gradio and [Transformers](https://huggingface.co/docs/transformers/index) are a powerful combination for building machine learning apps with a web interface. Both libraries have serverless versions that can run entirely in the browser: [Gradio-Lite](./gradio-lite) and [Transformers.js](https://huggingface.co/docs/transformers.js/index).
7
+ In this document, we will introduce how to create a serverless machine learning application using Gradio-Lite and Transformers.js.
8
+ You will just write Python code within a static HTML file and host it without setting up a server-side Python runtime.
9
+
10
+
11
+ ## Libraries Used
12
+
13
+ ### Gradio-Lite
14
+
15
+ Gradio-Lite is the serverless version of Gradio, allowing you to build serverless web UI applications by embedding Python code within HTML. For a detailed introduction to Gradio-Lite itself, please read [this Guide](./gradio-lite).
16
+
17
+ ### Transformers.js and Transformers.js.py
18
+
19
+ Transformers.js is the JavaScript version of the Transformers library that allows you to run machine learning models entirely in the browser.
20
+ Since Transformers.js is a JavaScript library, it cannot be directly used from the Python code of Gradio-Lite applications. To address this, we use a wrapper library called [Transformers.js.py](https://github.com/whitphx/transformers.js.py).
21
+ The name Transformers.js.py may sound unusual, but it represents the necessary technology stack for using Transformers.js from Python code within a browser environment. The regular Transformers library is not compatible with browser environments.
22
+
23
+ ## Sample Code
24
+
25
+ Here's an example of how to use Gradio-Lite and Transformers.js together.
26
+ Please create an HTML file and paste the following code:
27
+
28
+ ```html
29
+ <html>
30
+ <head>
31
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
32
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
33
+ </head>
34
+ <body>
35
+ <gradio-lite>
36
+ import gradio as gr
37
+ from transformers_js_py import pipeline
38
+
39
+ pipe = await pipeline('sentiment-analysis')
40
+
41
+ demo = gr.Interface.from_pipeline(pipe)
42
+
43
+ demo.launch()
44
+
45
+ <gradio-requirements>
46
+ transformers-js-py
47
+ </gradio-requirements>
48
+ </gradio-lite>
49
+ </body>
50
+ </html>
51
+ ```
52
+
53
+ Here is a running example of the code above (after the app has loaded, you can disconnect your Internet connection and the app will still work, since it's running entirely in your browser):
54
+
55
+ <gradio-lite shared-worker>
56
+ import gradio as gr
57
+ from transformers_js_py import pipeline
58
+ <!-- --->
59
+ pipe = await pipeline('sentiment-analysis')
60
+ <!-- --->
61
+ demo = gr.Interface.from_pipeline(pipe)
62
+ <!-- --->
63
+ demo.launch()
64
+ <gradio-requirements>
65
+ transformers-js-py
66
+ </gradio-requirements>
67
+ </gradio-lite>
68
+
69
+ Now you can open your HTML file in a browser to see the Gradio app running!
70
+
71
+ The Python code inside the `<gradio-lite>` tag is the Gradio application code. For more details on this part, please refer to [this article](./gradio-lite).
72
+ The `<gradio-requirements>` tag is used to specify packages to be installed in addition to Gradio-Lite and its dependencies. In this case, we are using Transformers.js.py (`transformers-js-py`), so it is specified here.
73
+
74
+ Let's break down the code:
75
+
76
+ `pipe = await pipeline('sentiment-analysis')` creates a Transformers.js pipeline.
77
+ In this example, we create a sentiment analysis pipeline.
78
+ For more information on the available pipeline types and usage, please refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).
79
+
80
+ `demo = gr.Interface.from_pipeline(pipe)` creates a Gradio app instance. By passing the Transformers.js.py pipeline to `gr.Interface.from_pipeline()`, we can create an interface that utilizes that pipeline with predefined input and output components.
81
+
82
+ Finally, `demo.launch()` launches the created app.
83
+
84
+ ## Customizing the Model or Pipeline
85
+
86
+ You can modify the line `pipe = await pipeline('sentiment-analysis')` in the sample above to try different models or tasks.
87
+
88
+ For example, if you change it to `pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')`, you can test the same sentiment analysis task but with a different model. The second argument of the `pipeline` function specifies the model name.
89
+ If it's not specified, as in the first example, the default model for the task is used. For more details, refer to the [Transformers.js documentation](https://huggingface.co/docs/transformers.js/index).
90
+
91
+ <gradio-lite shared-worker>
92
+ import gradio as gr
93
+ from transformers_js_py import pipeline
94
+ <!-- --->
95
+ pipe = await pipeline('sentiment-analysis', 'Xenova/bert-base-multilingual-uncased-sentiment')
96
+ <!-- --->
97
+ demo = gr.Interface.from_pipeline(pipe)
98
+ <!-- --->
99
+ demo.launch()
100
+ <gradio-requirements>
101
+ transformers-js-py
102
+ </gradio-requirements>
103
+ </gradio-lite>
104
+
105
+ As another example, changing it to `pipe = await pipeline('image-classification')` creates a pipeline for image classification instead of sentiment analysis.
106
+ In this case, the interface created with `demo = gr.Interface.from_pipeline(pipe)` will have a UI for uploading an image and displaying the classification result. The `gr.Interface.from_pipeline` function automatically creates an appropriate UI based on the type of pipeline.
107
+
108
+ <gradio-lite shared-worker>
109
+ import gradio as gr
110
+ from transformers_js_py import pipeline
111
+ <!-- --->
112
+ pipe = await pipeline('image-classification')
113
+ <!-- --->
114
+ demo = gr.Interface.from_pipeline(pipe)
115
+ <!-- --->
116
+ demo.launch()
117
+ <gradio-requirements>
118
+ transformers-js-py
119
+ </gradio-requirements>
120
+ </gradio-lite>
121
+
122
+ <br>
123
+
124
+ **Note**: If you use an audio pipeline, such as `automatic-speech-recognition`, you will need to put `transformers-js-py[audio]` in your `<gradio-requirements>` as there are additional requirements needed to process audio files.
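For instance, an app using a speech-recognition pipeline would declare its requirements like this (a sketch; only the `[audio]` extra differs from the samples above):

```html
<gradio-requirements>
transformers-js-py[audio]
</gradio-requirements>
```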
125
+
126
+ ## Customizing the UI
127
+
128
+ Instead of using `gr.Interface.from_pipeline()`, you can define the user interface using Gradio's regular API.
129
+ Here's an example where the Python code inside the `<gradio-lite>` tag has been modified from the previous sample:
130
+
131
+ ```html
132
+ <html>
133
+ <head>
134
+ <script type="module" crossorigin src="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.js"></script>
135
+ <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@gradio/lite/dist/lite.css" />
136
+ </head>
137
+ <body>
138
+ <gradio-lite>
139
+ import gradio as gr
140
+ from transformers_js_py import pipeline
141
+
142
+ pipe = await pipeline('sentiment-analysis')
143
+
144
+ async def fn(text):
145
+     result = await pipe(text)
+     return result
147
+
148
+ demo = gr.Interface(
149
+     fn=fn,
+     inputs=gr.Textbox(),
+     outputs=gr.JSON(),
152
+ )
153
+
154
+ demo.launch()
155
+
156
+ <gradio-requirements>
157
+ transformers-js-py
158
+ </gradio-requirements>
159
+ </gradio-lite>
160
+ </body>
161
+ </html>
162
+ ```
163
+
164
+ In this example, we modified the code to construct the Gradio user interface manually so that we could output the result as JSON.
165
+
166
+ <gradio-lite shared-worker>
167
+ import gradio as gr
168
+ from transformers_js_py import pipeline
169
+ <!-- --->
170
+ pipe = await pipeline('sentiment-analysis')
171
+ <!-- --->
172
+ async def fn(text):
173
+     result = await pipe(text)
+     return result
175
+ <!-- --->
176
+ demo = gr.Interface(
177
+     fn=fn,
+     inputs=gr.Textbox(),
+     outputs=gr.JSON(),
180
+ )
181
+ <!-- --->
182
+ demo.launch()
183
+ <gradio-requirements>
184
+ transformers-js-py
185
+ </gradio-requirements>
186
+ </gradio-lite>
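If you prefer a friendlier display than raw JSON, the function could post-process the result before returning it. A minimal sketch, assuming the output follows the Transformers.js sentiment-analysis format (a list like `[{"label": ..., "score": ...}]`; the concrete values below are made up):

```python
# Post-process a sentiment-analysis result into a short string.
# The input shape mirrors the Transformers.js output; values are illustrative.
def format_result(result):
    top = result[0]  # one dict per input sequence
    return f"{top['label']} ({top['score']:.1%})"

print(format_result([{"label": "POSITIVE", "score": 0.9987}]))  # POSITIVE (99.9%)
```

You could then return `format_result(result)` from `fn` and swap `gr.JSON()` for `gr.Textbox()`.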
187
+
188
+ ## Conclusion
189
+
190
+ By combining Gradio-Lite and Transformers.js (and Transformers.js.py), you can create serverless machine learning applications that run entirely in the browser.
191
+
192
+ Gradio-Lite provides a convenient method to create an interface for a given Transformers.js pipeline, `gr.Interface.from_pipeline()`.
193
+ This method automatically constructs the interface based on the pipeline's task type.
194
+
195
+ Alternatively, you can define the interface manually using Gradio's regular API, as shown in the second example.
196
+
197
+ By using these libraries, you can build and deploy machine learning applications without the need for server-side Python setup or external dependencies.
8 Gradio Clients and Lite/07_fastapi-app-with-the-gradio-client.md ADDED
@@ -0,0 +1,198 @@
1
+
2
+ # Building a Web App with the Gradio Python Client
3
+
4
+ Tags: CLIENT, API, WEB APP
5
+
6
+ In this blog post, we will demonstrate how to use the `gradio_client` [Python library](getting-started-with-the-python-client/), which enables developers to make requests to a Gradio app programmatically, by creating an end-to-end example web app using FastAPI. The web app we will be building is called "Acapellify," and it will allow users to upload video files as input and return a version of that video without instrumental music. It will also display a gallery of generated videos.
7
+
8
+ **Prerequisites**
9
+
10
+ Before we begin, make sure you are running Python 3.9 or later, and have the following libraries installed:
11
+
12
+ - `gradio_client`
13
+ - `fastapi`
14
+ - `uvicorn`
15
+
16
+ You can install these libraries from `pip`:
17
+
18
+ ```bash
19
+ $ pip install gradio_client fastapi uvicorn
20
+ ```
21
+
22
+ You will also need to have ffmpeg installed. You can check to see if you already have ffmpeg by running in your terminal:
23
+
24
+ ```bash
25
+ $ ffmpeg -version
26
+ ```
27
+
28
+ Otherwise, install ffmpeg [by following these instructions](https://www.hostinger.com/tutorials/how-to-install-ffmpeg).
29
+
30
+ ## Step 1: Write the Video Processing Function
31
+
32
+ Let's start with what seems like the most complex bit -- using machine learning to remove the music from a video.
33
+
34
+ Luckily for us, there's an existing Space we can use to make this process easier: [https://huggingface.co/spaces/abidlabs/music-separation](https://huggingface.co/spaces/abidlabs/music-separation). This Space takes an audio file and produces two separate audio files: one with the instrumental music and one with all other sounds in the original clip. Perfect to use with our client!
35
+
36
+ Open a new Python file, say `main.py`, and start by importing the `Client` class from `gradio_client` and connecting it to this Space:
37
+
38
+ ```py
39
+ from gradio_client import Client
40
+
41
+ client = Client("abidlabs/music-separation")
42
+
43
+ def acapellify(audio_path):
44
+     result = client.predict(audio_path, api_name="/predict")
+     return result[0]
46
+ ```
47
+
48
+ That's all the code that's needed -- notice that the API endpoint returns two audio files (one without the music, and one with just the music) in a list, so we just return the first element of the list.
49
+
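To make the indexing concrete, here is a stand-in for the return value (the paths are hypothetical; the real values are temporary file paths produced by the Space):

```python
# Stand-in for client.predict's return value: two audio file paths,
# with the music-free track first (as used by acapellify above).
result = ["audio_no_music.wav", "audio_music_only.wav"]

def pick_vocals(result):
    return result[0]  # keep only the track without instrumental music

print(pick_vocals(result))
```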
50
+ ---
51
+
52
+ **Note**: since this is a public Space, other users may be using it as well, which can result in a slow experience. You can duplicate this Space with your own [Hugging Face token](https://huggingface.co/settings/tokens) and create a private Space that only you will have access to, bypassing the queue. To do that, simply replace the first two lines above with:
53
+
54
+ ```py
55
+ from gradio_client import Client
56
+
57
+ client = Client.duplicate("abidlabs/music-separation", hf_token=YOUR_HF_TOKEN)
58
+ ```
59
+
60
+ Everything else remains the same!
61
+
62
+ ---
63
+
64
+ Now, of course, we are working with video files, so we first need to extract the audio from them. For this, we will use `ffmpeg`, which does a lot of the heavy lifting when working with audio and video files. The most common way to use `ffmpeg` is through the command line, which we'll invoke via Python's `subprocess` module.
65
+
66
+ Our video processing workflow will consist of three steps:
67
+
68
+ 1. First, we start by taking in a video filepath and extracting the audio using `ffmpeg`.
69
+ 2. Then, we pass in the audio file through the `acapellify()` function above.
70
+ 3. Finally, we combine the new audio with the original video to produce a final acapellified video.
71
+
72
+ Here's the complete code in Python, which you can add to your `main.py` file:
73
+
74
+ ```python
75
+ import os
+ import subprocess
76
+
77
+ def process_video(video_path):
78
+     # 1. Extract the audio track from the input video.
+     old_audio = os.path.basename(video_path).split(".")[0] + ".m4a"
+     subprocess.run(['ffmpeg', '-y', '-i', video_path, '-vn', '-acodec', 'copy', old_audio])
+
+     # 2. Strip the instrumental music via the music-separation Space.
+     new_audio = acapellify(old_audio)
+
+     # 3. Recombine the processed audio with the original video stream.
+     new_video = f"acap_{video_path}"
+     subprocess.call(['ffmpeg', '-y', '-i', video_path, '-i', new_audio, '-map', '0:v', '-map', '1:a', '-c:v', 'copy', '-c:a', 'aac', '-strict', 'experimental', f"static/{new_video}"])
+     return new_video
86
+ ```
87
+
88
+ You can read up on [ffmpeg documentation](https://ffmpeg.org/ffmpeg.html) if you'd like to understand all of the command line parameters, as they are beyond the scope of this tutorial.
89
+
90
+ ## Step 2: Create a FastAPI app (Backend Routes)
91
+
92
+ Next up, we'll create a simple FastAPI app. If you haven't used FastAPI before, check out [the great FastAPI docs](https://fastapi.tiangolo.com/). Otherwise, this basic template, which we add to `main.py`, will look pretty familiar:
93
+
94
+ ```python
95
+ import os
96
+ from fastapi import FastAPI, File, UploadFile, Request
97
+ from fastapi.responses import HTMLResponse, RedirectResponse
98
+ from fastapi.staticfiles import StaticFiles
99
+ from fastapi.templating import Jinja2Templates
100
+
101
+ app = FastAPI()
102
+ os.makedirs("static", exist_ok=True)
103
+ app.mount("/static", StaticFiles(directory="static"), name="static")
104
+ templates = Jinja2Templates(directory="templates")
105
+
106
+ videos = []
107
+
108
+ @app.get("/", response_class=HTMLResponse)
+ async def home(request: Request):
+     return templates.TemplateResponse(
+         "home.html", {"request": request, "videos": videos})
+
+ @app.post("/uploadvideo/")
+ async def upload_video(video: UploadFile = File(...)):
+     video_path = video.filename
+     with open(video_path, "wb+") as fp:
+         fp.write(video.file.read())
+
+     new_video = process_video(video.filename)
+     videos.append(new_video)
+     return RedirectResponse(url='/', status_code=303)
122
+ ```
123
+
124
+ In this example, the FastAPI app has two routes: `/` and `/uploadvideo/`.
125
+
126
+ The `/` route returns an HTML template that displays a gallery of all uploaded videos.
127
+
128
+ The `/uploadvideo/` route accepts a `POST` request with an `UploadFile` object, which represents the uploaded video file. The video file is "acapellified" via the `process_video()` method, and the output video is stored in a list which stores all of the uploaded videos in memory.
129
+
130
+ Note that this is a very basic example; if this were a production app, you would need to add more logic to handle file storage, user authentication, and security considerations.
131
+
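As one small illustration (a hypothetical helper, not part of the tutorial code), giving each upload a unique filename avoids collisions when two users upload files with the same name:

```python
import os
import uuid

def unique_filename(upload_name, dest_dir="uploads"):
    """Return a collision-free destination path, keeping the file extension."""
    os.makedirs(dest_dir, exist_ok=True)
    ext = os.path.splitext(upload_name)[1]
    return os.path.join(dest_dir, uuid.uuid4().hex + ext)

print(unique_filename("video.mp4"))
```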
132
+ ## Step 3: Create a FastAPI app (Frontend Template)
133
+
134
+ Finally, we create the frontend of our web application. First, we create a folder called `templates` in the same directory as `main.py`. We then create a template, `home.html` inside the `templates` folder. Here is the resulting file structure:
135
+
136
+ ```text
137
+ ├── main.py
138
+ ├── templates
139
+ │ └── home.html
140
+ ```
141
+
142
+ Write the following as the contents of `home.html`:
143
+
144
+ ```html
145
+ <!DOCTYPE html>
+ <html>
+ <head>
+     <title>Video Gallery</title>
+     <style>
+         body { font-family: sans-serif; margin: 0; padding: 0; background-color: #f5f5f5; }
+         h1 { text-align: center; margin-top: 30px; margin-bottom: 20px; }
+         .gallery { display: flex; flex-wrap: wrap; justify-content: center; gap: 20px; padding: 20px; }
+         .video { border: 2px solid #ccc; box-shadow: 0px 0px 10px rgba(0, 0, 0, 0.2); border-radius: 5px; overflow: hidden; width: 300px; margin-bottom: 20px; }
+         .video video { width: 100%; height: 200px; }
+         .video p { text-align: center; margin: 10px 0; }
+         form { margin-top: 20px; text-align: center; }
+         input[type="file"] { display: none; }
+         .upload-btn { display: inline-block; background-color: #3498db; color: #fff; padding: 10px 20px; font-size: 16px; border: none; border-radius: 5px; cursor: pointer; }
+         .upload-btn:hover { background-color: #2980b9; }
+         .file-name { margin-left: 10px; }
+     </style>
+ </head>
+ <body>
+     <h1>Video Gallery</h1>
+     {% if videos %}
+     <div class="gallery">
+         {% for video in videos %}
+         <div class="video">
+             <video controls>
+                 <source src="{{ url_for('static', path=video) }}" type="video/mp4">
+                 Your browser does not support the video tag.
+             </video>
+             <p>{{ video }}</p>
+         </div>
+         {% endfor %}
+     </div>
+     {% else %}
+     <p>No videos uploaded yet.</p>
+     {% endif %}
+     <form action="/uploadvideo/" method="post" enctype="multipart/form-data">
+         <label for="video-upload" class="upload-btn">Choose video file</label>
+         <input type="file" name="video" id="video-upload">
+         <span class="file-name"></span>
+         <button type="submit" class="upload-btn">Upload</button>
+     </form>
+     <script>
+         // Display selected file name in the form
+         const fileUpload = document.getElementById("video-upload");
+         const fileName = document.querySelector(".file-name");
+         fileUpload.addEventListener("change", (e) => {
+             fileName.textContent = e.target.files[0].name;
+         });
+     </script>
+ </body>
+ </html>
172
+ ```
173
+
174
+ ## Step 4: Run your FastAPI app
175
+
176
+ Finally, we are ready to run our FastAPI app, powered by the Gradio Python Client!
177
+
178
+ Open up a terminal and navigate to the directory containing `main.py`. Then run the following command in the terminal:
179
+
180
+ ```bash
181
+ $ uvicorn main:app
182
+ ```
183
+
184
+ You should see an output that looks like this:
185
+
186
+ ```text
187
+ Loaded as API: https://abidlabs-music-separation.hf.space ✔
188
+ INFO: Started server process [1360]
189
+ INFO: Waiting for application startup.
190
+ INFO: Application startup complete.
191
+ INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
192
+ ```
193
+
194
+ And that's it! Start uploading videos and you'll get some "acapellified" videos in response (might take seconds to minutes to process depending on the length of your videos). Here's how the UI looks after uploading two videos:
195
+
196
+ ![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/gradio-guides/acapellify.png)
197
+
198
+ If you'd like to learn more about how to use the Gradio Python Client in your projects, [read the dedicated Guide](/guides/getting-started-with-the-python-client/).
9 Other Tutorials/01_using-hugging-face-integrations.md ADDED
@@ -0,0 +1,135 @@
1
+
2
+ # Using Hugging Face Integrations
3
+
4
+ Related spaces: https://huggingface.co/spaces/gradio/en2es
5
+ Tags: HUB, SPACES, EMBED
6
+
7
+ Contributed by <a href="https://huggingface.co/osanseviero">Omar Sanseviero</a> 🦙
8
+
9
+ ## Introduction
10
+
11
+ The Hugging Face Hub is a central platform that has hundreds of thousands of [models](https://huggingface.co/models), [datasets](https://huggingface.co/datasets) and [demos](https://huggingface.co/spaces) (also known as Spaces).
12
+
13
+ Gradio has multiple features that make it extremely easy to leverage existing models and Spaces on the Hub. This guide walks through these features.
14
+
15
+
16
+ ## Demos with the Hugging Face Inference Endpoints
17
+
18
+ Hugging Face has a service called [Serverless Inference Endpoints](https://huggingface.co/docs/api-inference/index), which allows you to send HTTP requests to models on the Hub. The API includes a generous free tier, and you can switch to [dedicated Inference Endpoints](https://huggingface.co/inference-endpoints/dedicated) when you want to use it in production. Gradio integrates directly with Serverless Inference Endpoints so that you can create a demo simply by specifying a model's name (e.g. `Helsinki-NLP/opus-mt-en-es`), like this:
19
+
20
+ ```python
21
+ import gradio as gr
22
+
23
+ demo = gr.load("Helsinki-NLP/opus-mt-en-es", src="models")
24
+
25
+ demo.launch()
26
+ ```
27
+
28
+ For any Hugging Face model supported in Inference Endpoints, Gradio automatically infers the expected input and output and makes the underlying server calls, so you don't have to worry about defining the prediction function.
29
+
30
+ Notice that we simply specify the model name and state that the `src` should be `models` (Hugging Face's Model Hub). There is no need to install any dependencies (except `gradio`) since you are not loading the model on your computer.
31
+
32
+ You might notice that the first inference takes a little bit longer. This happens because the Inference Endpoint is loading the model on the server. You get some benefits afterward:
33
+
34
+ - The inference will be much faster.
35
+ - The server caches your requests.
36
+ - You get built-in automatic scaling.
37
+
38
+ ## Hosting your Gradio demos on Spaces
39
+
40
+ [Hugging Face Spaces](https://hf.co/spaces) allows anyone to host their Gradio demos freely, and uploading your Gradio demos takes only a couple of minutes. You can head to [hf.co/new-space](https://huggingface.co/new-space), select the Gradio SDK, create an `app.py` file, and voila! You have a demo you can share with anyone else. To learn more, read [this guide on how to host on Hugging Face Spaces using the website](https://huggingface.co/blog/gradio-spaces).
41
+
42
+ Alternatively, you can create a Space programmatically, making use of the [huggingface_hub client library](https://huggingface.co/docs/huggingface_hub/index). Here's an example:
43
+
44
+ ```python
45
+ from huggingface_hub import (
+     create_repo,
+     get_full_repo_name,
+     upload_file,
+ )
+ create_repo(name=target_space_name, token=hf_token, repo_type="space", space_sdk="gradio")
+ repo_name = get_full_repo_name(model_id=target_space_name, token=hf_token)
+ file_url = upload_file(
+     path_or_fileobj="file.txt",
+     path_in_repo="app.py",
+     repo_id=repo_name,
+     repo_type="space",
+     token=hf_token,
+ )
59
+ ```
60
+
61
+ Here, `create_repo` creates a Gradio Space repo with the target name under a specific account, using that account's Write Token. `repo_name` gets the full repo name of the related repo. Finally, `upload_file` uploads a file inside the repo with the name `app.py`.
62
+
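The snippet assumes `target_space_name` and `hf_token` are already defined. Hypothetical placeholders for illustration (use your own Space name and a token with write access):

```python
# Placeholder values -- substitute your own.
target_space_name = "my-gradio-space"  # name of the Space to create
hf_token = "hf_xxxxx"                  # token from huggingface.co/settings/tokens
```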
63
+
64
+ ## Loading demos from Spaces
65
+
66
+ You can also use and remix existing Gradio demos on Hugging Face Spaces. For example, you could take two existing Gradio demos on Spaces and put them as separate tabs and create a new demo. You can run this new demo locally, or upload it to Spaces, allowing endless possibilities to remix and create new demos!
67
+
68
+ Here's an example that does exactly that:
69
+
70
+ ```python
71
+ import gradio as gr
72
+
73
+ with gr.Blocks() as demo:
+     with gr.Tab("Translate to Spanish"):
+         gr.load("gradio/en2es", src="spaces")
+     with gr.Tab("Translate to French"):
+         gr.load("abidlabs/en2fr", src="spaces")
+
+ demo.launch()
80
+ ```
81
+
82
+ Notice that we use `gr.load()`, the same method we used to load models using Inference Endpoints. However, here we specify that the `src` is `spaces` (Hugging Face Spaces).
83
+
84
+ Note: loading a Space in this way may result in slight differences from the original Space. In particular, any attributes that apply to the entire Blocks, such as the theme or custom CSS/JS, will not be loaded. You can copy these properties from the Space you are loading into your own `Blocks` object.
85
+
86
+ ## Demos with the `Pipeline` in `transformers`
87
+
88
+ Hugging Face's popular `transformers` library has a very easy-to-use abstraction, [`pipeline()`](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/pipelines#transformers.pipeline) that handles most of the complex code to offer a simple API for common tasks. By specifying the task and an (optional) model, you can build a demo around an existing model with few lines of Python:
89
+
90
+ ```python
91
+ import gradio as gr
92
+
93
+ from transformers import pipeline
94
+
95
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
96
+
97
+ def predict(text):
98
+     return pipe(text)[0]["translation_text"]
99
+
100
+ demo = gr.Interface(
101
+     fn=predict,
+     inputs='text',
+     outputs='text',
104
+ )
105
+
106
+ demo.launch()
107
+ ```
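The indexing in `predict` reflects the pipeline's output shape: a list with one dict per input sequence. A stand-in sketch (the translation text is made up; the real output comes from the model):

```python
# Hypothetical output of pipe("Hello world") for illustration.
fake_output = [{"translation_text": "Hola mundo"}]

def extract_translation(output):
    return output[0]["translation_text"]  # first (only) sequence's translation

print(extract_translation(fake_output))
```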
108
+
109
+ But `gradio` actually makes it even easier to convert a `pipeline` to a demo, simply by using the `gradio.Interface.from_pipeline` method, which skips the need to specify the input and output components:
110
+
111
+ ```python
112
+ from transformers import pipeline
113
+ import gradio as gr
114
+
115
+ pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")
116
+
117
+ demo = gr.Interface.from_pipeline(pipe)
118
+ demo.launch()
119
+ ```
120
+
121
+ The previous code produces the following interface, which you can try right here in your browser:
122
+
123
+ <gradio-app space="gradio/en2es"></gradio-app>
124
+
125
+
126
+ ## Recap
127
+
128
+ That's it! Let's recap the various ways Gradio and Hugging Face work together:
129
+
130
+ 1. You can build a demo around Inference Endpoints without having to load the model, by using `gr.load()`.
131
+ 2. You can host your Gradio demo on Hugging Face Spaces, either using the GUI or entirely in Python.
132
+ 3. You can load demos from Hugging Face Spaces to remix and create new Gradio demos using `gr.load()`.
133
+ 4. You can convert a `transformers` pipeline into a Gradio demo using `from_pipeline()`.
134
+
135
+ 🤗