freddyaboulton committed
Commit c1d186b
1 Parent(s): 3e3c913

Upload folder using huggingface_hub
.env ADDED
@@ -0,0 +1 @@
1
+ SERPAPI_API_KEY=3d009ed0a9764853b3e75ee2c70ceb3aabd92b810456307c5af711559ec4aff7
.gitignore ADDED
@@ -0,0 +1,12 @@
1
+ .eggs/
2
+ dist/
3
+ *.pyc
4
+ __pycache__/
5
+ *.py[cod]
6
+ *$py.class
7
+ __tmp/*
8
+ *.pyi
9
+ .mypycache
10
+ .ruff_cache
11
+ node_modules
12
+ backend/**/templates/
README.md CHANGED
@@ -1,12 +1,476 @@
1
  ---
2
- title: Gradio Agentchatbot
3
- emoji: 🐨
4
- colorFrom: yellow
5
- colorTo: gray
 
6
  sdk: gradio
7
- sdk_version: 4.31.4
8
- app_file: app.py
9
  pinned: false
 
10
  ---
11
 
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
+ tags: [gradio-custom-component, Chatbot, chatbot, agents, streaming, tools]
3
+ title: gradio_agentchatbot
4
+ short_description: Chat with agents and see the tools they use
5
+ colorFrom: blue
6
+ colorTo: yellow
7
  sdk: gradio
8
  pinned: false
9
+ app_file: space.py
10
  ---
11
 
12
+ # gradio_agentchatbot
13
+
14
+ ## 🤖 Chat UI for displaying the thoughts of LLM Agents 💭
15
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
16
+
17
+ The `gradio_agentchatbot` package introduces the `AgentChatbot` component which can display the thought process and tool usage of an LLM agent. Its message format is compatible with the OpenAI conversation message format.
18
+
19
+ For example usage with transformers agents, please see the [Transformers Usage](#transformers-usage) section.
20
+
21
+ For general usage, see the [General Usage](#general-usage) section.
22
+
23
+ For the API reference, see the [Initialization](#initialization) section.
24
+
25
+ ## Installation
26
+
27
+ ```bash
28
+ pip install gradio_agentchatbot
29
+ ```
30
+
31
+ or add `gradio_agentchatbot` to your `requirements.txt`.
32
+
33
+ ## Transformers Usage
34
+
35
+ For [transformers agents](https://huggingface.co/learn/cookbook/agents), you can use the `stream_from_transformers_agent` function and yield all subsequent messages.
36
+
37
+ ```python
38
+
39
+ import gradio as gr
40
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
41
+ from gradio_agentchatbot import AgentChatbot, stream_from_transformers_agent, ChatMessage
42
+ from dotenv import load_dotenv
43
+ from langchain.agents import load_tools
44
+
45
+ # to load SerpAPI key
46
+ load_dotenv()
47
+
48
+ # Import tool from Hub
49
+ image_generation_tool = load_tool("m-ric/text-to-image")
50
+
51
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
52
+
53
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
54
+ # Initialize the agent with both tools
55
+ agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)
56
+
57
+
58
+ def interact_with_agent(prompt, messages):
59
+ messages.append(ChatMessage(role="user", content=prompt))
60
+ yield messages
61
+ for msg in stream_from_transformers_agent(agent, prompt):
62
+ messages.append(msg)
63
+ yield messages
64
+ yield messages
65
+
66
+
67
+ with gr.Blocks() as demo:
68
+ chatbot = AgentChatbot(label="Agent")
69
+ text_input = gr.Textbox(lines=1, label="Chat Message")
70
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
71
+
72
+
73
+ if __name__ == "__main__":
74
+ demo.launch()
75
+ ```
76
+
77
+ ![AgentChatbot with transformers](https://gradio-builds.s3.amazonaws.com/demo-files/tf_agent_ui.gif)
78
+
79
+ ## General Usage
80
+
81
+ The `AgentChatbot` is similar to the core `Gradio` `Chatbot` but the key difference is in the expected data format of the `value` property.
82
+
83
+ Instead of a list of tuples (each of which can be either a string or a tuple), the value is a list of message instances. Each message is either a `ChatMessage` or a `ChatFileMessage`. These are pydantic classes that are compatible with the OpenAI [message format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages). This is how they are defined:
84
+
85
+ ```python
86
+ class ThoughtMetadata(GradioModel):
87
+ tool_name: Optional[str] = None
88
+ error: bool = False
89
+
90
+
91
+ class ChatMessage(GradioModel):
92
+ role: Literal["user", "assistant"]
93
+ content: str
94
+ thought: bool = False
95
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
96
+
97
+
98
+ class ChatFileMessage(GradioModel):
99
+ role: Literal["user", "assistant"]
100
+ file: FileData
101
+ thought: bool = False
102
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
103
+ alt_text: Optional[str] = None
104
+ ```
105
+
106
+ In order to properly display data in `AgentChatbot`, simply return a list of `ChatMessage` or `ChatFileMessage` instances from your python function. For example:
107
+
108
+ ```python
109
+ def chat_echo(prompt: str, messages: List[ChatMessage | ChatFileMessage]) -> List[ChatMessage | ChatFileMessage]:
110
+ messages.append(ChatMessage(role="user", content=prompt))
111
+ messages.append(ChatMessage(role="assistant", content=prompt))
112
+ return messages
113
+ ```
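Here is a minimal sketch of how the `thought` and file fields above might be combined in practice. The tool name and file path are hypothetical, and `FileData` is assumed to be importable from `gradio.data_classes`:

```python
from gradio.data_classes import FileData
from gradio_agentchatbot import ChatMessage, ChatFileMessage, ThoughtMetadata


def tool_call_demo(prompt: str, messages: list) -> list:
    # The user's request.
    messages.append(ChatMessage(role="user", content=prompt))
    # A "thought": the agent reporting that it is invoking a tool.
    messages.append(
        ChatMessage(
            role="assistant",
            content="Calling the image generation tool...",
            thought=True,
            thought_metadata=ThoughtMetadata(tool_name="image_generator"),
        )
    )
    # A file produced by the tool; images, audio, and video render inline,
    # while other file types appear as download links.
    messages.append(
        ChatFileMessage(
            role="assistant",
            file=FileData(path="outputs/generated.png"),  # hypothetical path
            alt_text="Image generated by the agent",
        )
    )
    # The final plain-text answer.
    messages.append(ChatMessage(role="assistant", content="Here is your image!"))
    return messages
```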
114
+
115
+ ### Why a different data format?
116
+
117
+ The OpenAI data format is the standard format for representing LLM conversations and most API providers have adopted it.
118
+ By using a compliant data format, it should be easier to use `AgentChatbot` with multiple API providers and libraries.
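As a rough illustration of what that compatibility buys you, a handler can forward the plain-text part of the conversation to any OpenAI-style chat API. The sketch below assumes only the `ChatMessage` fields shown above; the commented-out `call_chat_api` call is a stand-in for whatever client you actually use:

```python
from gradio_agentchatbot import ChatMessage


def to_openai_messages(messages) -> list[dict]:
    # Keep only plain text messages (skipping thoughts and file messages) and
    # convert them to the familiar {"role": ..., "content": ...} dicts.
    return [
        {"role": m.role, "content": m.content}
        for m in messages
        if isinstance(m, ChatMessage) and not m.thought
    ]


# payload = {"model": "...", "messages": to_openai_messages(chatbot_value)}
# response = call_chat_api(payload)  # stand-in for your API client of choice
```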
119
+
120
+
121
+ ### Why are pydantic data classes required?
122
+
123
+ It should improve the developer experience, since your editor will auto-complete the required fields and offer smart completion for the `role` field. You will also get an error message if your data does not conform to the expected format (see the short sketch below).
124
+
125
+ I will probably relax this in the future so that a plain python dict can be passed instead of one of the chat classes.
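As a quick sketch of the validation you get for free: because these are standard pydantic models, an invalid `role` is rejected with a `ValidationError` at construction time.

```python
from pydantic import ValidationError

from gradio_agentchatbot import ChatMessage

try:
    # "system" is not one of the allowed roles ("user" or "assistant").
    ChatMessage(role="system", content="You are a helpful assistant.")
except ValidationError as err:
    print(err)  # explains which field failed and why
```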
126
+
127
+
128
+
129
+ ## `API Reference`
130
+
131
+ ### Initialization
132
+
133
+ <table>
134
+ <thead>
135
+ <tr>
136
+ <th align="left">name</th>
137
+ <th align="left" style="width: 25%;">type</th>
138
+ <th align="left">default</th>
139
+ <th align="left">description</th>
140
+ </tr>
141
+ </thead>
142
+ <tbody>
143
+ <tr>
144
+ <td align="left"><code>value</code></td>
145
+ <td align="left" style="width: 25%;">
146
+
147
+ ```python
148
+ list[ChatMessage | ChatFileMessage]
149
+ | Callable
150
+ | None
151
+ ```
152
+
153
+ </td>
154
+ <td align="left"><code>None</code></td>
155
+ <td align="left">Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.</td>
156
+ </tr>
157
+
158
+ <tr>
159
+ <td align="left"><code>label</code></td>
160
+ <td align="left" style="width: 25%;">
161
+
162
+ ```python
163
+ str | None
164
+ ```
165
+
166
+ </td>
167
+ <td align="left"><code>None</code></td>
168
+ <td align="left">The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.</td>
169
+ </tr>
170
+
171
+ <tr>
172
+ <td align="left"><code>every</code></td>
173
+ <td align="left" style="width: 25%;">
174
+
175
+ ```python
176
+ float | None
177
+ ```
178
+
179
+ </td>
180
+ <td align="left"><code>None</code></td>
181
+ <td align="left">If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.</td>
182
+ </tr>
183
+
184
+ <tr>
185
+ <td align="left"><code>show_label</code></td>
186
+ <td align="left" style="width: 25%;">
187
+
188
+ ```python
189
+ bool | None
190
+ ```
191
+
192
+ </td>
193
+ <td align="left"><code>None</code></td>
194
+ <td align="left">if True, will display label.</td>
195
+ </tr>
196
+
197
+ <tr>
198
+ <td align="left"><code>container</code></td>
199
+ <td align="left" style="width: 25%;">
200
+
201
+ ```python
202
+ bool
203
+ ```
204
+
205
+ </td>
206
+ <td align="left"><code>True</code></td>
207
+ <td align="left">If True, will place the component in a container - providing some extra padding around the border.</td>
208
+ </tr>
209
+
210
+ <tr>
211
+ <td align="left"><code>scale</code></td>
212
+ <td align="left" style="width: 25%;">
213
+
214
+ ```python
215
+ int | None
216
+ ```
217
+
218
+ </td>
219
+ <td align="left"><code>None</code></td>
220
+ <td align="left">relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.</td>
221
+ </tr>
222
+
223
+ <tr>
224
+ <td align="left"><code>min_width</code></td>
225
+ <td align="left" style="width: 25%;">
226
+
227
+ ```python
228
+ int
229
+ ```
230
+
231
+ </td>
232
+ <td align="left"><code>160</code></td>
233
+ <td align="left">minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.</td>
234
+ </tr>
235
+
236
+ <tr>
237
+ <td align="left"><code>visible</code></td>
238
+ <td align="left" style="width: 25%;">
239
+
240
+ ```python
241
+ bool
242
+ ```
243
+
244
+ </td>
245
+ <td align="left"><code>True</code></td>
246
+ <td align="left">If False, component will be hidden.</td>
247
+ </tr>
248
+
249
+ <tr>
250
+ <td align="left"><code>elem_id</code></td>
251
+ <td align="left" style="width: 25%;">
252
+
253
+ ```python
254
+ str | None
255
+ ```
256
+
257
+ </td>
258
+ <td align="left"><code>None</code></td>
259
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
260
+ </tr>
261
+
262
+ <tr>
263
+ <td align="left"><code>elem_classes</code></td>
264
+ <td align="left" style="width: 25%;">
265
+
266
+ ```python
267
+ list[str] | str | None
268
+ ```
269
+
270
+ </td>
271
+ <td align="left"><code>None</code></td>
272
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
273
+ </tr>
274
+
275
+ <tr>
276
+ <td align="left"><code>render</code></td>
277
+ <td align="left" style="width: 25%;">
278
+
279
+ ```python
280
+ bool
281
+ ```
282
+
283
+ </td>
284
+ <td align="left"><code>True</code></td>
285
+ <td align="left">If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.</td>
286
+ </tr>
287
+
288
+ <tr>
289
+ <td align="left"><code>key</code></td>
290
+ <td align="left" style="width: 25%;">
291
+
292
+ ```python
293
+ int | str | None
294
+ ```
295
+
296
+ </td>
297
+ <td align="left"><code>None</code></td>
298
+ <td align="left">if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.</td>
299
+ </tr>
300
+
301
+ <tr>
302
+ <td align="left"><code>height</code></td>
303
+ <td align="left" style="width: 25%;">
304
+
305
+ ```python
306
+ int | str | None
307
+ ```
308
+
309
+ </td>
310
+ <td align="left"><code>None</code></td>
311
+ <td align="left">The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed.</td>
312
+ </tr>
313
+
314
+ <tr>
315
+ <td align="left"><code>latex_delimiters</code></td>
316
+ <td align="left" style="width: 25%;">
317
+
318
+ ```python
319
+ list[dict[str, str | bool]] | None
320
+ ```
321
+
322
+ </td>
323
+ <td align="left"><code>None</code></td>
324
+ <td align="left">A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).</td>
325
+ </tr>
326
+
327
+ <tr>
328
+ <td align="left"><code>rtl</code></td>
329
+ <td align="left" style="width: 25%;">
330
+
331
+ ```python
332
+ bool
333
+ ```
334
+
335
+ </td>
336
+ <td align="left"><code>False</code></td>
337
+ <td align="left">If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.</td>
338
+ </tr>
339
+
340
+ <tr>
341
+ <td align="left"><code>show_share_button</code></td>
342
+ <td align="left" style="width: 25%;">
343
+
344
+ ```python
345
+ bool | None
346
+ ```
347
+
348
+ </td>
349
+ <td align="left"><code>None</code></td>
350
+ <td align="left">If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.</td>
351
+ </tr>
352
+
353
+ <tr>
354
+ <td align="left"><code>show_copy_button</code></td>
355
+ <td align="left" style="width: 25%;">
356
+
357
+ ```python
358
+ bool
359
+ ```
360
+
361
+ </td>
362
+ <td align="left"><code>False</code></td>
363
+ <td align="left">If True, will show a copy button for each chatbot message.</td>
364
+ </tr>
365
+
366
+ <tr>
367
+ <td align="left"><code>avatar_images</code></td>
368
+ <td align="left" style="width: 25%;">
369
+
370
+ ```python
371
+ tuple[str | Path | None, str | Path | None] | None
372
+ ```
373
+
374
+ </td>
375
+ <td align="left"><code>None</code></td>
376
+ <td align="left">Tuple of two avatar image paths or URLs for user and bot (in that order). Pass None for either the user or bot image to skip. Must be within the working directory of the Gradio app or an external URL.</td>
377
+ </tr>
378
+
379
+ <tr>
380
+ <td align="left"><code>sanitize_html</code></td>
381
+ <td align="left" style="width: 25%;">
382
+
383
+ ```python
384
+ bool
385
+ ```
386
+
387
+ </td>
388
+ <td align="left"><code>True</code></td>
389
+ <td align="left">If False, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities.</td>
390
+ </tr>
391
+
392
+ <tr>
393
+ <td align="left"><code>render_markdown</code></td>
394
+ <td align="left" style="width: 25%;">
395
+
396
+ ```python
397
+ bool
398
+ ```
399
+
400
+ </td>
401
+ <td align="left"><code>True</code></td>
402
+ <td align="left">If False, will disable Markdown rendering for chatbot messages.</td>
403
+ </tr>
404
+
405
+ <tr>
406
+ <td align="left"><code>bubble_full_width</code></td>
407
+ <td align="left" style="width: 25%;">
408
+
409
+ ```python
410
+ bool
411
+ ```
412
+
413
+ </td>
414
+ <td align="left"><code>True</code></td>
415
+ <td align="left">If False, the chat bubble will fit to the content of the message. If True (default), the chat bubble will be the full width of the component.</td>
416
+ </tr>
417
+
418
+ <tr>
419
+ <td align="left"><code>line_breaks</code></td>
420
+ <td align="left" style="width: 25%;">
421
+
422
+ ```python
423
+ bool
424
+ ```
425
+
426
+ </td>
427
+ <td align="left"><code>True</code></td>
428
+ <td align="left">If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies if `render_markdown` is True.</td>
429
+ </tr>
430
+
431
+ <tr>
432
+ <td align="left"><code>likeable</code></td>
433
+ <td align="left" style="width: 25%;">
434
+
435
+ ```python
436
+ bool
437
+ ```
438
+
439
+ </td>
440
+ <td align="left"><code>False</code></td>
441
+ <td align="left">Whether the chat messages display a like or dislike button. Set automatically by the .like method but has to be present in the signature for it to show up in the config.</td>
442
+ </tr>
443
+
444
+ <tr>
445
+ <td align="left"><code>layout</code></td>
446
+ <td align="left" style="width: 25%;">
447
+
448
+ ```python
449
+ Literal["panel", "bubble"] | None
450
+ ```
451
+
452
+ </td>
453
+ <td align="left"><code>None</code></td>
454
+ <td align="left">If "panel", will display the chatbot in an LLM-style layout. If "bubble", will display the chatbot with message bubbles, with the user and bot messages on alternating sides. Will default to "bubble".</td>
455
+ </tr>
456
+
457
+ <tr>
458
+ <td align="left"><code>placeholder</code></td>
459
+ <td align="left" style="width: 25%;">
460
+
461
+ ```python
462
+ str | None
463
+ ```
464
+
465
+ </td>
466
+ <td align="left"><code>None</code></td>
467
+ <td align="left">a placeholder message to display in the chatbot when it is empty. Centered vertically and horizontally in the AgentChatbot. Supports Markdown and HTML. If None, no placeholder is displayed.</td>
468
+ </tr>
469
+ </tbody></table>
470
+
471
+
472
+ ### Events
473
+
474
+ | name | description |
475
+ |:-----|:------------|
476
+ | `change` | Triggered when the value of the AgentChatbot changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
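A minimal sketch of wiring the `change` listener (the status box and handler below are purely illustrative):

```python
import gradio as gr
from gradio_agentchatbot import AgentChatbot

with gr.Blocks() as demo:
    chatbot = AgentChatbot(label="Agent")
    status = gr.Textbox(label="Status", interactive=False)

    # `change` fires on user-driven updates and programmatic updates alike.
    chatbot.change(lambda _history: "Conversation updated", chatbot, status)

if __name__ == "__main__":
    demo.launch()
```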
__init__.py ADDED
File without changes
app.py ADDED
@@ -0,0 +1,54 @@
1
+ import gradio as gr
2
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
3
+ from gradio_agentchatbot import (
4
+ AgentChatbot,
5
+ stream_from_transformers_agent,
6
+ ChatMessage,
7
+ )
8
+ from dotenv import load_dotenv
9
+ from langchain.agents import load_tools
10
+ from pathlib import Path
11
+
12
+ current_dir = Path(__file__).parent
13
+
14
+ load_dotenv()
15
+
16
+ # Import tool from Hub
17
+ image_generation_tool = load_tool("m-ric/text-to-image")
18
+
19
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
20
+
21
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
22
+ # Initialize the agent with both tools
23
+ agent = ReactCodeAgent(
24
+ tools=[image_generation_tool, search_tool], llm_engine=llm_engine
25
+ )
26
+
27
+
28
+ def interact_with_agent(prompt, messages):
29
+ messages.append(ChatMessage(role="user", content=prompt))
30
+ yield messages
31
+ for msg in stream_from_transformers_agent(agent, prompt):
32
+ messages.append(msg)
33
+ yield messages
34
+ yield messages
35
+
36
+
37
+ with gr.Blocks() as demo:
38
+ with gr.Tabs():
39
+ with gr.Tab("Demo"):
40
+ chatbot = AgentChatbot(
41
+ label="Agent",
42
+ avatar_images=[
43
+ None,
44
+ "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png",
45
+ ],
46
+ )
47
+ text_input = gr.Textbox(lines=1, label="Chat Message")
48
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
49
+ with gr.Tab("Docs"):
50
+ gr.Markdown(Path(current_dir / "docs.md").read_text())
51
+
52
+
53
+ if __name__ == "__main__":
54
+ demo.launch()
docs.md ADDED
@@ -0,0 +1,465 @@
1
+ # gradio_agentchatbot
2
+
3
+ ## 🤖 Chat UI for displaying the thoughts of LLM Agents 💭
4
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
5
+
6
+ The `gradio_agentchatbot` package introduces the `AgentChatbot` component which can display the thought process and tool usage of an LLM agent. Its message format is compatible with the OpenAI conversation message format.
7
+
8
+ For example usage with transformers agents, please see the [Transformers Usage](#transformers-usage) section.
9
+
10
+ For general usage, see the [General Usage](#general-usage) section.
11
+
12
+ For the API reference, see the [Initialization](#initialization) section.
13
+
14
+ ## Installation
15
+
16
+ ```bash
17
+ pip install gradio_agentchatbot
18
+ ```
19
+
20
+ or add `gradio_agentchatbot` to your `requirements.txt`.
21
+
22
+ ## Transformers Usage
23
+
24
+ For [transformers agents](https://huggingface.co/learn/cookbook/agents), you can use the `stream_from_transformers_agent` function and yield all subsequent messages.
25
+
26
+ ```python
27
+
28
+ import gradio as gr
29
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
30
+ from gradio_agentchatbot import AgentChatbot, stream_from_transformers_agent, ChatMessage
31
+ from dotenv import load_dotenv
32
+ from langchain.agents import load_tools
33
+
34
+ # to load SerpAPI key
35
+ load_dotenv()
36
+
37
+ # Import tool from Hub
38
+ image_generation_tool = load_tool("m-ric/text-to-image")
39
+
40
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
41
+
42
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
43
+ # Initialize the agent with both tools
44
+ agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)
45
+
46
+
47
+ def interact_with_agent(prompt, messages):
48
+ messages.append(ChatMessage(role="user", content=prompt))
49
+ yield messages
50
+ for msg in stream_from_transformers_agent(agent, prompt):
51
+ messages.append(msg)
52
+ yield messages
53
+ yield messages
54
+
55
+
56
+ with gr.Blocks() as demo:
57
+ chatbot = AgentChatbot(label="Agent")
58
+ text_input = gr.Textbox(lines=1, label="Chat Message")
59
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
60
+
61
+
62
+ if __name__ == "__main__":
63
+ demo.launch()
64
+ ```
65
+
66
+ ![AgentChatbot with transformers](https://gradio-builds.s3.amazonaws.com/demo-files/tf_agent_ui.gif)
67
+
68
+ ## General Usage
69
+
70
+ The `AgentChatbot` is similar to the core `Gradio` `Chatbot` but the key difference is in the expected data format of the `value` property.
71
+
72
+ Instead of a list of tuples (each of which can be either a string or a tuple), the value is a list of message instances. Each message is either a `ChatMessage` or a `ChatFileMessage`. These are pydantic classes that are compatible with the OpenAI [message format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages). This is how they are defined:
73
+
74
+ ```python
75
+ class ThoughtMetadata(GradioModel):
76
+ tool_name: Optional[str] = None
77
+ error: bool = False
78
+
79
+
80
+ class ChatMessage(GradioModel):
81
+ role: Literal["user", "assistant"]
82
+ content: str
83
+ thought: bool = False
84
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
85
+
86
+
87
+ class ChatFileMessage(GradioModel):
88
+ role: Literal["user", "assistant"]
89
+ file: FileData
90
+ thought: bool = False
91
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
92
+ alt_text: Optional[str] = None
93
+ ```
94
+
95
+ In order to properly display data in `AgentChatbot`, simply return a list of `ChatMessage` or `ChatFileMessage` instances from your python function. For example:
96
+
97
+ ```python
98
+ def chat_echo(prompt: str, messages: List[ChatMessage | ChatFileMessage]) -> List[ChatMessage | ChatFileMessage]:
99
+ messages.append(ChatMessage(role="user", content=prompt))
100
+ messages.append(ChatMessage(role="assistant", content=prompt))
101
+ return messages
102
+ ```
103
+
104
+ ### Why a different data format?
105
+
106
+ The OpenAI data format is the standard format for representing LLM conversations and most API providers have adopted it.
107
+ By using a compliant data format, it should be easier to use `AgentChatbot` with multiple API providers and libraries.
108
+
109
+
110
+ ### Why are pydantic data classes required?
111
+
112
+ It should improve the developer experience, since your editor will auto-complete the required fields and offer smart completion for the `role` field. You will also get an error message if your data does not conform to the expected format.
113
+
114
+ I will probably relax this in the future so that a plain python dict can be passed instead of one of the chat classes.
115
+
116
+
117
+
118
+ ## `API Reference`
119
+
120
+ ### Initialization
121
+
122
+ <table>
123
+ <thead>
124
+ <tr>
125
+ <th align="left">name</th>
126
+ <th align="left" style="width: 25%;">type</th>
127
+ <th align="left">default</th>
128
+ <th align="left">description</th>
129
+ </tr>
130
+ </thead>
131
+ <tbody>
132
+ <tr>
133
+ <td align="left"><code>value</code></td>
134
+ <td align="left" style="width: 25%;">
135
+
136
+ ```python
137
+ list[ChatMessage | ChatFileMessage]
138
+ | Callable
139
+ | None
140
+ ```
141
+
142
+ </td>
143
+ <td align="left"><code>None</code></td>
144
+ <td align="left">Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.</td>
145
+ </tr>
146
+
147
+ <tr>
148
+ <td align="left"><code>label</code></td>
149
+ <td align="left" style="width: 25%;">
150
+
151
+ ```python
152
+ str | None
153
+ ```
154
+
155
+ </td>
156
+ <td align="left"><code>None</code></td>
157
+ <td align="left">The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.</td>
158
+ </tr>
159
+
160
+ <tr>
161
+ <td align="left"><code>every</code></td>
162
+ <td align="left" style="width: 25%;">
163
+
164
+ ```python
165
+ float | None
166
+ ```
167
+
168
+ </td>
169
+ <td align="left"><code>None</code></td>
170
+ <td align="left">If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.</td>
171
+ </tr>
172
+
173
+ <tr>
174
+ <td align="left"><code>show_label</code></td>
175
+ <td align="left" style="width: 25%;">
176
+
177
+ ```python
178
+ bool | None
179
+ ```
180
+
181
+ </td>
182
+ <td align="left"><code>None</code></td>
183
+ <td align="left">if True, will display label.</td>
184
+ </tr>
185
+
186
+ <tr>
187
+ <td align="left"><code>container</code></td>
188
+ <td align="left" style="width: 25%;">
189
+
190
+ ```python
191
+ bool
192
+ ```
193
+
194
+ </td>
195
+ <td align="left"><code>True</code></td>
196
+ <td align="left">If True, will place the component in a container - providing some extra padding around the border.</td>
197
+ </tr>
198
+
199
+ <tr>
200
+ <td align="left"><code>scale</code></td>
201
+ <td align="left" style="width: 25%;">
202
+
203
+ ```python
204
+ int | None
205
+ ```
206
+
207
+ </td>
208
+ <td align="left"><code>None</code></td>
209
+ <td align="left">relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.</td>
210
+ </tr>
211
+
212
+ <tr>
213
+ <td align="left"><code>min_width</code></td>
214
+ <td align="left" style="width: 25%;">
215
+
216
+ ```python
217
+ int
218
+ ```
219
+
220
+ </td>
221
+ <td align="left"><code>160</code></td>
222
+ <td align="left">minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.</td>
223
+ </tr>
224
+
225
+ <tr>
226
+ <td align="left"><code>visible</code></td>
227
+ <td align="left" style="width: 25%;">
228
+
229
+ ```python
230
+ bool
231
+ ```
232
+
233
+ </td>
234
+ <td align="left"><code>True</code></td>
235
+ <td align="left">If False, component will be hidden.</td>
236
+ </tr>
237
+
238
+ <tr>
239
+ <td align="left"><code>elem_id</code></td>
240
+ <td align="left" style="width: 25%;">
241
+
242
+ ```python
243
+ str | None
244
+ ```
245
+
246
+ </td>
247
+ <td align="left"><code>None</code></td>
248
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
249
+ </tr>
250
+
251
+ <tr>
252
+ <td align="left"><code>elem_classes</code></td>
253
+ <td align="left" style="width: 25%;">
254
+
255
+ ```python
256
+ list[str] | str | None
257
+ ```
258
+
259
+ </td>
260
+ <td align="left"><code>None</code></td>
261
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
262
+ </tr>
263
+
264
+ <tr>
265
+ <td align="left"><code>render</code></td>
266
+ <td align="left" style="width: 25%;">
267
+
268
+ ```python
269
+ bool
270
+ ```
271
+
272
+ </td>
273
+ <td align="left"><code>True</code></td>
274
+ <td align="left">If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.</td>
275
+ </tr>
276
+
277
+ <tr>
278
+ <td align="left"><code>key</code></td>
279
+ <td align="left" style="width: 25%;">
280
+
281
+ ```python
282
+ int | str | None
283
+ ```
284
+
285
+ </td>
286
+ <td align="left"><code>None</code></td>
287
+ <td align="left">if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.</td>
288
+ </tr>
289
+
290
+ <tr>
291
+ <td align="left"><code>height</code></td>
292
+ <td align="left" style="width: 25%;">
293
+
294
+ ```python
295
+ int | str | None
296
+ ```
297
+
298
+ </td>
299
+ <td align="left"><code>None</code></td>
300
+ <td align="left">The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed.</td>
301
+ </tr>
302
+
303
+ <tr>
304
+ <td align="left"><code>latex_delimiters</code></td>
305
+ <td align="left" style="width: 25%;">
306
+
307
+ ```python
308
+ list[dict[str, str | bool]] | None
309
+ ```
310
+
311
+ </td>
312
+ <td align="left"><code>None</code></td>
313
+ <td align="left">A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).</td>
314
+ </tr>
315
+
316
+ <tr>
317
+ <td align="left"><code>rtl</code></td>
318
+ <td align="left" style="width: 25%;">
319
+
320
+ ```python
321
+ bool
322
+ ```
323
+
324
+ </td>
325
+ <td align="left"><code>False</code></td>
326
+ <td align="left">If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.</td>
327
+ </tr>
328
+
329
+ <tr>
330
+ <td align="left"><code>show_share_button</code></td>
331
+ <td align="left" style="width: 25%;">
332
+
333
+ ```python
334
+ bool | None
335
+ ```
336
+
337
+ </td>
338
+ <td align="left"><code>None</code></td>
339
+ <td align="left">If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.</td>
340
+ </tr>
341
+
342
+ <tr>
343
+ <td align="left"><code>show_copy_button</code></td>
344
+ <td align="left" style="width: 25%;">
345
+
346
+ ```python
347
+ bool
348
+ ```
349
+
350
+ </td>
351
+ <td align="left"><code>False</code></td>
352
+ <td align="left">If True, will show a copy button for each chatbot message.</td>
353
+ </tr>
354
+
355
+ <tr>
356
+ <td align="left"><code>avatar_images</code></td>
357
+ <td align="left" style="width: 25%;">
358
+
359
+ ```python
360
+ tuple[str | Path | None, str | Path | None] | None
361
+ ```
362
+
363
+ </td>
364
+ <td align="left"><code>None</code></td>
365
+ <td align="left">Tuple of two avatar image paths or URLs for user and bot (in that order). Pass None for either the user or bot image to skip. Must be within the working directory of the Gradio app or an external URL.</td>
366
+ </tr>
367
+
368
+ <tr>
369
+ <td align="left"><code>sanitize_html</code></td>
370
+ <td align="left" style="width: 25%;">
371
+
372
+ ```python
373
+ bool
374
+ ```
375
+
376
+ </td>
377
+ <td align="left"><code>True</code></td>
378
+ <td align="left">If False, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities.</td>
379
+ </tr>
380
+
381
+ <tr>
382
+ <td align="left"><code>render_markdown</code></td>
383
+ <td align="left" style="width: 25%;">
384
+
385
+ ```python
386
+ bool
387
+ ```
388
+
389
+ </td>
390
+ <td align="left"><code>True</code></td>
391
+ <td align="left">If False, will disable Markdown rendering for chatbot messages.</td>
392
+ </tr>
393
+
394
+ <tr>
395
+ <td align="left"><code>bubble_full_width</code></td>
396
+ <td align="left" style="width: 25%;">
397
+
398
+ ```python
399
+ bool
400
+ ```
401
+
402
+ </td>
403
+ <td align="left"><code>True</code></td>
404
+ <td align="left">If False, the chat bubble will fit to the content of the message. If True (default), the chat bubble will be the full width of the component.</td>
405
+ </tr>
406
+
407
+ <tr>
408
+ <td align="left"><code>line_breaks</code></td>
409
+ <td align="left" style="width: 25%;">
410
+
411
+ ```python
412
+ bool
413
+ ```
414
+
415
+ </td>
416
+ <td align="left"><code>True</code></td>
417
+ <td align="left">If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies if `render_markdown` is True.</td>
418
+ </tr>
419
+
420
+ <tr>
421
+ <td align="left"><code>likeable</code></td>
422
+ <td align="left" style="width: 25%;">
423
+
424
+ ```python
425
+ bool
426
+ ```
427
+
428
+ </td>
429
+ <td align="left"><code>False</code></td>
430
+ <td align="left">Whether the chat messages display a like or dislike button. Set automatically by the .like method but has to be present in the signature for it to show up in the config.</td>
431
+ </tr>
432
+
433
+ <tr>
434
+ <td align="left"><code>layout</code></td>
435
+ <td align="left" style="width: 25%;">
436
+
437
+ ```python
438
+ Literal["panel", "bubble"] | None
439
+ ```
440
+
441
+ </td>
442
+ <td align="left"><code>None</code></td>
443
+ <td align="left">If "panel", will display the chatbot in an LLM-style layout. If "bubble", will display the chatbot with message bubbles, with the user and bot messages on alternating sides. Will default to "bubble".</td>
444
+ </tr>
445
+
446
+ <tr>
447
+ <td align="left"><code>placeholder</code></td>
448
+ <td align="left" style="width: 25%;">
449
+
450
+ ```python
451
+ str | None
452
+ ```
453
+
454
+ </td>
455
+ <td align="left"><code>None</code></td>
456
+ <td align="left">a placeholder message to display in the chatbot when it is empty. Centered vertically and horizontally in the AgentChatbot. Supports Markdown and HTML. If None, no placeholder is displayed.</td>
457
+ </tr>
458
+ </tbody></table>
459
+
460
+
461
+ ### Events
462
+
463
+ | name | description |
464
+ |:-----|:------------|
465
+ | `change` | Triggered when the value of the AgentChatbot changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
requirements.txt ADDED
@@ -0,0 +1,7 @@
1
+ gradio_agentchatbot
2
+ git+https://github.com/huggingface/transformers.git#egg=transformers[agents]
3
+ langchain
4
+ sentence-transformers
5
+ faiss-cpu
6
+ serpapi
7
+ google-search-results
src/.gitignore ADDED
@@ -0,0 +1,12 @@
1
+ .eggs/
2
+ dist/
3
+ *.pyc
4
+ __pycache__/
5
+ *.py[cod]
6
+ *$py.class
7
+ __tmp/*
8
+ *.pyi
9
+ .mypycache
10
+ .ruff_cache
11
+ node_modules
12
+ backend/**/templates/
src/README.md ADDED
@@ -0,0 +1,476 @@
1
+ ---
2
+ tags: [gradio-custom-component, Chatbot, chatbot, agents, streaming, tools]
3
+ title: gradio_agentchatbot
4
+ short_description: Chat with agents and see the tools they use
5
+ colorFrom: blue
6
+ colorTo: yellow
7
+ sdk: gradio
8
+ pinned: false
9
+ app_file: space.py
10
+ ---
11
+
12
+ # gradio_agentchatbot
13
+
14
+ ## 🤖 Chat UI for displaying the thoughts of LLM Agents 💭
15
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
16
+
17
+ The `gradio_agentchatbot` package introduces the `AgentChatbot` component which can display the thought process and tool usage of an LLM agent. Its message format is compatible with the OpenAI conversation message format.
18
+
19
+ For example usage with transformers agents, please see the [Transformers Usage](#transformers-usage) section.
20
+
21
+ For general usage, see the [General Usage](#general-usage) section.
22
+
23
+ For the API reference, see the [Initialization](#initialization) section.
24
+
25
+ ## Installation
26
+
27
+ ```bash
28
+ pip install gradio_agentchatbot
29
+ ```
30
+
31
+ or add `gradio_agentchatbot` to your `requirements.txt`.
32
+
33
+ ## Transformers Usage
34
+
35
+ For [transformers agents](https://huggingface.co/learn/cookbook/agents), you can use the `stream_from_transformers_agent` function and yield all subsequent messages.
36
+
37
+ ```python
38
+
39
+ import gradio as gr
40
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
41
+ from gradio_agentchatbot import AgentChatbot, stream_from_transformers_agent, ChatMessage
42
+ from dotenv import load_dotenv
43
+ from langchain.agents import load_tools
44
+
45
+ # to load SerpAPI key
46
+ load_dotenv()
47
+
48
+ # Import tool from Hub
49
+ image_generation_tool = load_tool("m-ric/text-to-image")
50
+
51
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
52
+
53
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
54
+ # Initialize the agent with both tools
55
+ agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)
56
+
57
+
58
+ def interact_with_agent(prompt, messages):
59
+ messages.append(ChatMessage(role="user", content=prompt))
60
+ yield messages
61
+ for msg in stream_from_transformers_agent(agent, prompt):
62
+ messages.append(msg)
63
+ yield messages
64
+ yield messages
65
+
66
+
67
+ with gr.Blocks() as demo:
68
+ chatbot = AgentChatbot(label="Agent")
69
+ text_input = gr.Textbox(lines=1, label="Chat Message")
70
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
71
+
72
+
73
+ if __name__ == "__main__":
74
+ demo.launch()
75
+ ```
76
+
77
+ ![AgentChatbot with transformers](https://gradio-builds.s3.amazonaws.com/demo-files/tf_agent_ui.gif)
78
+
79
+ ## General Usage
80
+
81
+ The `AgentChatbot` is similar to the core `Gradio` `Chatbot` but the key difference is in the expected data format of the `value` property.
82
+
83
+ Instead of a list of tuples (each of which can be either a string or a tuple), the value is a list of message instances. Each message is either a `ChatMessage` or a `ChatFileMessage`. These are pydantic classes that are compatible with the OpenAI [message format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages). This is how they are defined:
84
+
85
+ ```python
86
+ class ThoughtMetadata(GradioModel):
87
+ tool_name: Optional[str] = None
88
+ error: bool = False
89
+
90
+
91
+ class ChatMessage(GradioModel):
92
+ role: Literal["user", "assistant"]
93
+ content: str
94
+ thought: bool = False
95
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
96
+
97
+
98
+ class ChatFileMessage(GradioModel):
99
+ role: Literal["user", "assistant"]
100
+ file: FileData
101
+ thought: bool = False
102
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
103
+ alt_text: Optional[str] = None
104
+ ```
105
+
106
+ In order to properly display data in `AgentChatbot`, simply return a list of `ChatMessage` or `ChatFileMessage` instances from your python function. For example:
107
+
108
+ ```python
109
+ def chat_echo(prompt: str, messages: List[ChatMessage | ChatFileMessage]) -> List[ChatMessage | ChatFileMessage]:
110
+ messages.append(ChatMessage(role="user", content=prompt))
111
+ messages.append(ChatMessage(role="assistant", content=prompt))
112
+ return messages
113
+ ```
114
+
115
+ ### Why a different data format?
116
+
117
+ The OpenAI data format is the standard format for representing LLM conversations and most API providers have adopted it.
118
+ By using a compliant data format, it should be easier to use `AgentChatbot` with multiple API providers and libraries.
119
+
120
+
121
+ ### Why are pydantic data classes required?
122
+
123
+ It should improve the developer experience, since your editor will auto-complete the required fields and offer smart completion for the `role` field. You will also get an error message if your data does not conform to the expected format.
124
+
125
+ I will probably relax this in the future so that a plain python dict can be passed instead of one of the chat classes.
126
+
127
+
128
+
129
+ ## `API Reference`
130
+
131
+ ### Initialization
132
+
133
+ <table>
134
+ <thead>
135
+ <tr>
136
+ <th align="left">name</th>
137
+ <th align="left" style="width: 25%;">type</th>
138
+ <th align="left">default</th>
139
+ <th align="left">description</th>
140
+ </tr>
141
+ </thead>
142
+ <tbody>
143
+ <tr>
144
+ <td align="left"><code>value</code></td>
145
+ <td align="left" style="width: 25%;">
146
+
147
+ ```python
148
+ list[ChatMessage | ChatFileMessage]
149
+ | Callable
150
+ | None
151
+ ```
152
+
153
+ </td>
154
+ <td align="left"><code>None</code></td>
155
+ <td align="left">Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.</td>
156
+ </tr>
157
+
158
+ <tr>
159
+ <td align="left"><code>label</code></td>
160
+ <td align="left" style="width: 25%;">
161
+
162
+ ```python
163
+ str | None
164
+ ```
165
+
166
+ </td>
167
+ <td align="left"><code>None</code></td>
168
+ <td align="left">The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.</td>
169
+ </tr>
170
+
171
+ <tr>
172
+ <td align="left"><code>every</code></td>
173
+ <td align="left" style="width: 25%;">
174
+
175
+ ```python
176
+ float | None
177
+ ```
178
+
179
+ </td>
180
+ <td align="left"><code>None</code></td>
181
+ <td align="left">If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.</td>
182
+ </tr>
183
+
184
+ <tr>
185
+ <td align="left"><code>show_label</code></td>
186
+ <td align="left" style="width: 25%;">
187
+
188
+ ```python
189
+ bool | None
190
+ ```
191
+
192
+ </td>
193
+ <td align="left"><code>None</code></td>
194
+ <td align="left">if True, will display label.</td>
195
+ </tr>
196
+
197
+ <tr>
198
+ <td align="left"><code>container</code></td>
199
+ <td align="left" style="width: 25%;">
200
+
201
+ ```python
202
+ bool
203
+ ```
204
+
205
+ </td>
206
+ <td align="left"><code>True</code></td>
207
+ <td align="left">If True, will place the component in a container - providing some extra padding around the border.</td>
208
+ </tr>
209
+
210
+ <tr>
211
+ <td align="left"><code>scale</code></td>
212
+ <td align="left" style="width: 25%;">
213
+
214
+ ```python
215
+ int | None
216
+ ```
217
+
218
+ </td>
219
+ <td align="left"><code>None</code></td>
220
+ <td align="left">relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.</td>
221
+ </tr>
222
+
223
+ <tr>
224
+ <td align="left"><code>min_width</code></td>
225
+ <td align="left" style="width: 25%;">
226
+
227
+ ```python
228
+ int
229
+ ```
230
+
231
+ </td>
232
+ <td align="left"><code>160</code></td>
233
+ <td align="left">minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.</td>
234
+ </tr>
235
+
236
+ <tr>
237
+ <td align="left"><code>visible</code></td>
238
+ <td align="left" style="width: 25%;">
239
+
240
+ ```python
241
+ bool
242
+ ```
243
+
244
+ </td>
245
+ <td align="left"><code>True</code></td>
246
+ <td align="left">If False, component will be hidden.</td>
247
+ </tr>
248
+
249
+ <tr>
250
+ <td align="left"><code>elem_id</code></td>
251
+ <td align="left" style="width: 25%;">
252
+
253
+ ```python
254
+ str | None
255
+ ```
256
+
257
+ </td>
258
+ <td align="left"><code>None</code></td>
259
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
260
+ </tr>
261
+
262
+ <tr>
263
+ <td align="left"><code>elem_classes</code></td>
264
+ <td align="left" style="width: 25%;">
265
+
266
+ ```python
267
+ list[str] | str | None
268
+ ```
269
+
270
+ </td>
271
+ <td align="left"><code>None</code></td>
272
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
273
+ </tr>
274
+
275
+ <tr>
276
+ <td align="left"><code>render</code></td>
277
+ <td align="left" style="width: 25%;">
278
+
279
+ ```python
280
+ bool
281
+ ```
282
+
283
+ </td>
284
+ <td align="left"><code>True</code></td>
285
+ <td align="left">If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.</td>
286
+ </tr>
287
+
288
+ <tr>
289
+ <td align="left"><code>key</code></td>
290
+ <td align="left" style="width: 25%;">
291
+
292
+ ```python
293
+ int | str | None
294
+ ```
295
+
296
+ </td>
297
+ <td align="left"><code>None</code></td>
298
+ <td align="left">if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.</td>
299
+ </tr>
300
+
301
+ <tr>
302
+ <td align="left"><code>height</code></td>
303
+ <td align="left" style="width: 25%;">
304
+
305
+ ```python
306
+ int | str | None
307
+ ```
308
+
309
+ </td>
310
+ <td align="left"><code>None</code></td>
311
+ <td align="left">The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed.</td>
312
+ </tr>
313
+
314
+ <tr>
315
+ <td align="left"><code>latex_delimiters</code></td>
316
+ <td align="left" style="width: 25%;">
317
+
318
+ ```python
319
+ list[dict[str, str | bool]] | None
320
+ ```
321
+
322
+ </td>
323
+ <td align="left"><code>None</code></td>
324
+ <td align="left">A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).</td>
325
+ </tr>
326
+
327
+ <tr>
328
+ <td align="left"><code>rtl</code></td>
329
+ <td align="left" style="width: 25%;">
330
+
331
+ ```python
332
+ bool
333
+ ```
334
+
335
+ </td>
336
+ <td align="left"><code>False</code></td>
337
+ <td align="left">If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.</td>
338
+ </tr>
339
+
340
+ <tr>
341
+ <td align="left"><code>show_share_button</code></td>
342
+ <td align="left" style="width: 25%;">
343
+
344
+ ```python
345
+ bool | None
346
+ ```
347
+
348
+ </td>
349
+ <td align="left"><code>None</code></td>
350
+ <td align="left">If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.</td>
351
+ </tr>
352
+
353
+ <tr>
354
+ <td align="left"><code>show_copy_button</code></td>
355
+ <td align="left" style="width: 25%;">
356
+
357
+ ```python
358
+ bool
359
+ ```
360
+
361
+ </td>
362
+ <td align="left"><code>False</code></td>
363
+ <td align="left">If True, will show a copy button for each chatbot message.</td>
364
+ </tr>
365
+
366
+ <tr>
367
+ <td align="left"><code>avatar_images</code></td>
368
+ <td align="left" style="width: 25%;">
369
+
370
+ ```python
371
+ tuple[str | Path | None, str | Path | None] | None
372
+ ```
373
+
374
+ </td>
375
+ <td align="left"><code>None</code></td>
376
+ <td align="left">Tuple of two avatar image paths or URLs for user and bot (in that order). Pass None for either the user or bot image to skip. Must be within the working directory of the Gradio app or an external URL.</td>
377
+ </tr>
378
+
379
+ <tr>
380
+ <td align="left"><code>sanitize_html</code></td>
381
+ <td align="left" style="width: 25%;">
382
+
383
+ ```python
384
+ bool
385
+ ```
386
+
387
+ </td>
388
+ <td align="left"><code>True</code></td>
389
+ <td align="left">If False, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities.</td>
390
+ </tr>
391
+
392
+ <tr>
393
+ <td align="left"><code>render_markdown</code></td>
394
+ <td align="left" style="width: 25%;">
395
+
396
+ ```python
397
+ bool
398
+ ```
399
+
400
+ </td>
401
+ <td align="left"><code>True</code></td>
402
+ <td align="left">If False, will disable Markdown rendering for chatbot messages.</td>
403
+ </tr>
404
+
405
+ <tr>
406
+ <td align="left"><code>bubble_full_width</code></td>
407
+ <td align="left" style="width: 25%;">
408
+
409
+ ```python
410
+ bool
411
+ ```
412
+
413
+ </td>
414
+ <td align="left"><code>True</code></td>
415
+ <td align="left">If False, the chat bubble will fit to the content of the message. If True (default), the chat bubble will be the full width of the component.</td>
416
+ </tr>
417
+
418
+ <tr>
419
+ <td align="left"><code>line_breaks</code></td>
420
+ <td align="left" style="width: 25%;">
421
+
422
+ ```python
423
+ bool
424
+ ```
425
+
426
+ </td>
427
+ <td align="left"><code>True</code></td>
428
+ <td align="left">If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies if `render_markdown` is True.</td>
429
+ </tr>
430
+
431
+ <tr>
432
+ <td align="left"><code>likeable</code></td>
433
+ <td align="left" style="width: 25%;">
434
+
435
+ ```python
436
+ bool
437
+ ```
438
+
439
+ </td>
440
+ <td align="left"><code>False</code></td>
441
+ <td align="left">Whether the chat messages display a like or dislike button. Set automatically by the .like method but has to be present in the signature for it to show up in the config.</td>
442
+ </tr>
443
+
444
+ <tr>
445
+ <td align="left"><code>layout</code></td>
446
+ <td align="left" style="width: 25%;">
447
+
448
+ ```python
449
+ Literal["panel", "bubble"] | None
450
+ ```
451
+
452
+ </td>
453
+ <td align="left"><code>None</code></td>
454
+ <td align="left">If "panel", will display the chatbot in an LLM-style layout. If "bubble", will display the chatbot with message bubbles, with the user and bot messages on alternating sides. Will default to "bubble".</td>
455
+ </tr>
456
+
457
+ <tr>
458
+ <td align="left"><code>placeholder</code></td>
459
+ <td align="left" style="width: 25%;">
460
+
461
+ ```python
462
+ str | None
463
+ ```
464
+
465
+ </td>
466
+ <td align="left"><code>None</code></td>
467
+ <td align="left">a placeholder message to display in the chatbot when it is empty. Centered vertically and horizontally in the AgentChatbot. Supports Markdown and HTML. If None, no placeholder is displayed.</td>
468
+ </tr>
469
+ </tbody></table>
470
+
471
+
472
+ ### Events
473
+
474
+ | name | description |
475
+ |:-----|:------------|
476
+ | `change` | Triggered when the value of the AgentChatbot changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
src/backend/gradio_agentchatbot/__init__.py ADDED
@@ -0,0 +1,18 @@
1
+ from .agentchatbot import AgentChatbot, ChatbotData
2
+ from .utils import (
3
+ stream_from_transformers_agent,
4
+ ChatMessage,
5
+ ThoughtMetadata,
6
+ ChatFileMessage,
7
+ Message,
8
+ )
9
+
10
+ __all__ = [
11
+ "AgentChatbot",
12
+ "ChatbotData",
13
+ "stream_from_transformers_agent",
14
+ "ChatMessage",
15
+ "ThoughtMetadata",
16
+ "ChatFileMessage",
17
+ "Message",
18
+ ]
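For orientation, these re-exports mean the public names can be imported directly from the package root; a brief sketch:

```python
from gradio_agentchatbot import (
    AgentChatbot,
    ChatFileMessage,
    ChatMessage,
    ThoughtMetadata,
    stream_from_transformers_agent,
)

# AgentChatbot is the component itself; the message classes describe its value,
# and stream_from_transformers_agent yields messages from a transformers agent run.
```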
src/backend/gradio_agentchatbot/agentchatbot.py ADDED
@@ -0,0 +1,196 @@
1
+ """gr.Chatbot() component."""
2
+
3
+ from __future__ import annotations
4
+
5
+ from pathlib import Path
6
+ from typing import Any, Callable, Literal
7
+
8
+ from gradio_client import utils as client_utils
9
+
10
+ from gradio import utils
11
+ from gradio.components.base import Component
12
+ from gradio.events import Events
13
+ from gradio.processing_utils import move_resource_to_block_cache
14
+ from .utils import ChatMessage, ChatbotData, ChatFileMessage, FileData
15
+
16
+
17
+ class AgentChatbot(Component):
18
+ """
19
+ Creates a chatbot that displays user-submitted messages and responses. Supports a subset of Markdown including bold, italics, code, tables.
20
+ Also supports audio/video/image files, which are displayed in the AgentChatbot, and other kinds of files which are displayed as links. This
21
+ component is usually used as an output component.
22
+
23
+ Demos: chatbot_simple, chatbot_multimodal
24
+ Guides: creating-a-chatbot
25
+ """
26
+
27
+ EVENTS = [Events.change]
28
+ data_model = ChatbotData
29
+
30
+ def __init__(
31
+ self,
32
+ value: list[dict[str, Any]] | Callable | None = None,
33
+ *,
34
+ label: str | None = None,
35
+ every: float | None = None,
36
+ show_label: bool | None = None,
37
+ container: bool = True,
38
+ scale: int | None = None,
39
+ min_width: int = 160,
40
+ visible: bool = True,
41
+ elem_id: str | None = None,
42
+ elem_classes: list[str] | str | None = None,
43
+ render: bool = True,
44
+ key: int | str | None = None,
45
+ height: int | str | None = None,
46
+ latex_delimiters: list[dict[str, str | bool]] | None = None,
47
+ rtl: bool = False,
48
+ show_share_button: bool | None = None,
49
+ show_copy_button: bool = False,
50
+ avatar_images: tuple[str | Path | None, str | Path | None] | None = None,
51
+ sanitize_html: bool = True,
52
+ render_markdown: bool = True,
53
+ bubble_full_width: bool = True,
54
+ line_breaks: bool = True,
55
+ likeable: bool = False,
56
+ layout: Literal["panel", "bubble"] | None = None,
57
+ placeholder: str | None = None,
58
+ ):
59
+ """
60
+ Parameters:
61
+ value: Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.
62
+ label: The label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
63
+ every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.
64
+ show_label: if True, will display label.
65
+ container: If True, will place the component in a container - providing some extra padding around the border.
66
+ scale: relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.
67
+ min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
68
+ visible: If False, component will be hidden.
69
+ elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
70
+ elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
71
+ render: If False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
72
+ key: if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.
73
+ height: The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed.
74
+ latex_delimiters: A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).
75
+ rtl: If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.
76
+ show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.
77
+ show_copy_button: If True, will show a copy button for each chatbot message.
78
+ avatar_images: Tuple of two avatar image paths or URLs for user and bot (in that order). Pass None for either the user or bot image to skip. Must be within the working directory of the Gradio app or an external URL.
79
+ sanitize_html: If False, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities.
80
+ render_markdown: If False, will disable Markdown rendering for chatbot messages.
81
+ bubble_full_width: If False, the chat bubble will fit to the content of the message. If True (default), the chat bubble will be the full width of the component.
82
+ line_breaks: If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies if `render_markdown` is True.
83
+ likeable: Whether the chat messages display a like or dislike button. Set automatically by the .like method but has to be present in the signature for it to show up in the config.
84
+ layout: If "panel", will display the chatbot in a llm style layout. If "bubble", will display the chatbot with message bubbles, with the user and bot messages on alterating sides. Will default to "bubble".
85
+ placeholder: a placeholder message to display in the chatbot when it is empty. Centered vertically and horizontally in the AgentChatbot. Supports Markdown and HTML. If None, no placeholder is displayed.
86
+ """
87
+ self.likeable = likeable
88
+ self.height = height
89
+ self.rtl = rtl
90
+ if latex_delimiters is None:
91
+ latex_delimiters = [{"left": "$$", "right": "$$", "display": True}]
92
+ self.latex_delimiters = latex_delimiters
93
+ self.show_share_button = (
94
+ (utils.get_space() is not None)
95
+ if show_share_button is None
96
+ else show_share_button
97
+ )
98
+ self.render_markdown = render_markdown
99
+ self.show_copy_button = show_copy_button
100
+ self.sanitize_html = sanitize_html
101
+ self.bubble_full_width = bubble_full_width
102
+ self.line_breaks = line_breaks
103
+ self.layout = layout
104
+ super().__init__(
105
+ label=label,
106
+ every=every,
107
+ show_label=show_label,
108
+ container=container,
109
+ scale=scale,
110
+ min_width=min_width,
111
+ visible=visible,
112
+ elem_id=elem_id,
113
+ elem_classes=elem_classes,
114
+ render=render,
115
+ key=key,
116
+ value=value,
117
+ )
118
+ self.avatar_images: list[dict | None] = [None, None]
119
+ if avatar_images is None:
120
+ pass
121
+ else:
122
+ self.avatar_images = [
123
+ self.serve_static_file(avatar_images[0]),
124
+ self.serve_static_file(avatar_images[1]),
125
+ ]
126
+ self.placeholder = placeholder
127
+
128
+ def preprocess(
129
+ self,
130
+ payload: ChatbotData | None,
131
+ ) -> list[ChatMessage | ChatFileMessage] | None:
132
+ """
133
+ Parameters:
134
+ payload: data as a ChatbotData object
135
+ Returns:
136
+ Passes the messages in the chatbot as a `list[ChatMessage | ChatFileMessage]`. Each message is a pydantic model with a `role`, a `thought` flag and `thought_metadata`, and either string `content` (`ChatMessage`) or a `file` with optional `alt_text` (`ChatFileMessage`).
137
+ """
138
+ return payload.root
139
+
140
+ def _postprocess_chat_messages(
141
+ self, chat_message: ChatMessage | ChatFileMessage | None
142
+ ) -> list[ChatMessage | ChatFileMessage] | None:
143
+ if chat_message is None:
144
+ return None
145
+
146
+ if isinstance(chat_message, ChatFileMessage):
147
+ chat_message.file.path = move_resource_to_block_cache(
148
+ chat_message.file.path, block=self
149
+ )
150
+ return [chat_message]
151
+
152
+ # extract file path from message
153
+ new_messages = []
154
+ for word in chat_message.content.split(" "):
155
+ if (filepath := Path(word)).exists() and filepath.is_file():
156
+ filepath = move_resource_to_block_cache(filepath, block=self)
157
+ mime_type = client_utils.get_mimetype(filepath)
158
+ new_messages.append(
159
+ ChatFileMessage(
160
+ role=chat_message.role,
161
+ thought=chat_message.thought,
162
+ thought_metadata=chat_message.thought_metadata,
163
+ file=FileData(path=filepath, mime_type=mime_type),
164
+ )
165
+ )
166
+
167
+ return [chat_message, *new_messages]
168
+
169
+ def postprocess(
170
+ self,
171
+ value: list[ChatMessage | ChatFileMessage] | None,
172
+ ) -> ChatbotData:
173
+ """
174
+ Parameters:
175
+ value: expects a `list[ChatMessage | ChatFileMessage]`. String `content` is rendered as Markdown; if a `ChatFileMessage`'s file is image/video/audio, it is displayed in the AgentChatbot, otherwise it is displayed as a link.
176
+ Returns:
177
+ an object of type ChatbotData
178
+ """
179
+ if value is None:
180
+ return ChatbotData(root=[])
181
+ processed_messages = [
182
+ msg for message in value for msg in self._postprocess_chat_messages(message)
183
+ ]
184
+ return ChatbotData(root=processed_messages)
185
+
186
+ def example_payload(self) -> Any:
187
+ return [
188
+ ChatMessage(role="user", content="Hello!").model_dump(),
189
+ ChatMessage(role="assistant", content="How can I help you?").model_dump(),
190
+ ]
191
+
192
+ def example_value(self) -> Any:
193
+ return [
194
+ ChatMessage(role="user", content="Hello!"),
195
+ ChatMessage(role="assistant", content="How can I help you?"),
196
+ ]
src/backend/gradio_agentchatbot/templates/component/assets/worker-lPYB70QI.js ADDED
@@ -0,0 +1 @@
1
+ (function(){"use strict";const R="https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd/ffmpeg-core.js";var E;(function(t){t.LOAD="LOAD",t.EXEC="EXEC",t.WRITE_FILE="WRITE_FILE",t.READ_FILE="READ_FILE",t.DELETE_FILE="DELETE_FILE",t.RENAME="RENAME",t.CREATE_DIR="CREATE_DIR",t.LIST_DIR="LIST_DIR",t.DELETE_DIR="DELETE_DIR",t.ERROR="ERROR",t.DOWNLOAD="DOWNLOAD",t.PROGRESS="PROGRESS",t.LOG="LOG",t.MOUNT="MOUNT",t.UNMOUNT="UNMOUNT"})(E||(E={}));const a=new Error("unknown message type"),f=new Error("ffmpeg is not loaded, call `await ffmpeg.load()` first"),u=new Error("failed to import ffmpeg-core.js");let r;const O=async({coreURL:t,wasmURL:n,workerURL:e})=>{const o=!r;try{t||(t=R),importScripts(t)}catch{if(t||(t=R.replace("/umd/","/esm/")),self.createFFmpegCore=(await import(t)).default,!self.createFFmpegCore)throw u}const s=t,c=n||t.replace(/.js$/g,".wasm"),b=e||t.replace(/.js$/g,".worker.js");return r=await self.createFFmpegCore({mainScriptUrlOrBlob:`${s}#${btoa(JSON.stringify({wasmURL:c,workerURL:b}))}`}),r.setLogger(i=>self.postMessage({type:E.LOG,data:i})),r.setProgress(i=>self.postMessage({type:E.PROGRESS,data:i})),o},l=({args:t,timeout:n=-1})=>{r.setTimeout(n),r.exec(...t);const e=r.ret;return r.reset(),e},m=({path:t,data:n})=>(r.FS.writeFile(t,n),!0),D=({path:t,encoding:n})=>r.FS.readFile(t,{encoding:n}),S=({path:t})=>(r.FS.unlink(t),!0),I=({oldPath:t,newPath:n})=>(r.FS.rename(t,n),!0),L=({path:t})=>(r.FS.mkdir(t),!0),N=({path:t})=>{const n=r.FS.readdir(t),e=[];for(const o of n){const s=r.FS.stat(`${t}/${o}`),c=r.FS.isDir(s.mode);e.push({name:o,isDir:c})}return e},A=({path:t})=>(r.FS.rmdir(t),!0),w=({fsType:t,options:n,mountPoint:e})=>{const o=t,s=r.FS.filesystems[o];return s?(r.FS.mount(s,n,e),!0):!1},k=({mountPoint:t})=>(r.FS.unmount(t),!0);self.onmessage=async({data:{id:t,type:n,data:e}})=>{const o=[];let s;try{if(n!==E.LOAD&&!r)throw f;switch(n){case E.LOAD:s=await O(e);break;case E.EXEC:s=l(e);break;case E.WRITE_FILE:s=m(e);break;case E.READ_FILE:s=D(e);break;case E.DELETE_FILE:s=S(e);break;case E.RENAME:s=I(e);break;case E.CREATE_DIR:s=L(e);break;case E.LIST_DIR:s=N(e);break;case E.DELETE_DIR:s=A(e);break;case E.MOUNT:s=w(e);break;case E.UNMOUNT:s=k(e);break;default:throw a}}catch(c){self.postMessage({id:t,type:E.ERROR,data:c.toString()});return}s instanceof Uint8Array&&o.push(s.buffer),self.postMessage({id:t,type:n,data:s},o)}})();
src/backend/gradio_agentchatbot/templates/component/index.js ADDED
The diff for this file is too large to render. See raw diff
 
src/backend/gradio_agentchatbot/templates/component/style.css ADDED
The diff for this file is too large to render. See raw diff
 
src/backend/gradio_agentchatbot/utils.py ADDED
@@ -0,0 +1,117 @@
1
+ from transformers.agents import Agent, agent_types
2
+ from pydantic import Field
3
+ from gradio.data_classes import GradioModel, FileData, GradioRootModel
4
+ from typing import Literal, List, Generator, Optional, Union
5
+ from threading import Thread
6
+ import time
7
+
8
+
9
+ class ThoughtMetadata(GradioModel):
10
+ tool_name: Optional[str] = None
11
+ error: bool = False
12
+
13
+
14
+ class Message(GradioModel):
15
+ role: Literal["user", "assistant"]
16
+ thought: bool = False
17
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
18
+
19
+
20
+ class ChatMessage(Message):
21
+ content: str
22
+
23
+
24
+ class ChatFileMessage(Message):
25
+ file: FileData
26
+ alt_text: Optional[str] = None
27
+
28
+
29
+ class ChatbotData(GradioRootModel):
30
+ root: List[Union[ChatMessage, ChatFileMessage]]
31
+
32
+
33
+ def pull_messages(new_messages: List[dict]):
34
+ for message in new_messages:
35
+ if not len(message):
36
+ continue
37
+ if message.get("rationale"):
38
+ yield ChatMessage(
39
+ role="assistant", content=message["rationale"], thought=True
40
+ )
41
+ if message.get("tool_call"):
42
+ used_code = message["tool_call"]["tool_name"] == "code interpreter"
43
+ content = message["tool_call"]["tool_arguments"]
44
+ if used_code:
45
+ content = f"```py\n{content}\n```"
46
+ yield ChatMessage(
47
+ role="assistant",
48
+ thought_metadata=ThoughtMetadata(
49
+ tool_name=message["tool_call"]["tool_name"]
50
+ ),
51
+ content=content,
52
+ thought=True,
53
+ )
54
+ if message.get("observation"):
55
+ yield ChatMessage(
56
+ role="assistant", content=message["observation"], thought=True
57
+ )
58
+ if message.get("error"):
59
+ yield ChatMessage(
60
+ role="assistant",
61
+ content=str(message["error"]),
62
+ thought=True,
63
+ thought_metadata=ThoughtMetadata(error=True),
64
+ )
65
+
66
+
67
+ def stream_from_transformers_agent(
68
+ agent: Agent, prompt: str
69
+ ) -> Generator[ChatMessage, None, None]:
70
+ """Runs an agent with the given prompt and streams the messages from the agent as ChatMessages."""
71
+
72
+ class Output:
73
+ output: agent_types.AgentType | str | None = None
74
+
75
+ def run_agent():
76
+ output = agent.run(prompt)
77
+ Output.output = output
78
+
79
+ thread = Thread(target=run_agent)
80
+ num_messages = 0
81
+
82
+ # Start thread and pull logs while it runs
83
+ thread.start()
84
+ while thread.is_alive():
85
+ if len(agent.logs) > num_messages:
86
+ new_messages = agent.logs[num_messages:]
87
+ for msg in pull_messages(new_messages):
88
+ yield msg
89
+ num_messages += 1
90
+ time.sleep(0.1)
91
+
92
+ thread.join(0.1)
93
+
94
+ if len(agent.logs) > num_messages:
95
+ new_messages = agent.logs[num_messages:]
96
+ yield from pull_messages(new_messages)
97
+
98
+ if isinstance(Output.output, agent_types.AgentText):
99
+ yield ChatMessage(
100
+ role="assistant", content=Output.output.to_string(), thought=True
101
+ )
102
+ elif isinstance(Output.output, agent_types.AgentImage):
103
+ yield ChatFileMessage(
104
+ role="assistant",
105
+ file=FileData(path=Output.output.to_string(), mime_type="image/png"),
106
+ content="",
107
+ thought=True,
108
+ )
109
+ elif isinstance(Output.output, agent_types.AgentAudio):
110
+ yield ChatFileMessage(
111
+ role="assistant",
112
+ file=FileData(path=Output.output.to_string(), mime_type="audio/wav"),
113
+ content="",
114
+ thought=True,
115
+ )
116
+ else:
117
+ yield ChatMessage(role="assistant", content=Output.output, thought=True)  # yield (not return) so the final message is actually emitted by the generator
src/demo/.env ADDED
@@ -0,0 +1 @@
1
+ SERPAPI_API_KEY=3d009ed0a9764853b3e75ee2c70ceb3aabd92b810456307c5af711559ec4aff7
src/demo/__init__.py ADDED
File without changes
src/demo/app.py ADDED
@@ -0,0 +1,54 @@
1
+ import gradio as gr
2
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
3
+ from gradio_agentchatbot import (
4
+ AgentChatbot,
5
+ stream_from_transformers_agent,
6
+ ChatMessage,
7
+ )
8
+ from dotenv import load_dotenv
9
+ from langchain.agents import load_tools
10
+ from pathlib import Path
11
+
12
+ current_dir = Path(__file__).parent
13
+
14
+ load_dotenv()
15
+
16
+ # Import tool from Hub
17
+ image_generation_tool = load_tool("m-ric/text-to-image")
18
+
19
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
20
+
21
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
22
+ # Initialize the agent with both tools
23
+ agent = ReactCodeAgent(
24
+ tools=[image_generation_tool, search_tool], llm_engine=llm_engine
25
+ )
26
+
27
+
28
+ def interact_with_agent(prompt, messages):
29
+ messages.append(ChatMessage(role="user", content=prompt))
30
+ yield messages
31
+ for msg in stream_from_transformers_agent(agent, prompt):
32
+ messages.append(msg)
33
+ yield messages
34
+ yield messages
35
+
36
+
37
+ with gr.Blocks() as demo:
38
+ with gr.Tabs():
39
+ with gr.Tab("Demo"):
40
+ chatbot = AgentChatbot(
41
+ label="Agent",
42
+ avatar_images=[
43
+ None,
44
+ "https://em-content.zobj.net/source/twitter/53/robot-face_1f916.png",
45
+ ],
46
+ )
47
+ text_input = gr.Textbox(lines=1, label="Chat Message")
48
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
49
+ with gr.Tab("Docs"):
50
+ gr.Markdown(Path(current_dir / "docs.md").read_text())
51
+
52
+
53
+ if __name__ == "__main__":
54
+ demo.launch()
src/demo/docs.md ADDED
@@ -0,0 +1,465 @@
1
+ # gradio_agentchatbot
2
+
3
+ ## 🤖 Chat UI for displaying the thoughts of LLM Agents 💭
4
+ <img alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.1%20-%20orange">
5
+
6
+ The `gradio_agentchatbot` package introduces the `AgentChatbot` component which can display the thought process and tool usage of an LLM agent. Its message format is compatible with the OpenAI conversation message format.
7
+
8
+ For example usage with transformers agents, please see the [Transformers Usage](#transformers-usage) section.
9
+
10
+ For general usage, see the [General Usage](#general-usage) section.
11
+
12
+ For the API reference, see the [Initialization](#initialization) section.
13
+
14
+ ## Installation
15
+
16
+ ```bash
17
+ pip install gradio_agentchatbot
18
+ ```
19
+
20
+ or add `gradio_agentchatbot` to your `requirements.txt`.
21
+
22
+ ## Transformers Usage
23
+
24
+ For [transformers agents](https://huggingface.co/learn/cookbook/agents), you can use the `stream_from_transformers_agent` function and yield all subsequent messages.
25
+
26
+ ```python
27
+
28
+ import gradio as gr
29
+ from transformers import load_tool, ReactCodeAgent, HfEngine, Tool
30
+ from gradio_agentchatbot import AgentChatbot, stream_from_transformers_agent, ChatMessage
31
+ from dotenv import load_dotenv
32
+ from langchain.agents import load_tools
33
+
34
+ # to load SerpAPI key
35
+ load_dotenv()
36
+
37
+ # Import tool from Hub
38
+ image_generation_tool = load_tool("m-ric/text-to-image")
39
+
40
+ search_tool = Tool.from_langchain(load_tools(["serpapi"])[0])
41
+
42
+ llm_engine = HfEngine("meta-llama/Meta-Llama-3-70B-Instruct")
43
+ # Initialize the agent with both tools
44
+ agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)
45
+
46
+
47
+ def interact_with_agent(prompt, messages):
48
+ messages.append(ChatMessage(role="user", content=prompt))
49
+ yield messages
50
+ for msg in stream_from_transformers_agent(agent, prompt):
51
+ messages.append(msg)
52
+ yield messages
53
+ yield messages
54
+
55
+
56
+ with gr.Blocks() as demo:
57
+ chatbot = AgentChatbot(label="Agent")
58
+ text_input = gr.Textbox(lines=1, label="Chat Message")
59
+ text_input.submit(interact_with_agent, [text_input, chatbot], [chatbot])
60
+
61
+
62
+ if __name__ == "__main__":
63
+ demo.launch()
64
+ ```
65
+
66
+ ![AgentChatbot with transformers](https://gradio-builds.s3.amazonaws.com/demo-files/tf_agent_ui.gif)
67
+
68
+ ## General Usage
69
+
70
+ The `AgentChatbot` is similar to the core `Gradio` `Chatbot` but the key difference is in the expected data format of the `value` property.
71
+
72
+ Instead of a list of tuples (each element of which can be either a string or a tuple), the value is a list of message instances. Each message can be either a `ChatMessage` or a `ChatFileMessage`. These are pydantic classes that are compatible with the OpenAI [message format](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages). This is how they are defined:
73
+
74
+ ```python
75
+ class ThoughtMetadata(GradioModel):
76
+ tool_name: Optional[str] = None
77
+ error: bool = False
78
+
79
+
80
+ class ChatMessage(GradioModel):
81
+ role: Literal["user", "assistant"]
82
+ content: str
83
+ thought: bool = False
84
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
85
+
86
+
87
+ class ChatFileMessage(GradioModel):
88
+ role: Literal["user", "assistant"]
89
+ file: FileData
90
+ thought: bool = False
91
+ thought_metadata: ThoughtMetadata = Field(default_factory=ThoughtMetadata)
92
+ alt_text: Optional[str] = None
93
+ ```
94
+
95
+ In order to properly display data in `AgentChatbot`, simply return a list of `ChatMessage` or `ChatFileMessage` instances from your python function. For example:
96
+
97
+ ```python
98
+ def chat_echo(prompt: str, messages: List[ChatMessage | ChatFileMessage]) -> List[ChatMessage | ChatFileMessage]:
99
+ messages.append(ChatMessage(role="user", content=prompt))
100
+ messages.append(ChatMessage(role="assistant", content=prompt))
101
+ return messages
102
+ ```
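+ 
+ As a rough sketch (reusing the imports from the Transformers example above; the component names are illustrative), `chat_echo` can be wired up like this:
+ 
+ ```python
+ with gr.Blocks() as demo:
+     chatbot = AgentChatbot(label="Echo Agent")
+     prompt_box = gr.Textbox(label="Chat Message")
+     # chat_echo receives the prompt and the current messages, and returns the updated list
+     prompt_box.submit(chat_echo, [prompt_box, chatbot], [chatbot])
+ 
+ demo.launch()
+ ```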
103
+
104
+ ### Why a different data format?
105
+
106
+ The OpenAI data format is the standard format for representing LLM conversations and most API providers have adopted it.
107
+ By using a compliant data format, it should be easier to use `AgentChatbot` with multiple API providers and libraries.
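+ 
+ For example, because each message is a pydantic model, converting the chatbot value into plain `{"role": ..., "content": ...}` dicts for an OpenAI-style client takes only a short helper (a sketch; the helper name is not part of this package):
+ 
+ ```python
+ def to_openai_format(messages):
+     # keep only text messages and drop the thought/metadata fields
+     return [
+         {"role": m.role, "content": m.content}
+         for m in messages
+         if hasattr(m, "content")
+     ]
+ ```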
108
+
109
+
110
+ ### Why are pydantic data classes required?
111
+
112
+ It should improve developer experience since your editor will auto-complete the required fields and use smart autocomplete for the `role` field. You will also get an error message if your data does not conform to the data format.
113
+
114
+ I will probably relax this in the future so that a plain python dict can be passed instead of one of the chat classes.
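+ 
+ Until then, a small sketch of what that validation buys you (the invalid role is deliberate):
+ 
+ ```python
+ from gradio_agentchatbot import ChatMessage
+ 
+ ChatMessage(role="assistant", content="All good")   # validates fine
+ ChatMessage(role="system", content="Not allowed")   # raises a pydantic ValidationError: role must be "user" or "assistant"
+ ```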
115
+
116
+
117
+
118
+ ## `API Reference`
119
+
120
+ ### Initialization
121
+
122
+ <table>
123
+ <thead>
124
+ <tr>
125
+ <th align="left">name</th>
126
+ <th align="left" style="width: 25%;">type</th>
127
+ <th align="left">default</th>
128
+ <th align="left">description</th>
129
+ </tr>
130
+ </thead>
131
+ <tbody>
132
+ <tr>
133
+ <td align="left"><code>value</code></td>
134
+ <td align="left" style="width: 25%;">
135
+
136
+ ```python
137
+ list[ChatMessage | ChatFileMessage]
138
+ | Callable
139
+ | None
140
+ ```
141
+
142
+ </td>
143
+ <td align="left"><code>None</code></td>
144
+ <td align="left">Default value to show in chatbot. If callable, the function will be called whenever the app loads to set the initial value of the component.</td>
145
+ </tr>
146
+
147
+ <tr>
148
+ <td align="left"><code>label</code></td>
149
+ <td align="left" style="width: 25%;">
150
+
151
+ ```python
152
+ str | None
153
+ ```
154
+
155
+ </td>
156
+ <td align="left"><code>None</code></td>
157
+ <td align="left">The label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.</td>
158
+ </tr>
159
+
160
+ <tr>
161
+ <td align="left"><code>every</code></td>
162
+ <td align="left" style="width: 25%;">
163
+
164
+ ```python
165
+ float | None
166
+ ```
167
+
168
+ </td>
169
+ <td align="left"><code>None</code></td>
170
+ <td align="left">If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute.</td>
171
+ </tr>
172
+
173
+ <tr>
174
+ <td align="left"><code>show_label</code></td>
175
+ <td align="left" style="width: 25%;">
176
+
177
+ ```python
178
+ bool | None
179
+ ```
180
+
181
+ </td>
182
+ <td align="left"><code>None</code></td>
183
+ <td align="left">if True, will display label.</td>
184
+ </tr>
185
+
186
+ <tr>
187
+ <td align="left"><code>container</code></td>
188
+ <td align="left" style="width: 25%;">
189
+
190
+ ```python
191
+ bool
192
+ ```
193
+
194
+ </td>
195
+ <td align="left"><code>True</code></td>
196
+ <td align="left">If True, will place the component in a container - providing some extra padding around the border.</td>
197
+ </tr>
198
+
199
+ <tr>
200
+ <td align="left"><code>scale</code></td>
201
+ <td align="left" style="width: 25%;">
202
+
203
+ ```python
204
+ int | None
205
+ ```
206
+
207
+ </td>
208
+ <td align="left"><code>None</code></td>
209
+ <td align="left">relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.</td>
210
+ </tr>
211
+
212
+ <tr>
213
+ <td align="left"><code>min_width</code></td>
214
+ <td align="left" style="width: 25%;">
215
+
216
+ ```python
217
+ int
218
+ ```
219
+
220
+ </td>
221
+ <td align="left"><code>160</code></td>
222
+ <td align="left">minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.</td>
223
+ </tr>
224
+
225
+ <tr>
226
+ <td align="left"><code>visible</code></td>
227
+ <td align="left" style="width: 25%;">
228
+
229
+ ```python
230
+ bool
231
+ ```
232
+
233
+ </td>
234
+ <td align="left"><code>True</code></td>
235
+ <td align="left">If False, component will be hidden.</td>
236
+ </tr>
237
+
238
+ <tr>
239
+ <td align="left"><code>elem_id</code></td>
240
+ <td align="left" style="width: 25%;">
241
+
242
+ ```python
243
+ str | None
244
+ ```
245
+
246
+ </td>
247
+ <td align="left"><code>None</code></td>
248
+ <td align="left">An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
249
+ </tr>
250
+
251
+ <tr>
252
+ <td align="left"><code>elem_classes</code></td>
253
+ <td align="left" style="width: 25%;">
254
+
255
+ ```python
256
+ list[str] | str | None
257
+ ```
258
+
259
+ </td>
260
+ <td align="left"><code>None</code></td>
261
+ <td align="left">An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.</td>
262
+ </tr>
263
+
264
+ <tr>
265
+ <td align="left"><code>render</code></td>
266
+ <td align="left" style="width: 25%;">
267
+
268
+ ```python
269
+ bool
270
+ ```
271
+
272
+ </td>
273
+ <td align="left"><code>True</code></td>
274
+ <td align="left">If False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.</td>
275
+ </tr>
276
+
277
+ <tr>
278
+ <td align="left"><code>key</code></td>
279
+ <td align="left" style="width: 25%;">
280
+
281
+ ```python
282
+ int | str | None
283
+ ```
284
+
285
+ </td>
286
+ <td align="left"><code>None</code></td>
287
+ <td align="left">if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.</td>
288
+ </tr>
289
+
290
+ <tr>
291
+ <td align="left"><code>height</code></td>
292
+ <td align="left" style="width: 25%;">
293
+
294
+ ```python
295
+ int | str | None
296
+ ```
297
+
298
+ </td>
299
+ <td align="left"><code>None</code></td>
300
+ <td align="left">The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed.</td>
301
+ </tr>
302
+
303
+ <tr>
304
+ <td align="left"><code>latex_delimiters</code></td>
305
+ <td align="left" style="width: 25%;">
306
+
307
+ ```python
308
+ list[dict[str, str | bool]] | None
309
+ ```
310
+
311
+ </td>
312
+ <td align="left"><code>None</code></td>
313
+ <td align="left">A list of dicts of the form {"left": open delimiter (str), "right": close delimiter (str), "display": whether to display in newline (bool)} that will be used to render LaTeX expressions. If not provided, `latex_delimiters` is set to `[{ "left": "$$", "right": "$$", "display": True }]`, so only expressions enclosed in $$ delimiters will be rendered as LaTeX, and in a new line. Pass in an empty list to disable LaTeX rendering. For more information, see the [KaTeX documentation](https://katex.org/docs/autorender.html).</td>
314
+ </tr>
315
+
316
+ <tr>
317
+ <td align="left"><code>rtl</code></td>
318
+ <td align="left" style="width: 25%;">
319
+
320
+ ```python
321
+ bool
322
+ ```
323
+
324
+ </td>
325
+ <td align="left"><code>False</code></td>
326
+ <td align="left">If True, sets the direction of the rendered text to right-to-left. Default is False, which renders text left-to-right.</td>
327
+ </tr>
328
+
329
+ <tr>
330
+ <td align="left"><code>show_share_button</code></td>
331
+ <td align="left" style="width: 25%;">
332
+
333
+ ```python
334
+ bool | None
335
+ ```
336
+
337
+ </td>
338
+ <td align="left"><code>None</code></td>
339
+ <td align="left">If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise.</td>
340
+ </tr>
341
+
342
+ <tr>
343
+ <td align="left"><code>show_copy_button</code></td>
344
+ <td align="left" style="width: 25%;">
345
+
346
+ ```python
347
+ bool
348
+ ```
349
+
350
+ </td>
351
+ <td align="left"><code>False</code></td>
352
+ <td align="left">If True, will show a copy button for each chatbot message.</td>
353
+ </tr>
354
+
355
+ <tr>
356
+ <td align="left"><code>avatar_images</code></td>
357
+ <td align="left" style="width: 25%;">
358
+
359
+ ```python
360
+ tuple[str | Path | None, str | Path | None] | None
361
+ ```
362
+
363
+ </td>
364
+ <td align="left"><code>None</code></td>
365
+ <td align="left">Tuple of two avatar image paths or URLs for user and bot (in that order). Pass None for either the user or bot image to skip. Must be within the working directory of the Gradio app or an external URL.</td>
366
+ </tr>
367
+
368
+ <tr>
369
+ <td align="left"><code>sanitize_html</code></td>
370
+ <td align="left" style="width: 25%;">
371
+
372
+ ```python
373
+ bool
374
+ ```
375
+
376
+ </td>
377
+ <td align="left"><code>True</code></td>
378
+ <td align="left">If False, will disable HTML sanitization for chatbot messages. This is not recommended, as it can lead to security vulnerabilities.</td>
379
+ </tr>
380
+
381
+ <tr>
382
+ <td align="left"><code>render_markdown</code></td>
383
+ <td align="left" style="width: 25%;">
384
+
385
+ ```python
386
+ bool
387
+ ```
388
+
389
+ </td>
390
+ <td align="left"><code>True</code></td>
391
+ <td align="left">If False, will disable Markdown rendering for chatbot messages.</td>
392
+ </tr>
393
+
394
+ <tr>
395
+ <td align="left"><code>bubble_full_width</code></td>
396
+ <td align="left" style="width: 25%;">
397
+
398
+ ```python
399
+ bool
400
+ ```
401
+
402
+ </td>
403
+ <td align="left"><code>True</code></td>
404
+ <td align="left">If False, the chat bubble will fit to the content of the message. If True (default), the chat bubble will be the full width of the component.</td>
405
+ </tr>
406
+
407
+ <tr>
408
+ <td align="left"><code>line_breaks</code></td>
409
+ <td align="left" style="width: 25%;">
410
+
411
+ ```python
412
+ bool
413
+ ```
414
+
415
+ </td>
416
+ <td align="left"><code>True</code></td>
417
+ <td align="left">If True (default), will enable Github-flavored Markdown line breaks in chatbot messages. If False, single new lines will be ignored. Only applies if `render_markdown` is True.</td>
418
+ </tr>
419
+
420
+ <tr>
421
+ <td align="left"><code>likeable</code></td>
422
+ <td align="left" style="width: 25%;">
423
+
424
+ ```python
425
+ bool
426
+ ```
427
+
428
+ </td>
429
+ <td align="left"><code>False</code></td>
430
+ <td align="left">Whether the chat messages display a like or dislike button. Set automatically by the .like method but has to be present in the signature for it to show up in the config.</td>
431
+ </tr>
432
+
433
+ <tr>
434
+ <td align="left"><code>layout</code></td>
435
+ <td align="left" style="width: 25%;">
436
+
437
+ ```python
438
+ Literal["panel", "bubble"] | None
439
+ ```
440
+
441
+ </td>
442
+ <td align="left"><code>None</code></td>
443
+ <td align="left">If "panel", will display the chatbot in a llm style layout. If "bubble", will display the chatbot with message bubbles, with the user and bot messages on alterating sides. Will default to "bubble".</td>
444
+ </tr>
445
+
446
+ <tr>
447
+ <td align="left"><code>placeholder</code></td>
448
+ <td align="left" style="width: 25%;">
449
+
450
+ ```python
451
+ str | None
452
+ ```
453
+
454
+ </td>
455
+ <td align="left"><code>None</code></td>
456
+ <td align="left">a placeholder message to display in the chatbot when it is empty. Centered vertically and horizontally in the TestChatbot. Supports Markdown and HTML. If None, no placeholder is displayed.</td>
457
+ </tr>
458
+ </tbody></table>
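+ 
+ For instance, a short construction sketch using a few of the parameters above (the values are illustrative):
+ 
+ ```python
+ chatbot = AgentChatbot(
+     label="Agent",
+     height=450,
+     placeholder="Ask the agent anything to get started.",
+     show_copy_button=True,
+ )
+ ```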
459
+
460
+
461
+ ### Events
462
+
463
+ | name | description |
464
+ |:-----|:------------|
465
+ | `change` | Triggered when the value of the AgentChatbot changes either because of user input (e.g. a user types in a textbox) OR because of a function update (e.g. an image receives a value from the output of an event trigger). See `.input()` for a listener that is only triggered by user input. |
src/demo/requirements.txt ADDED
@@ -0,0 +1,7 @@
1
+ gradio_agentchatbot
2
+ "git+https://github.com/huggingface/transformers.git#egg=transformers[agents]"
3
+ langchain
4
+ sentence-transformers
5
+ faiss-cpu
6
+ serpapi
7
+ google-search-results
src/frontend/Index.svelte ADDED
@@ -0,0 +1,140 @@
1
+ <script context="module" lang="ts">
2
+ export { default as BaseChatBot } from "./shared/ChatBot.svelte";
3
+ </script>
4
+
5
+ <script lang="ts">
6
+ import type { Gradio, SelectData, ShareData, LikeData } from "@gradio/utils";
7
+ import Prism from "prismjs";
8
+
9
+ import ChatBot from "./shared/ChatBot.svelte";
10
+ import { Block, BlockLabel } from "@gradio/atoms";
11
+ import type { LoadingStatus } from "@gradio/statustracker";
12
+ import { Chat } from "@gradio/icons";
13
+ import type { FileData } from "@gradio/client";
14
+ import type { ChatMessage, ChatFileMessage } from "./types"
15
+ import { StatusTracker } from "@gradio/statustracker";
16
+
17
+ export let elem_id = "";
18
+ export let elem_classes: string[] = [];
19
+ export let visible = true;
20
+ export let value: (ChatMessage | ChatFileMessage)[] = [];
21
+ export let scale: number | null = null;
22
+ export let min_width: number | undefined = undefined;
23
+ export let label: string;
24
+ export let show_label = true;
25
+ export let root: string;
26
+ export let _selectable = false;
27
+ export let likeable = false;
28
+ export let show_share_button = false;
29
+ export let rtl = false;
30
+ export let show_copy_button = false;
31
+ export let sanitize_html = true;
32
+ export let bubble_full_width = true;
33
+ export let layout: "bubble" | "panel" = "bubble";
34
+ export let render_markdown = true;
35
+ export let line_breaks = true;
36
+ export let latex_delimiters: {
37
+ left: string;
38
+ right: string;
39
+ display: boolean;
40
+ }[];
41
+ export let gradio: Gradio<{
42
+ change: typeof value;
43
+ select: SelectData;
44
+ share: ShareData;
45
+ error: string;
46
+ like: LikeData;
47
+ clear_status: LoadingStatus;
48
+ }>;
49
+ export let avatar_images: [FileData | null, FileData | null] = [null, null];
50
+
51
+ let _value: (ChatMessage | ChatFileMessage)[];
52
+
53
+ const redirect_src_url = (src: string): string =>
54
+ src.replace('src="/file', `src="${root}file`);
55
+
56
+ function normalize_messages(
57
+ message: { file: FileData; alt_text: string | null } | null
58
+ ): { file: FileData; alt_text: string | null } | null {
59
+ if (message === null) {
60
+ return message;
61
+ }
62
+ return {
63
+ file: message?.file as FileData,
64
+ alt_text: message?.alt_text
65
+ };
66
+ }
67
+
68
+ $: _value = value;
69
+
70
+ export let loading_status: LoadingStatus | undefined = undefined;
71
+ export let height = 400;
72
+ export let placeholder: string | null = null;
73
+ </script>
74
+
75
+ <Block
76
+ {elem_id}
77
+ {elem_classes}
78
+ {visible}
79
+ padding={false}
80
+ {scale}
81
+ {min_width}
82
+ {height}
83
+ allow_overflow={false}
84
+ >
85
+ {#if loading_status}
86
+ <StatusTracker
87
+ autoscroll={gradio.autoscroll}
88
+ i18n={gradio.i18n}
89
+ {...loading_status}
90
+ show_progress={loading_status.show_progress === "hidden"
91
+ ? "hidden"
92
+ : "minimal"}
93
+ on:clear_status={() => gradio.dispatch("clear_status", loading_status)}
94
+ />
95
+ {/if}
96
+ <div class="wrapper">
97
+ {#if show_label}
98
+ <BlockLabel
99
+ {show_label}
100
+ Icon={Chat}
101
+ float={false}
102
+ label={label || "Chatbot"}
103
+ />
104
+ {/if}
105
+ <ChatBot
106
+ i18n={gradio.i18n}
107
+ selectable={_selectable}
108
+ {likeable}
109
+ {show_share_button}
110
+ value={_value}
111
+ {latex_delimiters}
112
+ {render_markdown}
113
+ pending_message={loading_status?.status === "pending"}
114
+ {rtl}
115
+ {show_copy_button}
116
+ on:change={() => gradio.dispatch("change", value)}
117
+ on:select={(e) => gradio.dispatch("select", e.detail)}
118
+ on:like={(e) => gradio.dispatch("like", e.detail)}
119
+ on:share={(e) => gradio.dispatch("share", e.detail)}
120
+ on:error={(e) => gradio.dispatch("error", e.detail)}
121
+ {avatar_images}
122
+ {sanitize_html}
123
+ {bubble_full_width}
124
+ {line_breaks}
125
+ {layout}
126
+ {placeholder}
127
+ />
128
+ </div>
129
+ </Block>
130
+
131
+ <style>
132
+ .wrapper {
133
+ display: flex;
134
+ position: relative;
135
+ flex-direction: column;
136
+ align-items: start;
137
+ width: 100%;
138
+ height: 100%;
139
+ }
140
+ </style>
src/frontend/gradio.config.js ADDED
@@ -0,0 +1,12 @@
1
+ import prismjs from 'vite-plugin-prismjs';
2
+
3
+ export default {
4
+ plugins: [
5
+ prismjs({
6
+ languages: 'all',
7
+ })
8
+ ],
9
+ svelte: {
10
+ preprocess: [],
11
+ },
12
+ };
src/frontend/main.ts ADDED
@@ -0,0 +1 @@
1
+ import 'prismjs';
src/frontend/package-lock.json ADDED
The diff for this file is too large to render. See raw diff
 
src/frontend/package.json ADDED
@@ -0,0 +1,38 @@
1
+ {
2
+ "name": "gradio_agentchatbot",
3
+ "version": "0.10.3",
4
+ "description": "Gradio UI packages",
5
+ "type": "module",
6
+ "author": "",
7
+ "license": "ISC",
8
+ "private": false,
9
+ "dependencies": {
10
+ "@gradio/atoms": "0.7.3",
11
+ "@gradio/audio": "0.11.3",
12
+ "@gradio/client": "0.19.2",
13
+ "@gradio/icons": "0.4.1",
14
+ "@gradio/image": "0.11.3",
15
+ "@gradio/markdown": "0.7.3",
16
+ "@gradio/statustracker": "0.5.3",
17
+ "@gradio/theme": "0.2.2",
18
+ "@gradio/upload": "0.10.3",
19
+ "@gradio/utils": "0.4.1",
20
+ "@gradio/video": "0.8.3",
21
+ "@types/dompurify": "^3.0.2",
22
+ "@types/katex": "^0.16.0",
23
+ "@types/prismjs": "^1.26.4",
24
+ "dequal": "^2.0.2",
25
+ "prismjs": "^1.29.0",
26
+ "vite": "^5.2.11",
27
+ "vite-plugin-prismjs": "^0.0.11"
28
+ },
29
+ "devDependencies": {
30
+ "@gradio/preview": "0.9.0"
31
+ },
32
+ "main_changeset": true,
33
+ "main": "./Index.svelte",
34
+ "exports": {
35
+ ".": "./Index.svelte",
36
+ "./package.json": "./package.json"
37
+ }
38
+ }
src/frontend/shared/ChatBot.svelte ADDED
@@ -0,0 +1,644 @@
1
+ <script lang="ts">
2
+ import { format_chat_for_sharing } from "./utils";
3
+ import { copy } from "@gradio/utils";
4
+
5
+ import { dequal } from "dequal/lite";
6
+ import { beforeUpdate, afterUpdate, createEventDispatcher } from "svelte";
7
+ import { ShareButton } from "@gradio/atoms";
8
+ import { Audio } from "@gradio/audio/shared";
9
+ import { Image } from "@gradio/image/shared";
10
+ import { Video } from "@gradio/video/shared";
11
+ import type { SelectData, LikeData } from "@gradio/utils";
12
+ import type { ChatMessage, ChatFileMessage, Message } from "../types";
13
+ import { MarkdownCode as Markdown } from "@gradio/markdown";
14
+ import { FileData } from "@gradio/client";
15
+ import Copy from "./Copy.svelte";
16
+ import type { I18nFormatter } from "js/app/src/gradio_helper";
17
+ import LikeDislike from "./LikeDislike.svelte";
18
+ import Pending from "./Pending.svelte";
19
+ import ToolMessage from "./ToolMessage.svelte";
20
+ import ErrorMessage from "./ErrorMessage.svelte";
21
+
22
+ export let value: (ChatMessage | ChatFileMessage)[] = [];
23
+ let old_value: (ChatMessage | ChatFileMessage)[] | null = null;
24
+ export let latex_delimiters: {
25
+ left: string;
26
+ right: string;
27
+ display: boolean;
28
+ }[];
29
+ export let pending_message = false;
30
+ export let selectable = false;
31
+ export let likeable = false;
32
+ export let show_share_button = false;
33
+ export let rtl = false;
34
+ export let show_copy_button = false;
35
+ export let avatar_images: [FileData | null, FileData | null] = [null, null];
36
+ export let sanitize_html = true;
37
+ export let bubble_full_width = true;
38
+ export let render_markdown = true;
39
+ export let line_breaks = true;
40
+ export let i18n: I18nFormatter;
41
+ export let layout: "bubble" | "panel" = "bubble";
42
+ export let placeholder: string | null = null;
43
+
44
+ let div: HTMLDivElement;
45
+ let autoscroll: boolean;
46
+
47
+ $: adjust_text_size = () => {
48
+ let style = getComputedStyle(document.body);
49
+ let body_text_size = style.getPropertyValue("--body-text-size");
50
+ let updated_text_size;
51
+
52
+ switch (body_text_size) {
53
+ case "13px":
54
+ updated_text_size = 14;
55
+ break;
56
+ case "14px":
57
+ updated_text_size = 16;
58
+ break;
59
+ case "16px":
60
+ updated_text_size = 20;
61
+ break;
62
+ default:
63
+ updated_text_size = 14;
64
+ break;
65
+ }
66
+
67
+ document.body.style.setProperty(
68
+ "--chatbot-body-text-size",
69
+ updated_text_size + "px"
70
+ );
71
+ };
72
+
73
+ $: adjust_text_size();
74
+
75
+ const dispatch = createEventDispatcher<{
76
+ change: undefined;
77
+ select: SelectData;
78
+ like: LikeData;
79
+ }>();
80
+
81
+ beforeUpdate(() => {
82
+ autoscroll =
83
+ div && div.offsetHeight + div.scrollTop > div.scrollHeight - 100;
84
+ });
85
+
86
+ const scroll = (): void => {
87
+ if (autoscroll) {
88
+ div.scrollTo(0, div.scrollHeight);
89
+ }
90
+ };
91
+ afterUpdate(() => {
92
+ if (autoscroll) {
93
+ scroll();
94
+ div.querySelectorAll("img").forEach((n) => {
95
+ n.addEventListener("load", () => {
96
+ scroll();
97
+ });
98
+ });
99
+ }
100
+ });
101
+
102
+ $: {
103
+ if (!dequal(value, old_value)) {
104
+ old_value = value;
105
+ dispatch("change");
106
+ }
107
+ }
108
+
109
+ function handle_select(
110
+ i: number,
111
+ message: Message
112
+ ): void {
113
+ dispatch("select", {
114
+ index: i,
115
+ value: (message as ChatMessage).content || (message as ChatFileMessage).file?.url
116
+ });
117
+ }
118
+
119
+ function handle_like(
120
+ i: number,
121
+ message: Message | null,
122
+ selected: string | null
123
+ ): void {
124
+ dispatch("like", {
125
+ index: i,
126
+ value: (message as ChatMessage).content || (message as ChatFileMessage).file?.url,
127
+ liked: selected === "like"
128
+ });
129
+ }
130
+
131
+ function isFileMessage(
132
+ message: ChatMessage | ChatFileMessage
133
+ ): message is ChatFileMessage {
134
+ return "file" in message;
135
+ }
136
+
137
+ function groupMessages(messages: (ChatMessage | ChatFileMessage)[]): (ChatMessage | ChatFileMessage)[][] {
138
+ const groupedMessages: (ChatFileMessage | ChatMessage)[][] = [];
139
+ let currentGroup: (ChatFileMessage | ChatMessage)[] = [];
140
+
141
+ for (const message of messages) {
142
+ if (message.thought) {
143
+ currentGroup.push(message);
144
+ } else {
145
+ if (currentGroup.length > 0) {
146
+ groupedMessages.push(currentGroup);
147
+ currentGroup = [];
148
+ }
149
+ groupedMessages.push([message]);
150
+ }
151
+ }
152
+
153
+ if (currentGroup.length > 0) {
154
+ groupedMessages.push(currentGroup);
155
+ }
156
+
157
+ return groupedMessages;
158
+ }
159
+
160
+ </script>
161
+
162
+ {#if show_share_button && value !== null && value.length > 0}
163
+ <div class="share-button">
164
+ <ShareButton
165
+ {i18n}
166
+ on:error
167
+ on:share
168
+ formatter={format_chat_for_sharing}
169
+ {value}
170
+ />
171
+ </div>
172
+ {/if}
173
+
174
+ <div
175
+ class={layout === "bubble" ? "bubble-wrap" : "panel-wrap"}
176
+ class:placeholder-container={value === null || value.length === 0}
177
+ bind:this={div}
178
+ role="log"
179
+ aria-label="chatbot conversation"
180
+ aria-live="polite"
181
+ >
182
+ <div class="message-wrap" class:bubble-gap={layout === "bubble"} use:copy>
183
+ {#if value !== null && value.length > 0}
184
+ {@const groupedMessages = groupMessages(value)}
185
+ {#each groupedMessages as messages, i}
186
+ {#if messages.length}
187
+ {@const role = messages[0].role === "user" ? 'user' : 'bot'}
188
+ {@const avatar_img = avatar_images[role === "user" ? 0 : 1]}
189
+ <div class="message-row {layout} {role === "user" ? 'user-row' : 'bot-row'}">
190
+ {#if avatar_img}
191
+ <div class="avatar-container">
192
+ <Image
193
+ class="avatar-image"
194
+ src={avatar_img.url}
195
+ alt="{role} avatar"
196
+ />
197
+ </div>
198
+ {/if}
199
+ <div
200
+ class="message {role === "user" ? 'user' : 'bot'}"
201
+ class:message-fit={layout === "bubble" && !bubble_full_width}
202
+ class:panel-full-width={layout === "panel"}
203
+ class:message-bubble-border={layout === "bubble"}
204
+ class:message-markdown-disabled={!render_markdown}
205
+ style:text-align={rtl && role == 'bot' ? "left" : "right"}
206
+ >
207
+ <button
208
+ data-testid={role}
209
+ class:latest={i === groupedMessages.length - 1}
210
+ class:message-markdown-disabled={!render_markdown}
211
+ style:user-select="text"
212
+ class:selectable
213
+ style:text-align={rtl ? "right" : "left"}
214
+ on:click={() => handle_select(i, messages[0])}
215
+ on:keydown={(e) => {
216
+ if (e.key === "Enter") {
217
+ handle_select(i, messages[0]);
218
+ }
219
+ }}
220
+ dir={rtl ? "rtl" : "ltr"}
221
+ >
222
+ {#each messages as message, thought_index}
223
+
224
+ {#if !isFileMessage(message)}
225
+ <div class:thought={message.thought && thought_index > 0}>
226
+ {#if message.thought_metadata.tool_name}
227
+ <ToolMessage
228
+ title={`Used tool ${message.thought_metadata.tool_name}`}
229
+ >
230
+ <!-- {message.content} -->
231
+ <Markdown
232
+ message={message.content}
233
+ {latex_delimiters}
234
+ {sanitize_html}
235
+ {render_markdown}
236
+ {line_breaks}
237
+ on:load={scroll}
238
+ />
239
+ </ToolMessage>
240
+ {:else if message.thought_metadata.error}
241
+ <ErrorMessage
242
+ >
243
+ <!-- {message.content} -->
244
+ <Markdown
245
+ message={message.content}
246
+ {latex_delimiters}
247
+ {sanitize_html}
248
+ {render_markdown}
249
+ {line_breaks}
250
+ on:load={scroll}
251
+ />
252
+ </ErrorMessage>
253
+ {:else}
254
+ <!-- {message.content} -->
255
+ <Markdown
256
+ message={message.content}
257
+ {latex_delimiters}
258
+ {sanitize_html}
259
+ {render_markdown}
260
+ {line_breaks}
261
+ on:load={scroll}
262
+ />
263
+ {/if}
264
+ </div>
265
+ {:else}
266
+ {#if message.file.mime_type?.includes("audio")}
267
+ <Audio
268
+ data-testid="chatbot-audio"
269
+ controls
270
+ preload="metadata"
271
+ src={message.file?.url}
272
+ title={message.alt_text}
273
+ on:play
274
+ on:pause
275
+ on:ended
276
+ />
277
+ {:else if message !== null && message.file?.mime_type?.includes("video")}
278
+ <Video
279
+ data-testid="chatbot-video"
280
+ controls
281
+ src={message.file?.url}
282
+ title={message.alt_text}
283
+ preload="auto"
284
+ on:play
285
+ on:pause
286
+ on:ended
287
+ >
288
+ <track kind="captions" />
289
+ </Video>
290
+ {:else if message !== null && message.file?.mime_type?.includes("image")}
291
+ <Image
292
+ data-testid="chatbot-image"
293
+ src={message.file?.url}
294
+ alt={message.alt_text}
295
+ />
296
+ {:else if message !== null && message.file?.url !== null}
297
+ <a
298
+ data-testid="chatbot-file"
299
+ href={message.file?.url}
300
+ target="_blank"
301
+ download={window.__is_colab__
302
+ ? null
303
+ : message.file?.orig_name || message.file?.path}
304
+ >
305
+ {message.file?.orig_name || message.file?.path}
306
+ </a>
307
+ {/if}
308
+ {/if}
309
+ {/each}
310
+ </button>
311
+ </div>
312
+ <!-- {#if (likeable && role === 'bot') || (show_copy_button && message && typeof message === "string")}
313
+ <div
314
+ class="message-buttons-{role} message-buttons-{layout} {avatar_images[j] !==
315
+ null && 'with-avatar'}"
316
+ class:message-buttons-fit={layout === "bubble" &&
317
+ !bubble_full_width}
318
+ class:bubble-buttons-user={layout === "bubble"}
319
+ >
320
+ {#if likeable && role === 'bot'}
321
+ <LikeDislike
322
+ handle_action={(selected) =>
323
+ handle_like(i, message, selected)}
324
+ />
325
+ {/if}
326
+ {#if show_copy_button && message && typeof message === "string"}
327
+ <Copy value={message} />
328
+ {/if}
329
+ </div>
330
+ {/if} -->
331
+ </div>
332
+ {/if}
333
+ {/each}
334
+ {#if pending_message}
335
+ <Pending {layout} />
336
+ {/if}
337
+ {:else if placeholder !== null}
338
+ <center>
339
+ <Markdown message={placeholder} {latex_delimiters} />
340
+ </center>
341
+ {/if}
342
+ </div>
343
+ </div>
344
+
345
+ <style>
346
+ .placeholder-container {
347
+ display: flex;
348
+ justify-content: center;
349
+ align-items: center;
350
+ height: 100%;
351
+ }
352
+ .bubble-wrap {
353
+ padding: var(--block-padding);
354
+ width: 100%;
355
+ overflow-y: auto;
356
+ }
357
+
358
+ .panel-wrap {
359
+ width: 100%;
360
+ overflow-y: auto;
361
+ }
362
+
363
+ .message-wrap {
364
+ display: flex;
365
+ flex-direction: column;
366
+ justify-content: space-between;
367
+ }
368
+
369
+ .bubble-gap {
370
+ gap: calc(var(--spacing-xxl) + var(--spacing-lg));
371
+ }
372
+
373
+ .message-wrap > div :not(.avatar-container) :global(img) {
374
+ border-radius: 13px;
375
+ margin: var(--size-2);
376
+ width: 400px;
377
+ max-width: 30vw;
378
+ max-height: auto;
379
+ }
380
+
381
+ .message-wrap > div :global(p:not(:first-child)) {
382
+ margin-top: var(--spacing-xxl);
383
+ }
384
+
385
+ .message {
386
+ position: relative;
387
+ display: flex;
388
+ flex-direction: column;
389
+ align-self: flex-end;
390
+ background: var(--background-fill-secondary);
391
+ width: calc(100% - var(--spacing-xxl));
392
+ color: var(--body-text-color);
393
+ font-size: var(--chatbot-body-text-size);
394
+ overflow-wrap: break-word;
395
+ overflow-x: hidden;
396
+ padding-right: calc(var(--spacing-xxl) + var(--spacing-md));
397
+ padding: calc(var(--spacing-xxl) + var(--spacing-sm));
398
+ }
399
+
400
+ .thought {
401
+ margin-top: var(--spacing-xxl);
402
+ }
403
+
404
+ .message :global(.prose) {
405
+ font-size: var(--chatbot-body-text-size);
406
+ }
407
+
408
+ .message-bubble-border {
409
+ border-width: 1px;
410
+ border-radius: var(--radius-xxl);
411
+ }
412
+
413
+ .message-fit {
414
+ width: fit-content !important;
415
+ }
416
+
417
+ .panel-full-width {
418
+ padding: calc(var(--spacing-xxl) * 2);
419
+ width: 100%;
420
+ }
421
+ .message-markdown-disabled {
422
+ white-space: pre-line;
423
+ }
424
+
425
+ @media (max-width: 480px) {
426
+ .panel-full-width {
427
+ padding: calc(var(--spacing-xxl) * 2);
428
+ }
429
+ }
430
+
431
+ .user {
432
+ align-self: flex-start;
433
+ border-bottom-right-radius: 0;
434
+ text-align: right;
435
+ }
436
+ .bot {
437
+ border-bottom-left-radius: 0;
438
+ text-align: left;
439
+ }
440
+
441
+ /* Colors */
442
+ .bot {
443
+ border-color: var(--border-color-primary);
444
+ background: var(--background-fill-secondary);
445
+ }
446
+
447
+ .user {
448
+ border-color: var(--border-color-accent-subdued);
449
+ background-color: var(--color-accent-soft);
450
+ }
451
+ .message-row {
452
+ display: flex;
453
+ flex-direction: row;
454
+ position: relative;
455
+ }
456
+
457
+ .message-row.panel.user-row {
458
+ background: var(--color-accent-soft);
459
+ }
460
+
461
+ .message-row.panel.bot-row {
462
+ background: var(--background-fill-secondary);
463
+ }
464
+
465
+ .message-row:last-of-type {
466
+ margin-bottom: var(--spacing-xxl);
467
+ }
468
+
469
+ .user-row.bubble {
470
+ flex-direction: row;
471
+ justify-content: flex-end;
472
+ }
473
+ @media (max-width: 480px) {
474
+ .user-row.bubble {
475
+ align-self: flex-end;
476
+ }
477
+
478
+ .bot-row.bubble {
479
+ align-self: flex-start;
480
+ }
481
+ .message {
482
+ width: auto;
483
+ }
484
+ }
485
+ .avatar-container {
486
+ align-self: flex-end;
487
+ position: relative;
488
+ justify-content: center;
489
+ width: 35px;
490
+ height: 35px;
491
+ flex-shrink: 0;
492
+ bottom: 0;
493
+ }
494
+ .user-row.bubble > .avatar-container {
495
+ order: 2;
496
+ margin-left: 10px;
497
+ }
498
+ .bot-row.bubble > .avatar-container {
499
+ margin-right: 10px;
500
+ }
501
+
502
+ .panel > .avatar-container {
503
+ margin-left: 25px;
504
+ align-self: center;
505
+ }
506
+
507
+ .avatar-container :global(img) {
508
+ width: 100%;
509
+ height: 100%;
510
+ object-fit: cover;
511
+ border-radius: 50%;
512
+ }
513
+
514
+ .message-buttons-user,
515
+ .message-buttons-bot {
516
+ border-radius: var(--radius-md);
517
+ display: flex;
518
+ align-items: center;
519
+ bottom: 0;
520
+ height: var(--size-7);
521
+ align-self: self-end;
522
+ position: absolute;
523
+ bottom: -15px;
524
+ margin: 2px;
525
+ padding-left: 5px;
526
+ z-index: 1;
527
+ }
528
+ .message-buttons-bot {
529
+ left: 10px;
530
+ }
531
+ .message-buttons-user {
532
+ right: 5px;
533
+ }
534
+
535
+ .message-buttons-bot.message-buttons-bubble.with-avatar {
536
+ left: 50px;
537
+ }
538
+ .message-buttons-user.message-buttons-bubble.with-avatar {
539
+ right: 50px;
540
+ }
541
+
542
+ .message-buttons-bubble {
543
+ border: 1px solid var(--border-color-accent);
544
+ background: var(--background-fill-secondary);
545
+ }
546
+
547
+ .message-buttons-panel {
548
+ left: unset;
549
+ right: 0px;
550
+ top: 0px;
551
+ }
552
+
553
+ .share-button {
554
+ position: absolute;
555
+ top: 4px;
556
+ right: 6px;
557
+ }
558
+
559
+ .selectable {
560
+ cursor: pointer;
561
+ }
562
+
563
+ @keyframes dot-flashing {
564
+ 0% {
565
+ opacity: 0.8;
566
+ }
567
+ 50% {
568
+ opacity: 0.5;
569
+ }
570
+ 100% {
571
+ opacity: 0.8;
572
+ }
573
+ }
574
+
575
+ .message-wrap .message :global(a) {
576
+ color: var(--color-text-link);
577
+ text-decoration: underline;
578
+ }
579
+
580
+ .message-wrap .bot :global(table),
581
+ .message-wrap .bot :global(tr),
582
+ .message-wrap .bot :global(td),
583
+ .message-wrap .bot :global(th) {
584
+ border: 1px solid var(--border-color-primary);
585
+ }
586
+
587
+ .message-wrap .user :global(table),
588
+ .message-wrap .user :global(tr),
589
+ .message-wrap .user :global(td),
590
+ .message-wrap .user :global(th) {
591
+ border: 1px solid var(--border-color-accent);
592
+ }
593
+
594
+ /* Lists */
595
+ .message-wrap :global(ol),
596
+ .message-wrap :global(ul) {
597
+ padding-inline-start: 2em;
598
+ }
599
+
600
+ /* KaTeX */
601
+ .message-wrap :global(span.katex) {
602
+ font-size: var(--text-lg);
603
+ direction: ltr;
604
+ }
605
+
606
+ /* Copy button */
607
+ .message-wrap :global(div[class*="code_wrap"] > button) {
608
+ position: absolute;
609
+ top: var(--spacing-md);
610
+ right: var(--spacing-md);
611
+ z-index: 1;
612
+ cursor: pointer;
613
+ border-bottom-left-radius: var(--radius-sm);
614
+ padding: 5px;
615
+ padding: var(--spacing-md);
616
+ width: 25px;
617
+ height: 25px;
618
+ }
619
+
620
+ .message-wrap :global(code > button > span) {
621
+ position: absolute;
622
+ top: var(--spacing-md);
623
+ right: var(--spacing-md);
624
+ width: 12px;
625
+ height: 12px;
626
+ }
627
+ .message-wrap :global(.check) {
628
+ position: absolute;
629
+ top: 0;
630
+ right: 0;
631
+ opacity: 0;
632
+ z-index: var(--layer-top);
633
+ transition: opacity 0.2s;
634
+ background: var(--background-fill-primary);
635
+ padding: var(--size-1);
636
+ width: 100%;
637
+ height: 100%;
638
+ color: var(--body-text-color);
639
+ }
640
+
641
+ .message-wrap :global(pre) {
642
+ position: relative;
643
+ }
644
+ </style>
src/frontend/shared/Copy.svelte ADDED
@@ -0,0 +1,79 @@
1
+ <script lang="ts">
2
+ import { onDestroy } from "svelte";
3
+ import { Copy, Check } from "@gradio/icons";
4
+
5
+ let copied = false;
6
+ export let value: string;
7
+ let timer: NodeJS.Timeout;
8
+
9
+ function copy_feedback(): void {
10
+ copied = true;
11
+ if (timer) clearTimeout(timer);
12
+ timer = setTimeout(() => {
13
+ copied = false;
14
+ }, 2000);
15
+ }
16
+
17
+ async function handle_copy(): Promise<void> {
18
+ if ("clipboard" in navigator) {
19
+ await navigator.clipboard.writeText(value);
20
+ copy_feedback();
21
+ } else {
22
+ const textArea = document.createElement("textarea");
23
+ textArea.value = value;
24
+
25
+ textArea.style.position = "absolute";
26
+ textArea.style.left = "-999999px";
27
+
28
+ document.body.prepend(textArea);
29
+ textArea.select();
30
+
31
+ try {
32
+ document.execCommand("copy");
33
+ copy_feedback();
34
+ } catch (error) {
35
+ console.error(error);
36
+ } finally {
37
+ textArea.remove();
38
+ }
39
+ }
40
+ }
41
+
42
+ onDestroy(() => {
43
+ if (timer) clearTimeout(timer);
44
+ });
45
+ </script>
46
+
47
+ <button
48
+ on:click={handle_copy}
49
+ class="action"
50
+ title="copy"
51
+ aria-label={copied ? "Copied message" : "Copy message"}
52
+ >
53
+ {#if !copied}
54
+ <Copy />
55
+ {/if}
56
+ {#if copied}
57
+ <Check />
58
+ {/if}
59
+ </button>
60
+
61
+ <style>
62
+ button {
63
+ position: relative;
64
+ top: 0;
65
+ right: 0;
66
+ cursor: pointer;
67
+ color: var(--body-text-color-subdued);
68
+ margin-right: 5px;
69
+ }
70
+
71
+ button:hover {
72
+ color: var(--body-text-color);
73
+ }
74
+
75
+ .action {
76
+ width: 15px;
77
+ height: 14px;
78
+ }
79
+ </style>
src/frontend/shared/ErrorMessage.svelte ADDED
@@ -0,0 +1,49 @@
1
+ <script lang="ts">
2
+ let expanded = false;
3
+
4
+ function toggleExpanded() {
5
+ expanded = !expanded;
6
+ }
7
+ </script>
8
+
9
+ <!-- svelte-ignore a11y-click-events-have-key-events -->
10
+ <!-- svelte-ignore a11y-no-static-element-interactions -->
11
+ <div class="box" on:click={toggleExpanded}>
12
+ <div class="title">
13
+ <div class="icon">
14
+ 💥
15
+ </div>
16
+ <span>{'Error'}</span>
17
+ </div>
18
+ {#if expanded}
19
+ <div class="content">
20
+ <slot></slot>
21
+ </div>
22
+ {/if}
23
+ </div>
24
+
25
+ <style>
26
+ .box {
27
+ border-radius: 4px;
28
+ cursor: pointer;
29
+ max-width: max-content;
30
+ background: var(--color-accent-soft);
31
+ }
32
+
33
+ .title {
34
+ display: flex;
35
+ align-items: center;
36
+ padding: 8px;
37
+ color: var(--color-red-50);
38
+ }
39
+
40
+ .icon {
41
+ width: 18px;
42
+ height: 18px;
43
+ margin-right: 8px;
44
+ }
45
+
46
+ .content {
47
+ padding: 8px;
48
+ }
49
+ </style>
src/frontend/shared/LikeDislike.svelte ADDED
@@ -0,0 +1,46 @@
1
+ <script lang="ts">
2
+ import { Like } from "@gradio/icons";
3
+ import { Dislike } from "@gradio/icons";
4
+
5
+ export let handle_action: (selected: string | null) => void;
6
+
7
+ let selected: "like" | "dislike" | null = null;
8
+ </script>
9
+
10
+ <button
11
+ on:click={() => {
12
+ selected = "like";
13
+ handle_action(selected);
14
+ }}
15
+ aria-label={selected === "like" ? "clicked like" : "like"}
16
+ >
17
+ <Like selected={selected === "like"} />
18
+ </button>
19
+
20
+ <button
21
+ on:click={() => {
22
+ selected = "dislike";
23
+ handle_action(selected);
24
+ }}
25
+ aria-label={selected === "dislike" ? "clicked dislike" : "dislike"}
26
+ >
27
+ <Dislike selected={selected === "dislike"} />
28
+ </button>
29
+
30
+ <style>
31
+ button {
32
+ position: relative;
33
+ top: 0;
34
+ right: 0;
35
+ cursor: pointer;
36
+ color: var(--body-text-color-subdued);
37
+ width: 17px;
38
+ height: 17px;
39
+ margin-right: 5px;
40
+ }
41
+
42
+ button:hover,
43
+ button:focus {
44
+ color: var(--body-text-color);
45
+ }
46
+ </style>
src/frontend/shared/Pending.svelte ADDED
@@ -0,0 +1,60 @@
1
+ <script lang="ts">
2
+ export let layout = "bubble";
3
+ </script>
4
+
5
+ <div
6
+ class="message pending"
7
+ role="status"
8
+ aria-label="Loading response"
9
+ aria-live="polite"
10
+ style:border-radius={layout === "bubble" ? "var(--radius-xxl)" : "none"}
11
+ >
12
+ <span class="sr-only">Loading content</span>
13
+ <div class="dot-flashing" />
14
+ &nbsp;
15
+ <div class="dot-flashing" />
16
+ &nbsp;
17
+ <div class="dot-flashing" />
18
+ </div>
19
+
20
+ <style>
21
+ .pending {
22
+ background: var(--color-accent-soft);
23
+ display: flex;
24
+ flex-direction: row;
25
+ justify-content: center;
26
+ align-items: center;
27
+ align-self: center;
28
+ gap: 2px;
29
+ width: 100%;
30
+ height: var(--size-16);
31
+ }
32
+ .dot-flashing {
33
+ animation: flash 1s infinite ease-in-out;
34
+ border-radius: 5px;
35
+ background-color: var(--body-text-color);
36
+ width: 7px;
37
+ height: 7px;
38
+ color: var(--body-text-color);
39
+ }
40
+ @keyframes flash {
41
+ 0%,
42
+ 100% {
43
+ opacity: 0;
44
+ }
45
+ 50% {
46
+ opacity: 1;
47
+ }
48
+ }
49
+
50
+ .dot-flashing:nth-of-type(1) {
51
+ animation-delay: 0s;
52
+ }
53
+
54
+ .dot-flashing:nth-of-type(2) {
55
+ animation-delay: 0.33s;
56
+ }
57
+ .dot-flashing:nth-of-type(3) {
58
+ animation-delay: 0.66s;
59
+ }
60
+ </style>
src/frontend/shared/Tool.svelte ADDED
@@ -0,0 +1,11 @@
1
+ <!-- <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="1.5" stroke-linecap="round" stroke-linejoin="round" class="feather feather-file"><path d="M13 2H6a2 2 0 0 0-2 2v16a2 2 0 0 0 2 2h12a2 2 0 0 0 2-2V9z"></path><polyline points="13 2 13 9 20 9"></polyline></svg> -->
2
+ <svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" viewBox="0 0 512 512" fill="currentColor">
3
+ <path d="M478.409,116.617c-0.368-4.271-3.181-7.94-7.2-9.403c-4.029-1.472-8.539-0.47-11.57,2.556l-62.015,62.011l-68.749-21.768
4
+ l-21.768-68.748l62.016-62.016c3.035-3.032,4.025-7.543,2.563-11.565c-1.477-4.03-5.137-6.837-9.417-7.207
5
+ c-37.663-3.245-74.566,10.202-101.247,36.887c-36.542,36.545-46.219,89.911-29.083,135.399c-1.873,1.578-3.721,3.25-5.544,5.053
6
+ L19.386,373.152c-0.073,0.071-0.145,0.149-0.224,0.219c-24.345,24.346-24.345,63.959,0,88.309
7
+ c24.349,24.344,63.672,24.048,88.013-0.298c0.105-0.098,0.201-0.196,0.297-0.305l193.632-208.621
8
+ c1.765-1.773,3.404-3.628,4.949-5.532c45.5,17.167,98.9,7.513,135.474-29.056C468.202,191.181,481.658,154.275,478.409,116.617z
9
+ M75.98,435.38c-8.971,8.969-23.5,8.963-32.47,0c-8.967-8.961-8.967-23.502,0-32.466c8.97-8.963,23.499-8.963,32.47,0
10
+ C84.947,411.878,84.947,426.419,75.98,435.38z"/>
11
+ </svg>
src/frontend/shared/ToolMessage.svelte ADDED
@@ -0,0 +1,65 @@
1
+ <script lang="ts">
2
+ import Tool from "./Tool.svelte";
3
+
4
+ export let title: string;
5
+ let expanded = false;
6
+
7
+ function toggleExpanded() {
8
+ expanded = !expanded;
9
+ }
10
+ </script>
11
+
12
+ <!-- svelte-ignore a11y-click-events-have-key-events -->
13
+ <!-- svelte-ignore a11y-no-static-element-interactions -->
14
+ <div class="box" on:click={toggleExpanded}>
15
+ <div class="title">
16
+ <div class="wrench-icon">
17
+ 🛠️
18
+ </div>
19
+ <span>{title}</span>
20
+ </div>
21
+ {#if expanded}
22
+ <div class="content">
23
+ <slot></slot>
24
+ </div>
25
+ {/if}
26
+ </div>
27
+
28
+ <style>
29
+ .box {
30
+ border-radius: 4px;
31
+ cursor: pointer;
32
+ max-width: max-content;
33
+ background: var(--color-accent-soft);
34
+ }
35
+
36
+ .title {
37
+ display: flex;
38
+ align-items: center;
39
+ padding: 8px;
40
+ }
41
+
42
+ .wrench-icon {
43
+ width: 18px;
44
+ height: 18px;
45
+ margin-right: 8px;
46
+ }
47
+
48
+ .icon {
49
+ width: 0;
50
+ height: 0;
51
+ border-left: 5px solid transparent;
52
+ border-right: 5px solid transparent;
53
+ border-top: 5px solid purple;
54
+ margin-left: auto;
55
+ transition: transform 0.3s;
56
+ }
57
+
58
+ .icon.expanded {
59
+ transform: rotate(180deg);
60
+ }
61
+
62
+ .content {
63
+ padding: 8px;
64
+ }
65
+ </style>
src/frontend/shared/autorender.d.ts ADDED
@@ -0,0 +1 @@
1
+ declare module "katex/dist/contrib/auto-render.js";
src/frontend/shared/utils.ts ADDED
@@ -0,0 +1,57 @@
1
+ import type { FileData } from "@gradio/client";
2
+ import { uploadToHuggingFace } from "@gradio/utils";
3
+
4
+ export const format_chat_for_sharing = async (
5
+ chat: [string | FileData | null, string | FileData | null][]
6
+ ): Promise<string> => {
7
+ let messages = await Promise.all(
8
+ chat.map(async (message_pair) => {
9
+ return await Promise.all(
10
+ message_pair.map(async (message, i) => {
11
+ if (message === null) return "";
12
+ let speaker_emoji = i === 0 ? "😃" : "🤖";
13
+ let html_content = "";
14
+
15
+ if (typeof message === "string") {
16
+ const regexPatterns = {
17
+ audio: /<audio.*?src="(\/file=.*?)"/g,
18
+ video: /<video.*?src="(\/file=.*?)"/g,
19
+ image: /<img.*?src="(\/file=.*?)".*?\/>|!\[.*?\]\((\/file=.*?)\)/g
20
+ };
21
+
22
+ html_content = message;
23
+
24
+ for (let [_, regex] of Object.entries(regexPatterns)) {
25
+ let match;
26
+
27
+ while ((match = regex.exec(message)) !== null) {
28
+ const fileUrl = match[1] || match[2];
29
+ const newUrl = await uploadToHuggingFace(fileUrl, "url");
30
+ html_content = html_content.replace(fileUrl, newUrl);
31
+ }
32
+ }
33
+ } else {
34
+ if (!message?.url) return "";
35
+ const file_url = await uploadToHuggingFace(message.url, "url");
36
+ if (message.mime_type?.includes("audio")) {
37
+ html_content = `<audio controls src="${file_url}"></audio>`;
38
+ } else if (message.mime_type?.includes("video")) {
39
+ html_content = file_url;
40
+ } else if (message.mime_type?.includes("image")) {
41
+ html_content = `<img src="${file_url}" />`;
42
+ }
43
+ }
44
+
45
+ return `${speaker_emoji}: ${html_content}`;
46
+ })
47
+ );
48
+ })
49
+ );
50
+ return messages
51
+ .map((message_pair) =>
52
+ message_pair.join(
53
+ message_pair[0] !== "" && message_pair[1] !== "" ? "\n" : ""
54
+ )
55
+ )
56
+ .join("\n");
57
+ };
src/frontend/types.ts ADDED
@@ -0,0 +1,24 @@
1
+ import type { FileData } from "@gradio/client";
2
+
3
+ type MessageRole = "system" | "user" | "assistant" | "tool";
4
+
5
+
6
+ export interface ThoughtMetadata {
7
+ error: boolean;
8
+ tool_name: string;
9
+ }
10
+
11
+ export interface Message {
12
+ role: MessageRole;
13
+ thought_metadata: ThoughtMetadata;
14
+ thought: boolean;
15
+ }
16
+
17
+ export interface ChatMessage extends Message {
18
+ content: string;
19
+ }
20
+
21
+ export interface ChatFileMessage extends Message {
22
+ file: FileData;
23
+ alt_text?: string | null;
24
+ }
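
The `Message`, `ChatMessage`, and `ChatFileMessage` interfaces above describe the payload shape the frontend renders: `thought` marks a message as part of the agent's reasoning, `thought_metadata.tool_name` presumably supplies the title of the expandable `ToolMessage` box, and `thought_metadata.error` presumably switches it to the `ErrorMessage` box. As a rough sketch of what the Python backend would serialize (field names come from the TypeScript above; the example values and the `FileData` fields are illustrative assumptions, not the package's documented API):

```python
# Illustrative payloads only -- shaped after the TypeScript interfaces above,
# with made-up example values.

# A plain assistant reply (ChatMessage).
assistant_message = {
    "role": "assistant",
    "content": "Here is the image you asked for.",
    "thought": False,
    "thought_metadata": {"error": False, "tool_name": ""},
}

# A tool-call thought (ChatMessage): tool_name presumably becomes the title
# of the collapsible ToolMessage box in the UI.
tool_thought = {
    "role": "tool",
    "content": 'image_generator(prompt="Aston Martin DB5, photorealistic")',
    "thought": True,
    "thought_metadata": {"error": False, "tool_name": "image_generator"},
}

# A file attachment (ChatFileMessage): `file` follows gradio's FileData shape.
file_message = {
    "role": "assistant",
    "thought": False,
    "thought_metadata": {"error": False, "tool_name": ""},
    "file": {"path": "generated/db5.png", "mime_type": "image/png"},
    "alt_text": "Aston Martin DB5",
}
```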
src/pyproject.toml ADDED
@@ -0,0 +1,51 @@
1
+ [build-system]
2
+ requires = [
3
+ "hatchling",
4
+ "hatch-requirements-txt",
5
+ "hatch-fancy-pypi-readme>=22.5.0",
6
+ ]
7
+ build-backend = "hatchling.build"
8
+
9
+ [project]
10
+ name = "gradio_agentchatbot"
11
+ version = "0.0.1"
12
+ description = "Chat with agents 🤖 and see the tools they use 🛠️"
13
+ readme = "README.md"
14
+ license = "apache-2.0"
15
+ requires-python = ">=3.8"
16
+ authors = [{ name = "Freddy Boulton", email = "alfonsoboulton@gmail.com" }]
17
+ keywords = ["gradio-custom-component", "gradio-template-Chatbot", "chatbot", "agents", "streaming", "tools"]
18
+ # Add dependencies here
19
+ dependencies = ["gradio>=4.0,<5.0"]
20
+ classifiers = [
21
+ 'Development Status :: 3 - Alpha',
22
+ 'Operating System :: OS Independent',
23
+ 'Programming Language :: Python :: 3',
24
+ 'Programming Language :: Python :: 3 :: Only',
25
+ 'Programming Language :: Python :: 3.8',
26
+ 'Programming Language :: Python :: 3.9',
27
+ 'Programming Language :: Python :: 3.10',
28
+ 'Programming Language :: Python :: 3.11',
29
+ 'Topic :: Scientific/Engineering',
30
+ 'Topic :: Scientific/Engineering :: Artificial Intelligence',
31
+ 'Topic :: Scientific/Engineering :: Visualization',
32
+ ]
33
+
34
+ # The repository and space URLs are optional, but recommended.
35
+ # Adding a repository URL will create a badge in the auto-generated README that links to the repository.
36
+ # Adding a space URL will create a badge in the auto-generated README that links to the space.
37
+ # This will make it easy for people to find your deployed demo or source code when they
38
+ # encounter your project in the wild.
39
+
40
+ # [project.urls]
41
+ # repository = "your github repository"
42
+ # space = "your space url"
43
+
44
+ [project.optional-dependencies]
45
+ dev = ["build", "twine"]
46
+
47
+ [tool.hatch.build]
48
+ artifacts = ["/backend/gradio_agentchatbot/templates", "*.pyi"]
49
+
50
+ [tool.hatch.build.targets.wheel]
51
+ packages = ["/backend/gradio_agentchatbot"]
src/scripts/ExtractStreamingLogs.ipynb ADDED
@@ -0,0 +1,253 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 1,
6
+ "metadata": {},
7
+ "outputs": [],
8
+ "source": [
9
+ "import os\n",
10
+ "\n",
11
+ "os.environ[\"SERPAPI_API_KEY\"] = \"3d009ed0a9764853b3e75ee2c70ceb3aabd92b810456307c5af711559ec4aff7\""
12
+ ]
13
+ },
14
+ {
15
+ "cell_type": "code",
16
+ "execution_count": 2,
17
+ "metadata": {},
18
+ "outputs": [
19
+ {
20
+ "name": "stderr",
21
+ "output_type": "stream",
22
+ "text": [
23
+ "/Users/freddy/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html\n",
24
+ " from .autonotebook import tqdm as notebook_tqdm\n",
25
+ "You're loading a tool from the Hub from None. Please make sure this is a source that you trust as the code within that tool will be executed on your machine. Always verify the code of the tools that you load. We recommend specifying a `revision` to ensure you're loading the code that you have checked.\n"
26
+ ]
27
+ },
28
+ {
29
+ "ename": "ImportError",
30
+ "evalue": "Could not import serpapi python package. Please install it with `pip install google-search-results`.",
31
+ "output_type": "error",
32
+ "traceback": [
33
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
34
+ "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)",
35
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/langchain_community/utilities/serpapi.py:68\u001b[0m, in \u001b[0;36mSerpAPIWrapper.validate_environment\u001b[0;34m(cls, values)\u001b[0m\n\u001b[1;32m 67\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m---> 68\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mserpapi\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m GoogleSearch\n\u001b[1;32m 70\u001b[0m values[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msearch_engine\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m GoogleSearch\n",
36
+ "\u001b[0;31mImportError\u001b[0m: cannot import name 'GoogleSearch' from 'serpapi' (/Users/freddy/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/serpapi/__init__.py)",
37
+ "\nDuring handling of the above exception, another exception occurred:\n",
38
+ "\u001b[0;31mImportError\u001b[0m Traceback (most recent call last)",
39
+ "Cell \u001b[0;32mIn[2], line 9\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[38;5;66;03m# Import tool from Hub\u001b[39;00m\n\u001b[1;32m 6\u001b[0m image_generation_tool \u001b[38;5;241m=\u001b[39m load_tool(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mm-ric/text-to-image\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[0;32m----> 9\u001b[0m search_tool \u001b[38;5;241m=\u001b[39m Tool\u001b[38;5;241m.\u001b[39mfrom_langchain(\u001b[43mload_tools\u001b[49m\u001b[43m(\u001b[49m\u001b[43m[\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43mserpapi\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m]\u001b[49m\u001b[43m)\u001b[49m[\u001b[38;5;241m0\u001b[39m])\n\u001b[1;32m 11\u001b[0m llm_engine \u001b[38;5;241m=\u001b[39m HfEngine(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mmeta-llama/Meta-Llama-3-70B-Instruct\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 12\u001b[0m \u001b[38;5;66;03m# Initialize the agent with both tools\u001b[39;00m\n",
40
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/load_tools.py:726\u001b[0m, in \u001b[0;36mload_tools\u001b[0;34m(tool_names, llm, callbacks, allow_dangerous_tools, **kwargs)\u001b[0m\n\u001b[1;32m 724\u001b[0m _get_tool_func, extra_keys \u001b[38;5;241m=\u001b[39m _EXTRA_OPTIONAL_TOOLS[name]\n\u001b[1;32m 725\u001b[0m sub_kwargs \u001b[38;5;241m=\u001b[39m {k: kwargs[k] \u001b[38;5;28;01mfor\u001b[39;00m k \u001b[38;5;129;01min\u001b[39;00m extra_keys \u001b[38;5;28;01mif\u001b[39;00m k \u001b[38;5;129;01min\u001b[39;00m kwargs}\n\u001b[0;32m--> 726\u001b[0m tool \u001b[38;5;241m=\u001b[39m \u001b[43m_get_tool_func\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43msub_kwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 727\u001b[0m tools\u001b[38;5;241m.\u001b[39mappend(tool)\n\u001b[1;32m 728\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n",
41
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/langchain_community/agent_toolkits/load_tools.py:380\u001b[0m, in \u001b[0;36m_get_serpapi\u001b[0;34m(**kwargs)\u001b[0m\n\u001b[1;32m 376\u001b[0m \u001b[38;5;28;01mdef\u001b[39;00m \u001b[38;5;21m_get_serpapi\u001b[39m(\u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs: Any) \u001b[38;5;241m-\u001b[39m\u001b[38;5;241m>\u001b[39m BaseTool:\n\u001b[1;32m 377\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m Tool(\n\u001b[1;32m 378\u001b[0m name\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mSearch\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[1;32m 379\u001b[0m description\u001b[38;5;241m=\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mA search engine. Useful for when you need to answer questions about current events. Input should be a search query.\u001b[39m\u001b[38;5;124m\"\u001b[39m,\n\u001b[0;32m--> 380\u001b[0m func\u001b[38;5;241m=\u001b[39m\u001b[43mSerpAPIWrapper\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\u001b[38;5;241m.\u001b[39mrun,\n\u001b[1;32m 381\u001b[0m coroutine\u001b[38;5;241m=\u001b[39mSerpAPIWrapper(\u001b[38;5;241m*\u001b[39m\u001b[38;5;241m*\u001b[39mkwargs)\u001b[38;5;241m.\u001b[39marun,\n\u001b[1;32m 382\u001b[0m )\n",
42
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/pydantic/v1/main.py:339\u001b[0m, in \u001b[0;36mBaseModel.__init__\u001b[0;34m(__pydantic_self__, **data)\u001b[0m\n\u001b[1;32m 333\u001b[0m \u001b[38;5;250m\u001b[39m\u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 334\u001b[0m \u001b[38;5;124;03mCreate a new model by parsing and validating input data from keyword arguments.\u001b[39;00m\n\u001b[1;32m 335\u001b[0m \n\u001b[1;32m 336\u001b[0m \u001b[38;5;124;03mRaises ValidationError if the input data cannot be parsed to form a valid model.\u001b[39;00m\n\u001b[1;32m 337\u001b[0m \u001b[38;5;124;03m\"\"\"\u001b[39;00m\n\u001b[1;32m 338\u001b[0m \u001b[38;5;66;03m# Uses something other than `self` the first arg to allow \"self\" as a settable attribute\u001b[39;00m\n\u001b[0;32m--> 339\u001b[0m values, fields_set, validation_error \u001b[38;5;241m=\u001b[39m \u001b[43mvalidate_model\u001b[49m\u001b[43m(\u001b[49m\u001b[43m__pydantic_self__\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[38;5;18;43m__class__\u001b[39;49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mdata\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 340\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m validation_error:\n\u001b[1;32m 341\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m validation_error\n",
43
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/pydantic/v1/main.py:1100\u001b[0m, in \u001b[0;36mvalidate_model\u001b[0;34m(model, input_data, cls)\u001b[0m\n\u001b[1;32m 1098\u001b[0m \u001b[38;5;28;01mcontinue\u001b[39;00m\n\u001b[1;32m 1099\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[0;32m-> 1100\u001b[0m values \u001b[38;5;241m=\u001b[39m \u001b[43mvalidator\u001b[49m\u001b[43m(\u001b[49m\u001b[43mcls_\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[43mvalues\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 1101\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m (\u001b[38;5;167;01mValueError\u001b[39;00m, \u001b[38;5;167;01mTypeError\u001b[39;00m, \u001b[38;5;167;01mAssertionError\u001b[39;00m) \u001b[38;5;28;01mas\u001b[39;00m exc:\n\u001b[1;32m 1102\u001b[0m errors\u001b[38;5;241m.\u001b[39mappend(ErrorWrapper(exc, loc\u001b[38;5;241m=\u001b[39mROOT_KEY))\n",
44
+ "File \u001b[0;32m~/sources/components/chatbot-with-tools/.venv/lib/python3.10/site-packages/langchain_community/utilities/serpapi.py:72\u001b[0m, in \u001b[0;36mSerpAPIWrapper.validate_environment\u001b[0;34m(cls, values)\u001b[0m\n\u001b[1;32m 70\u001b[0m values[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124msearch_engine\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m GoogleSearch\n\u001b[1;32m 71\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mImportError\u001b[39;00m:\n\u001b[0;32m---> 72\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mImportError\u001b[39;00m(\n\u001b[1;32m 73\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCould not import serpapi python package. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 74\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mPlease install it with `pip install google-search-results`.\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 75\u001b[0m )\n\u001b[1;32m 76\u001b[0m \u001b[38;5;28;01mreturn\u001b[39;00m values\n",
45
+ "\u001b[0;31mImportError\u001b[0m: Could not import serpapi python package. Please install it with `pip install google-search-results`."
46
+ ]
47
+ }
48
+ ],
49
+ "source": [
50
+ "from transformers import Tool, load_tool, ReactCodeAgent, HfEngine\n",
51
+ "# Import tool from LangChain\n",
52
+ "from langchain.agents import load_tools\n",
53
+ "\n",
54
+ "# Import tool from Hub\n",
55
+ "image_generation_tool = load_tool(\"m-ric/text-to-image\")\n",
56
+ "\n",
57
+ "\n",
58
+ "search_tool = Tool.from_langchain(load_tools([\"serpapi\"])[0])\n",
59
+ "\n",
60
+ "llm_engine = HfEngine(\"meta-llama/Meta-Llama-3-70B-Instruct\")\n",
61
+ "# Initialize the agent with both tools\n",
62
+ "agent = ReactCodeAgent(tools=[image_generation_tool, search_tool], llm_engine=llm_engine)\n",
63
+ "\n",
64
+ "# Run it!\n",
65
+ "#agent.run(\"Generate me a photo of the car that James bond drove in the latest movie.\")"
66
+ ]
67
+ },
68
+ {
69
+ "cell_type": "code",
70
+ "execution_count": null,
71
+ "metadata": {},
72
+ "outputs": [],
73
+ "source": [
74
+ "import json\n",
75
+ "\n",
76
+ "json.dump(agent.logs, open(\"logs.json\", \"w\"), indent=2)"
77
+ ]
78
+ },
79
+ {
80
+ "cell_type": "code",
81
+ "execution_count": null,
82
+ "metadata": {},
83
+ "outputs": [],
84
+ "source": [
85
+ "from transformers.agents import Agent\n",
86
+ "from gradio.data_classes import GradioModel\n",
87
+ "from typing import Literal, List, Generator, Optional\n",
88
+ "from threading import Thread\n",
89
+ "import time\n",
90
+ "\n",
91
+ "class OpenAIMessage(GradioModel):\n",
92
+ " role: Literal[\"system\", \"user\", \"assistant\", \"tool\"]\n",
93
+ " content: str\n",
94
+ " reasoning: bool = False\n",
95
+ " tool_name: Optional[str] = None\n"
96
+ ]
97
+ },
98
+ {
99
+ "cell_type": "code",
100
+ "execution_count": null,
101
+ "metadata": {},
102
+ "outputs": [],
103
+ "source": [
104
+ "def stream_from_agent(\n",
105
+ " agent: Agent,\n",
106
+ " prompt: str\n",
107
+ ") -> Generator[List[OpenAIMessage], None, None]:\n",
108
+ " \"\"\"Run Python code in a process and capture logs in real-time to yield them.\"\"\"\n",
109
+ "\n",
110
+ " thread = Thread(target=agent.run, args=(prompt,))\n",
111
+ " num_messages = 0\n",
112
+ "\n",
113
+ " # Start process and pull logs while it runs\n",
114
+ " thread.start()\n",
115
+ " while thread.is_alive():\n",
116
+ " if len(agent.logs) > num_messages:\n",
117
+ " new_messages = agent.logs[num_messages:]\n",
118
+ " for message in new_messages:\n",
119
+ " if not len(message):\n",
120
+ " continue\n",
121
+ " if message.get(\"rationale\"):\n",
122
+ " yield OpenAIMessage(\n",
123
+ " role=\"assistant\",\n",
124
+ " content=message[\"rationale\"],\n",
125
+ " reasoning=True\n",
126
+ " )\n",
127
+ " if message.get(\"tool_call\"):\n",
128
+ " yield OpenAIMessage(\n",
129
+ " role=\"tool\",\n",
130
+ " tool_name=message[\"tool_call\"][\"tool_name\"],\n",
131
+ " content=message['tool_call'][\"tool_arguments\"],\n",
132
+ " reasoning=True\n",
133
+ " )\n",
134
+ " num_messages = len(agent.logs)\n",
135
+ " time.sleep(0.1)\n",
136
+ "\n",
137
+ " thread.join(0.1)\n",
138
+ "\n",
139
+ " if len(agent.logs) > num_messages:\n",
140
+ " new_messages = agent.logs[num_messages:]\n",
141
+ " for message in new_messages:\n",
142
+ " if message.get(\"rationale\"):\n",
143
+ " yield OpenAIMessage(\n",
144
+ " role=\"assistant\",\n",
145
+ " content=message[\"rationale\"],\n",
146
+ " reasoning=True\n",
147
+ " )\n",
148
+ " if message.get(\"tool_call\"):\n",
149
+ " yield OpenAIMessage(\n",
150
+ " role=\"tool\",\n",
151
+ " tool_name=message[\"tool_call\"][\"tool_name\"],\n",
152
+ " content=message.get(\"tool_arguments\", \"\"),\n",
153
+ " )\n",
154
+ " num_messages = len(agent.logs)"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": null,
160
+ "metadata": {},
161
+ "outputs": [],
162
+ "source": [
163
+ "messages = []\n",
164
+ "for msg in stream_from_agent(agent, \"Generate me a photo of a cartoon cat.\"):\n",
165
+ " messages.append(msg)\n",
166
+ " print(\"MSG\", msg)\n",
167
+ "print(messages)"
168
+ ]
169
+ },
170
+ {
171
+ "cell_type": "code",
172
+ "execution_count": null,
173
+ "metadata": {},
174
+ "outputs": [],
175
+ "source": [
176
+ "agent.outputs"
177
+ ]
178
+ },
179
+ {
180
+ "cell_type": "code",
181
+ "execution_count": null,
182
+ "metadata": {},
183
+ "outputs": [],
184
+ "source": [
185
+ "agent.logs[2]"
186
+ ]
187
+ },
188
+ {
189
+ "cell_type": "code",
190
+ "execution_count": null,
191
+ "metadata": {},
192
+ "outputs": [],
193
+ "source": [
194
+ "agent.logs[3]"
195
+ ]
196
+ },
197
+ {
198
+ "cell_type": "code",
199
+ "execution_count": null,
200
+ "metadata": {},
201
+ "outputs": [],
202
+ "source": [
203
+ "agent.run(\"What kind of car is this?\")"
204
+ ]
205
+ },
206
+ {
207
+ "cell_type": "code",
208
+ "execution_count": null,
209
+ "metadata": {},
210
+ "outputs": [],
211
+ "source": [
212
+ "agent.logs"
213
+ ]
214
+ },
215
+ {
216
+ "cell_type": "code",
217
+ "execution_count": null,
218
+ "metadata": {},
219
+ "outputs": [],
220
+ "source": [
221
+ "agent.run??"
222
+ ]
223
+ },
224
+ {
225
+ "cell_type": "code",
226
+ "execution_count": null,
227
+ "metadata": {},
228
+ "outputs": [],
229
+ "source": []
230
+ }
231
+ ],
232
+ "metadata": {
233
+ "kernelspec": {
234
+ "display_name": ".venv",
235
+ "language": "python",
236
+ "name": "python3"
237
+ },
238
+ "language_info": {
239
+ "codemirror_mode": {
240
+ "name": "ipython",
241
+ "version": 3
242
+ },
243
+ "file_extension": ".py",
244
+ "mimetype": "text/x-python",
245
+ "name": "python",
246
+ "nbconvert_exporter": "python",
247
+ "pygments_lexer": "ipython3",
248
+ "version": "3.10.13"
249
+ }
250
+ },
251
+ "nbformat": 4,
252
+ "nbformat_minor": 2
253
+ }
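
The streaming pattern prototyped in the notebook cell above reduces to: run `agent.run(prompt)` on a background thread, poll `agent.logs` for new entries, and emit one message per `rationale` and `tool_call` found. Below is a condensed sketch of that same cell (the `rationale`/`tool_call` key names are taken from the notebook and from `logs.json` below; everything else is an assumption and is not the packaged `stream_from_transformers_agent` implementation):

```python
import time
from threading import Thread
from typing import Generator


def stream_agent_messages(agent, prompt: str) -> Generator[dict, None, None]:
    """Condensed form of the notebook cell above: run the agent on a
    background thread, poll agent.logs, and yield one message dict per
    new rationale / tool call."""
    thread = Thread(target=agent.run, args=(prompt,))
    thread.start()
    seen = 0
    # Keep polling while the agent runs, plus one final pass for late entries.
    while thread.is_alive() or seen < len(agent.logs):
        for entry in agent.logs[seen:]:
            if entry.get("rationale"):
                yield {"role": "assistant", "content": entry["rationale"], "thought": True}
            if entry.get("tool_call"):
                yield {
                    "role": "tool",
                    "content": entry["tool_call"]["tool_arguments"],
                    "thought": True,
                    "thought_metadata": {
                        "tool_name": entry["tool_call"]["tool_name"],
                        "error": False,
                    },
                }
        seen = len(agent.logs)
        time.sleep(0.1)
    thread.join()
```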
src/scripts/logs.json ADDED
@@ -0,0 +1,91 @@
1
+ [
2
+ {
3
+ "system_prompt": "You will be given a task to solve as best you can.\nYou have access to the following tools:\n\n- image_generator: This is a tool that creates an image according to a prompt, which is a text description.\n Takes inputs: {'prompt': {'type': 'text', 'description': \"The image generator prompt. Don't hesitate to add details in the prompt to make the image look better, like 'high-res, photorealistic', etc.\"}}\n\n- final_answer: Provides a final answer to the given problem\n Takes inputs: {'answer': {'type': 'text', 'description': 'The final answer to the problem'}}\n\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.\nThen in the 'Code:' sequence, you shold write the code in simple Python. The code sequence must end with '/End code' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then be available in the 'Observation:' field, for using this information as input for the next step.\n\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\nTask: \"Generate an image of the oldest person in this document.\"\n\nThought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.\nCode:\n```py\nanswer = document_qa(document=document, question=\"Who is the oldest person mentioned?\")\nprint(answer)\n```<end_code>\nObservation: \"The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland.\"\n\nThought: I will now generate an image showcasing the oldest person.\n\nCode:\n```py\nimage = image_generator(\"A portrait of John Doe, a 55-year-old man living in Canada.\")\nfinal_answer(image)\n```<end_code>\n\n---\nTask: \"What is the result of the following operation: 5 + 3 + 1294.678?\"\n\nThought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool\n\nCode:\n```py\nresult = 5 + 3 + 1294.678\nfinal_answer(result)\n```<end_code>\n\n---\nTask: \"Which city has the highest population , Guangzhou or Shanghai?\"\n\nThought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.\nCode:\n```py\npopulation_guangzhou = search(\"Guangzhou population\")\nprint(\"Population Guangzhou:\", population_guangzhou)\npopulation_shanghai = search(\"Shanghai population\")\nprint(\"Population Shanghai:\", population_shanghai)\n```<end_code>\nObservation:\nPopulation Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']\nPopulation Shanghai: '26 million (2019)'\n\nThought: Now I know that Shanghai has the highest population.\nCode:\n```py\nfinal_answer(\"Shanghai\")\n```<end_code>\n\n---\nTask: \"What is the current age of the pope, raised to the power 0.36?\"\n\nThought: I will use the tool `search` to get the age of the pope, then raise it to the power 0.36.\nCode:\n```py\npope_age = search(query=\"current pope age\")\nprint(\"Pope age:\", pope_age)\n```<end_code>\nObservation:\nPope age: \"The pope Francis is currently 85 years old.\"\n\nThought: I know that the pope is 85 
years old. Let's compute the result using python code.\nCode:\n```py\npope_current_age = 85 ** 0.36\nfinal_answer(pope_current_age)\n```<end_code>\n\n\nAbove example were using notional tools that might not exist for you. You only have acces to those tools:\n'image_generator', 'final_answer'\nYou also can perform computations in the python code you generate.\n\nAlways provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.\n\nRemember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.\nPrint results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.\n\nRemember to make sure that variables you use are all defined.\nDO NOT pass the arguments as a dict as in 'answer = ask_search_agent({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = ask_search_agent(query=\"What is the place where James Bond lives?\")'.\n\nNow Begin!\n",
4
+ "task": "Generate me a photo of the car that James bond drove in the latest movie.",
5
+ "agent_memory": [
6
+ {
7
+ "role": "system",
8
+ "content": "You will be given a task to solve as best you can.\nYou have access to the following tools:\n\n- image_generator: This is a tool that creates an image according to a prompt, which is a text description.\n Takes inputs: {'prompt': {'type': 'text', 'description': \"The image generator prompt. Don't hesitate to add details in the prompt to make the image look better, like 'high-res, photorealistic', etc.\"}}\n\n- final_answer: Provides a final answer to the given problem\n Takes inputs: {'answer': {'type': 'text', 'description': 'The final answer to the problem'}}\n\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.\nThen in the 'Code:' sequence, you shold write the code in simple Python. The code sequence must end with '/End code' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then be available in the 'Observation:' field, for using this information as input for the next step.\n\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\nTask: \"Generate an image of the oldest person in this document.\"\n\nThought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.\nCode:\n```py\nanswer = document_qa(document=document, question=\"Who is the oldest person mentioned?\")\nprint(answer)\n```<end_code>\nObservation: \"The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland.\"\n\nThought: I will now generate an image showcasing the oldest person.\n\nCode:\n```py\nimage = image_generator(\"A portrait of John Doe, a 55-year-old man living in Canada.\")\nfinal_answer(image)\n```<end_code>\n\n---\nTask: \"What is the result of the following operation: 5 + 3 + 1294.678?\"\n\nThought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool\n\nCode:\n```py\nresult = 5 + 3 + 1294.678\nfinal_answer(result)\n```<end_code>\n\n---\nTask: \"Which city has the highest population , Guangzhou or Shanghai?\"\n\nThought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.\nCode:\n```py\npopulation_guangzhou = search(\"Guangzhou population\")\nprint(\"Population Guangzhou:\", population_guangzhou)\npopulation_shanghai = search(\"Shanghai population\")\nprint(\"Population Shanghai:\", population_shanghai)\n```<end_code>\nObservation:\nPopulation Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']\nPopulation Shanghai: '26 million (2019)'\n\nThought: Now I know that Shanghai has the highest population.\nCode:\n```py\nfinal_answer(\"Shanghai\")\n```<end_code>\n\n---\nTask: \"What is the current age of the pope, raised to the power 0.36?\"\n\nThought: I will use the tool `search` to get the age of the pope, then raise it to the power 0.36.\nCode:\n```py\npope_age = search(query=\"current pope age\")\nprint(\"Pope age:\", pope_age)\n```<end_code>\nObservation:\nPope age: \"The pope Francis is currently 85 years old.\"\n\nThought: I know that the pope is 85 years 
old. Let's compute the result using python code.\nCode:\n```py\npope_current_age = 85 ** 0.36\nfinal_answer(pope_current_age)\n```<end_code>\n\n\nAbove example were using notional tools that might not exist for you. You only have acces to those tools:\n'image_generator', 'final_answer'\nYou also can perform computations in the python code you generate.\n\nAlways provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.\n\nRemember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.\nPrint results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.\n\nRemember to make sure that variables you use are all defined.\nDO NOT pass the arguments as a dict as in 'answer = ask_search_agent({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = ask_search_agent(query=\"What is the place where James Bond lives?\")'.\n\nNow Begin!\n"
9
+ },
10
+ {
11
+ "role": "user",
12
+ "content": "Task: Generate me a photo of the car that James bond drove in the latest movie."
13
+ }
14
+ ]
15
+ },
16
+ {
17
+ "llm_output": "Thought: To solve this task, I will first use the `image_generator` tool to generate an image of the car driven by James Bond in the latest movie. However, I need to know what car James Bond drove in the latest movie. I don't have any information about the latest James Bond movie, so I will assume that the latest movie is \"No Time to Die\" and James Bond drove an Aston Martin DB5.\n\nCode:\n```py\nprint(\"Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\")\n```<end_code>\n\n",
18
+ "rationale": "Thought: To solve this task, I will first use the `image_generator` tool to generate an image of the car driven by James Bond in the latest movie. However, I need to know what car James Bond drove in the latest movie. I don't have any information about the latest James Bond movie, so I will assume that the latest movie is \"No Time to Die\" and James Bond drove an Aston Martin DB5.\n\n",
19
+ "tool_call": {
20
+ "tool_name": "code interpreter",
21
+ "tool_arguments": "print(\"Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\")"
22
+ },
23
+ "observation": "Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\n",
24
+ "agent_memory": [
25
+ {
26
+ "role": "system",
27
+ "content": "You will be given a task to solve as best you can.\nYou have access to the following tools:\n\n- image_generator: This is a tool that creates an image according to a prompt, which is a text description.\n Takes inputs: {'prompt': {'type': 'text', 'description': \"The image generator prompt. Don't hesitate to add details in the prompt to make the image look better, like 'high-res, photorealistic', etc.\"}}\n\n- final_answer: Provides a final answer to the given problem\n Takes inputs: {'answer': {'type': 'text', 'description': 'The final answer to the problem'}}\n\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.\nThen in the 'Code:' sequence, you shold write the code in simple Python. The code sequence must end with '/End code' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then be available in the 'Observation:' field, for using this information as input for the next step.\n\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\nTask: \"Generate an image of the oldest person in this document.\"\n\nThought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.\nCode:\n```py\nanswer = document_qa(document=document, question=\"Who is the oldest person mentioned?\")\nprint(answer)\n```<end_code>\nObservation: \"The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland.\"\n\nThought: I will now generate an image showcasing the oldest person.\n\nCode:\n```py\nimage = image_generator(\"A portrait of John Doe, a 55-year-old man living in Canada.\")\nfinal_answer(image)\n```<end_code>\n\n---\nTask: \"What is the result of the following operation: 5 + 3 + 1294.678?\"\n\nThought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool\n\nCode:\n```py\nresult = 5 + 3 + 1294.678\nfinal_answer(result)\n```<end_code>\n\n---\nTask: \"Which city has the highest population , Guangzhou or Shanghai?\"\n\nThought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.\nCode:\n```py\npopulation_guangzhou = search(\"Guangzhou population\")\nprint(\"Population Guangzhou:\", population_guangzhou)\npopulation_shanghai = search(\"Shanghai population\")\nprint(\"Population Shanghai:\", population_shanghai)\n```<end_code>\nObservation:\nPopulation Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']\nPopulation Shanghai: '26 million (2019)'\n\nThought: Now I know that Shanghai has the highest population.\nCode:\n```py\nfinal_answer(\"Shanghai\")\n```<end_code>\n\n---\nTask: \"What is the current age of the pope, raised to the power 0.36?\"\n\nThought: I will use the tool `search` to get the age of the pope, then raise it to the power 0.36.\nCode:\n```py\npope_age = search(query=\"current pope age\")\nprint(\"Pope age:\", pope_age)\n```<end_code>\nObservation:\nPope age: \"The pope Francis is currently 85 years old.\"\n\nThought: I know that the pope is 85 years 
old. Let's compute the result using python code.\nCode:\n```py\npope_current_age = 85 ** 0.36\nfinal_answer(pope_current_age)\n```<end_code>\n\n\nAbove example were using notional tools that might not exist for you. You only have acces to those tools:\n'image_generator', 'final_answer'\nYou also can perform computations in the python code you generate.\n\nAlways provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.\n\nRemember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.\nPrint results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.\n\nRemember to make sure that variables you use are all defined.\nDO NOT pass the arguments as a dict as in 'answer = ask_search_agent({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = ask_search_agent(query=\"What is the place where James Bond lives?\")'.\n\nNow Begin!\n"
28
+ },
29
+ {
30
+ "role": "user",
31
+ "content": "Task: Generate me a photo of the car that James bond drove in the latest movie."
32
+ },
33
+ {
34
+ "role": "assistant",
35
+ "content": "Thought: To solve this task, I will first use the `image_generator` tool to generate an image of the car driven by James Bond in the latest movie. However, I need to know what car James Bond drove in the latest movie. I don't have any information about the latest James Bond movie, so I will assume that the latest movie is \"No Time to Die\" and James Bond drove an Aston Martin DB5.\n\nCode:\n```py\nprint(\"Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\")\n```<end_code>\n\n\n"
36
+ },
37
+ {
38
+ "role": "tool-response",
39
+ "content": "Observation: Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\n"
40
+ }
41
+ ]
42
+ },
43
+ {
44
+ "llm_output": "Thought: Now that I have the information about the car, I can use the `image_generator` tool to generate an image of the Aston Martin DB5.\n\nCode:\n```py\nimage = image_generator(prompt=\"A high-res, photorealistic image of the Aston Martin DB5 driven by James Bond in No Time to Die.\")\nprint(\"Image generated.\")\n```",
45
+ "rationale": "Thought: Now that I have the information about the car, I can use the `image_generator` tool to generate an image of the Aston Martin DB5.\n\n",
46
+ "tool_call": {
47
+ "tool_name": "code interpreter",
48
+ "tool_arguments": "image = image_generator(prompt=\"A high-res, photorealistic image of the Aston Martin DB5 driven by James Bond in No Time to Die.\")\nprint(\"Image generated.\")"
49
+ },
50
+ "observation": "Image generated.\n",
51
+ "agent_memory": [
52
+ {
53
+ "role": "system",
54
+ "content": "You will be given a task to solve as best you can.\nYou have access to the following tools:\n\n- image_generator: This is a tool that creates an image according to a prompt, which is a text description.\n Takes inputs: {'prompt': {'type': 'text', 'description': \"The image generator prompt. Don't hesitate to add details in the prompt to make the image look better, like 'high-res, photorealistic', etc.\"}}\n\n- final_answer: Provides a final answer to the given problem\n Takes inputs: {'answer': {'type': 'text', 'description': 'The final answer to the problem'}}\n\nTo solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.\n\nAt each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task, then the tools that you want to use.\nThen in the 'Code:' sequence, you shold write the code in simple Python. The code sequence must end with '/End code' sequence.\nDuring each intermediate step, you can use 'print()' to save whatever important information you will then need.\nThese print outputs will then be available in the 'Observation:' field, for using this information as input for the next step.\n\nIn the end you have to return a final answer using the `final_answer` tool.\n\nHere are a few examples using notional tools:\n---\nTask: \"Generate an image of the oldest person in this document.\"\n\nThought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.\nCode:\n```py\nanswer = document_qa(document=document, question=\"Who is the oldest person mentioned?\")\nprint(answer)\n```<end_code>\nObservation: \"The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland.\"\n\nThought: I will now generate an image showcasing the oldest person.\n\nCode:\n```py\nimage = image_generator(\"A portrait of John Doe, a 55-year-old man living in Canada.\")\nfinal_answer(image)\n```<end_code>\n\n---\nTask: \"What is the result of the following operation: 5 + 3 + 1294.678?\"\n\nThought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool\n\nCode:\n```py\nresult = 5 + 3 + 1294.678\nfinal_answer(result)\n```<end_code>\n\n---\nTask: \"Which city has the highest population , Guangzhou or Shanghai?\"\n\nThought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.\nCode:\n```py\npopulation_guangzhou = search(\"Guangzhou population\")\nprint(\"Population Guangzhou:\", population_guangzhou)\npopulation_shanghai = search(\"Shanghai population\")\nprint(\"Population Shanghai:\", population_shanghai)\n```<end_code>\nObservation:\nPopulation Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']\nPopulation Shanghai: '26 million (2019)'\n\nThought: Now I know that Shanghai has the highest population.\nCode:\n```py\nfinal_answer(\"Shanghai\")\n```<end_code>\n\n---\nTask: \"What is the current age of the pope, raised to the power 0.36?\"\n\nThought: I will use the tool `search` to get the age of the pope, then raise it to the power 0.36.\nCode:\n```py\npope_age = search(query=\"current pope age\")\nprint(\"Pope age:\", pope_age)\n```<end_code>\nObservation:\nPope age: \"The pope Francis is currently 85 years old.\"\n\nThought: I know that the pope is 85 years 
old. Let's compute the result using python code.\nCode:\n```py\npope_current_age = 85 ** 0.36\nfinal_answer(pope_current_age)\n```<end_code>\n\n\nAbove example were using notional tools that might not exist for you. You only have acces to those tools:\n'image_generator', 'final_answer'\nYou also can perform computations in the python code you generate.\n\nAlways provide a 'Thought:' and a 'Code:\n```py' sequence ending with '```<end_code>' sequence. You MUST provide at least the 'Code:' sequence to move forward.\n\nRemember to not perform too many operations in a single code block! You should split the task into intermediate code blocks.\nPrint results at the end of each step to save the intermediate results. Then use final_answer() to return the final result.\n\nRemember to make sure that variables you use are all defined.\nDO NOT pass the arguments as a dict as in 'answer = ask_search_agent({'query': \"What is the place where James Bond lives?\"})', but use the arguments directly as in 'answer = ask_search_agent(query=\"What is the place where James Bond lives?\")'.\n\nNow Begin!\n"
55
+ },
56
+ {
57
+ "role": "user",
58
+ "content": "Task: Generate me a photo of the car that James bond drove in the latest movie."
59
+ },
60
+ {
61
+ "role": "assistant",
62
+ "content": "Thought: To solve this task, I will first use the `image_generator` tool to generate an image of the car driven by James Bond in the latest movie. However, I need to know what car James Bond drove in the latest movie. I don't have any information about the latest James Bond movie, so I will assume that the latest movie is \"No Time to Die\" and James Bond drove an Aston Martin DB5.\n\nCode:\n```py\nprint(\"Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\")\n```<end_code>\n\n\n"
63
+ },
64
+ {
65
+ "role": "tool-response",
66
+ "content": "Observation: Assuming the latest James Bond movie is 'No Time to Die' and James Bond drove an Aston Martin DB5.\n"
67
+ },
68
+ {
69
+ "role": "assistant",
70
+ "content": "Thought: Now that I have the information about the car, I can use the `image_generator` tool to generate an image of the Aston Martin DB5.\n\nCode:\n```py\nimage = image_generator(prompt=\"A high-res, photorealistic image of the Aston Martin DB5 driven by James Bond in No Time to Die.\")\nprint(\"Image generated.\")\n```\n"
71
+ },
72
+ {
73
+ "role": "tool-response",
74
+ "content": "Observation: Image generated.\n"
75
+ },
76
+ {
77
+ "role": "user",
78
+ "content": "Reminder: you are working towards solving the following task: Generate me a photo of the car that James bond drove in the latest movie.\nHere is a summary of your past tool calls and their results:\nStep 1\nStep 2\nTool call:{'tool_name': 'code interpreter', 'tool_arguments': 'print(\"Assuming the latest James Bond movie is \\'No Time to Die\\' and James Bond drove an Aston Martin DB5.\")'}"
79
+ }
80
+ ]
81
+ },
82
+ {
83
+ "llm_output": "Thought: I have generated the image of the Aston Martin DB5 using the `image_generator` tool. Now, I need to return this image as the final answer to the task.\n\nCode:\n```py\nfinal_answer(image)\n```",
84
+ "rationale": "Thought: I have generated the image of the Aston Martin DB5 using the `image_generator` tool. Now, I need to return this image as the final answer to the task.\n\n",
85
+ "tool_call": {
86
+ "tool_name": "code interpreter",
87
+ "tool_arguments": "final_answer(image)"
88
+ },
89
+ "observation": ""
90
+ }
91
+ ]
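
The dump above makes the log structure concrete: the first record stores the system prompt and the task, while each later record is one reasoning step with `llm_output`, `rationale`, `tool_call` (holding `tool_name` and `tool_arguments`), `observation`, and the accumulated `agent_memory`. A small sketch that walks such a dump offline (the file path matches where this dump is committed; the printed summary format is arbitrary):

```python
import json

# Path as committed in this repo; adjust if the dump lives elsewhere.
with open("src/scripts/logs.json") as f:
    logs = json.load(f)

# logs[0] holds the system prompt and the task; logs[1:] are reasoning steps.
print("task:", logs[0]["task"])
for i, step in enumerate(logs[1:], start=1):
    call = step.get("tool_call", {})
    observation = step.get("observation", "").strip()
    print(f"step {i}: tool={call.get('tool_name')} args={call.get('tool_arguments', '')[:60]!r}")
    print(f"         observation={observation[:60]!r}")
```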