Draichi committed on
Commit c7963a5
1 Parent(s): 9b29612

chore: move briefing-generator to project root

README.md CHANGED
@@ -17,7 +17,7 @@ short_description: Chat with the telemetry databases from Formula 1 races
 
  FastF1 Predictions is an innovative project aimed at predicting lap times in Formula 1 using telemetry data. Accurate lap time predictions are crucial in Formula 1 as they can influence strategic decisions during races, qualifying, and practice sessions. By leveraging advanced machine learning algorithms, FastF1 Predictions seeks to provide precise and reliable predictions to enhance the competitive edge of teams and drivers.
 
- ![image](./notebooks/images/sector-time.png)
+ ![image](./assets/sector-time.png)
 
  ## Purpose
 
@@ -28,7 +28,7 @@ The primary goal of FastF1 Predictions is to utilize historical race data and te
  Currently, the project includes:
 
  - **notebooks/**: A directory containing Jupyter notebooks.
- - **[sector-3-time-prediction.ipynb](./notebooks/sector-3-time-prediction.ipynb)**: A demonstration notebook showcasing how to use the algorithms to predict lap times based on historical telemetry data. There's also a [Kaggle notebook](https://www.kaggle.com/code/lucasdraichi/hamilton-lap-time-prediction)
+ - **[sector-3-time-prediction.ipynb](./regression-models/sector-3-time-prediction.ipynb)**: A demonstration notebook showcasing how to use the algorithms to predict lap times based on historical telemetry data. There's also a [Kaggle notebook](https://www.kaggle.com/code/lucasdraichi/hamilton-lap-time-prediction)
 
  Future updates will expand the repository with more resources, including additional notebooks, scripts, and enhanced functionalities.
 
app.py CHANGED
@@ -1,11 +1,13 @@
  import os
  import gradio as gr
  from dotenv import load_dotenv
+ from langchain_openai import ChatOpenAI
  from langchain_community.utilities import SQLDatabase
  from langchain_community.agent_toolkits import SQLDatabaseToolkit
  from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage
  from langgraph.prebuilt import create_react_agent
  from langchain.schema import AIMessage
+ from rich.console import Console
  from langchain_google_genai import ChatGoogleGenerativeAI
  from langchain.agents.agent_toolkits import create_retriever_tool
  from langchain_community.vectorstores import FAISS
@@ -14,12 +16,23 @@ import ast
  from gradio import ChatMessage
  import re
 
+ console = Console(style="chartreuse1 on grey7")
+
  os.environ['LANGCHAIN_PROJECT'] = 'gradio-test'
 
+ # Load environment variables
  load_dotenv()
 
+ # # Ensure required environment variables are set
+ # if not os.environ.get("OPENAI_API_KEY"):
+ #     raise EnvironmentError(
+ #         "OPENAI_API_KEY not found in environment variables.")
+
+ # Initialize database connection
  db = SQLDatabase.from_uri("sqlite:///../db/Bahrain_2023_Q.db")
 
+ # Initialize LLM
+ # llm = ChatOpenAI(model="gpt-4-0125-preview")
  llm = ChatGoogleGenerativeAI(
      model="gemini-1.5-flash",
      temperature=0.7,
@@ -92,8 +105,12 @@ async def interact_with_agent(message, history):
      role="assistant", content=msg.content, metadata={"title": f"🛠️ Used tool {msg.name}"}))
      yield history
 
+ console.print(f"\nchunk:")
+ console.print(chunk)
  if "agent" in chunk:
      messages = chunk["agent"]["messages"]
+ console.print(f"\nmessages:")
+ console.print(messages)
  for msg in messages:
      if isinstance(msg, AIMessage):
          if msg.content:
 
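The debug output added above goes through rich's Console rather than plain print. A minimal standalone sketch of that pattern, using an illustrative chunk value rather than real agent output:

```python
# Sketch of the rich-based debug printing introduced in app.py (sample data is illustrative).
from rich.console import Console

# Every print through this console gets the same default style.
console = Console(style="chartreuse1 on grey7")

# rich pretty-prints nested structures such as the chunks streamed by the agent.
sample_chunk = {"agent": {"messages": ["<AIMessage content truncated>"]}}
console.print("\nchunk:")
console.print(sample_chunk)
```

Funnelling the chunk dumps through one Console keeps the styling in a single place and makes them easy to silence later, for example with Console(quiet=True).
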
briefing_generator/README.md DELETED
@@ -1,73 +0,0 @@
- # F1 Text-to-SQL Query Generator
-
- This project demonstrates a text-to-SQL query generator for Formula 1 data using LangChain and OpenAI's language models. It allows users to ask natural language questions about F1 races and drivers, which are then converted into SQL queries and executed against a SQLite database.
-
- ## Features
-
- - Natural language to SQL query conversion
- - SQL query validation and error correction
- - Proper noun lookup for accurate driver names
- - Integration with OpenAI's GPT models
- - SQLite database interaction
- - Retrieval-based tool for driver name suggestions
-
- ## Prerequisites
-
- - Python 3.11 or higher
- - Conda (for managing Python environments)
- - Poetry (for dependency management)
- - OpenAI API key
- - LangChain API key (optional, for LangSmith integration)
-
- ## Installation
-
- 1. Clone the repository: `git clone https://github.com/your-username/f1-text-to-sql.git
- cd f1-text-to-sql `
-
- 2. Create a new Conda environment: `conda create -n f1-text-to-sql python=3.11
- conda activate f1-text-to-sql `
-
- 3. Install Poetry: `curl -sSL https://install.python-poetry.org | python3 - `
-
- 4. Install project dependencies using Poetry: `poetry install .`
-
- 5. Create a `.env` file in the project root and add your API keys: `OPENAI_API_KEY=your_openai_api_key_here
- LANGCHAIN_API_KEY=your_langchain_api_key_here `
-
- ## Usage
-
- 1. Ensure you have the F1 SQLite database file (`Bahrain_2023_Q.db`) in the `db` directory.
-
- 2. Run Jupyter Notebook: `poetry run jupyter notebook `
-
- 3. Open the `langchain_text2SQL.ipynb` notebook and run the cells to interact with the F1 data using natural language queries.
-
- ### Gradio Interface
-
- ```
- poetry run python gradio_sql_agent.py
-
- # dev mode
- pymon gradio_sql_agent.py
- ```
-
- ## Project Structure
-
- `langchain_text2SQL.ipynb`: Main Jupyter notebook containing the text-to-SQL implementation
- `db/Bahrain_2023_Q.db`: SQLite database with F1 race data
- `README.md`: Project documentation (this file)
- `pyproject.toml`: Poetry configuration file for dependency management
- `.env`: Environment file for storing API keys (not tracked in version control)
-
- ## Example Queries
-
- "Who is the fastest driver?"
- "How many fastest laps did Lewis Hamilton achieve?"
-
- ## Contributing
-
- Contributions are welcome! Please feel free to submit a Pull Request.
-
- ## License
-
- This project is licensed under the MIT License - see the LICENSE file for details.
 
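The flow this README describes (natural-language question to SQL, a validation pass, then execution) is implemented in langchain_text2SQL.ipynb, which is quoted as raw notebook JSON further below. Condensed into a short runnable sketch using the same LangChain calls as the notebook; the database path and model name are taken from the notebook and are assumptions about the local setup:

```python
# Condensed sketch of the notebook's text-to-SQL flow: question -> SQL -> validation -> execution.
from langchain.chains import create_sql_query_chain
from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool
from langchain_community.utilities import SQLDatabase
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///../db/Bahrain_2023_Q.db")  # path as used in the notebook
llm = ChatOpenAI(model="gpt-4o-mini")

# 1. Natural-language question -> candidate SQL query.
write_query = create_sql_query_chain(llm, db)

# 2. Validation pass: ask the model to fix common mistakes and return bare SQL only.
system = """Double check the user's {dialect} query for common mistakes.
If there are any mistakes, rewrite the query; otherwise reproduce it unchanged.
Return only the SQL query, with no markdown fences or commentary."""
prompt = ChatPromptTemplate.from_messages(
    [("system", system), ("human", "{query}")]
).partial(dialect=db.dialect)
validation_chain = prompt | llm | StrOutputParser()

# 3. Execute the validated SQL against the SQLite database.
execute_query = QuerySQLDataBaseTool(db=db)

chain = {"query": write_query} | validation_chain | execute_query
print(chain.invoke({"question": "Who is the fastest driver?"}))
```

The two deleted Gradio scripts below wrap the same database behind a LangGraph ReAct agent instead of this fixed chain.
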
briefing_generator/gradio_langchain_agent.py DELETED
@@ -1,183 +0,0 @@
1
- import os
2
- from click import progressbar
3
- import gradio as gr
4
- from dotenv import load_dotenv
5
- from langchain_openai import ChatOpenAI
6
- from langchain_community.utilities import SQLDatabase
7
- from langchain_community.agent_toolkits import SQLDatabaseToolkit
8
- from langchain_core.messages import SystemMessage, HumanMessage, ToolMessage
9
- from langgraph.prebuilt import create_react_agent
10
- from langchain.schema import AIMessage
11
- from rich.console import Console
12
- from langchain_google_genai import ChatGoogleGenerativeAI
13
- from langchain.agents.agent_toolkits import create_retriever_tool
14
- from langchain_community.vectorstores import FAISS
15
- from langchain_openai import OpenAIEmbeddings
16
- import ast
17
- from gradio import ChatMessage
18
- import re
19
-
20
- console = Console(style="chartreuse1 on grey7")
21
-
22
- os.environ['LANGCHAIN_PROJECT'] = 'gradio-test'
23
-
24
- # Load environment variables
25
- load_dotenv()
26
-
27
- # # Ensure required environment variables are set
28
- # if not os.environ.get("OPENAI_API_KEY"):
29
- # raise EnvironmentError(
30
- # "OPENAI_API_KEY not found in environment variables.")
31
-
32
- # Initialize database connection
33
- db = SQLDatabase.from_uri("sqlite:///../db/Bahrain_2023_Q.db")
34
-
35
- # Initialize LLM
36
- # llm = ChatOpenAI(model="gpt-4-0125-preview")
37
- llm = ChatGoogleGenerativeAI(
38
- model="gemini-1.5-flash",
39
- temperature=0.7,
40
- max_tokens=None,
41
- timeout=None,
42
- max_retries=2,
43
- )
44
-
45
- # Create SQL toolkit and tools
46
- toolkit = SQLDatabaseToolkit(db=db, llm=llm)
47
- tools = toolkit.get_tools()
48
-
49
-
50
- def query_as_list(db, query):
51
- res = db.run(query)
52
- res = [el for sub in ast.literal_eval(res) for el in sub if el]
53
- res = [re.sub(r"\b\d+\b", "", string).strip() for string in res]
54
- return list(set(res))
55
-
56
-
57
- drivers = query_as_list(db, "SELECT driver_name FROM Drivers")
58
-
59
- # todo: Use here the queries Views saved in the db as a retriever named "search_telemetry_data"
60
-
61
- vector_db = FAISS.from_texts(drivers, OpenAIEmbeddings())
62
- retriever = vector_db.as_retriever(search_kwargs={"k": 5})
63
- description = """Use to look up values to filter on. Input is an approximate spelling of the proper noun, output is \
64
- valid proper nouns. Use the noun most similar to the search."""
65
-
66
- retriever_tool = create_retriever_tool(
67
- retriever,
68
- name="search_proper_nouns",
69
- description=description,
70
- )
71
- tools.append(retriever_tool)
72
-
73
- # Define system message
74
- system = """You are an agent designed to interact with a SQL database.
75
- Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.
76
- Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.
77
- You can order the results by a relevant column to return the most interesting examples in the database.
78
- Never query for all the columns from a specific table, only ask for the relevant columns given the question.
79
- You have access to tools for interacting with the database.
80
- Only use the given tools. Only use the information returned by the tools to construct your final answer.
81
- You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
82
-
83
- DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
84
-
85
- You have access to the following tables: {table_names}
86
-
87
- If you need to filter on a proper noun, you must ALWAYS first look up the filter value using the "search_proper_nouns" tool!
88
- Do not try to guess at the proper name - use this function to find similar ones.""".format(
89
- table_names=db.get_usable_table_names()
90
- )
91
-
92
-
93
- system_message = SystemMessage(content=system)
94
-
95
- # Create agent
96
- agent = create_react_agent(llm, tools, state_modifier=system_message)
97
-
98
-
99
- def predict(message, _):
100
- response = agent.invoke({"messages": [HumanMessage(content=message)]})
101
- console.print("\nresponse:")
102
- console.print(response)
103
-
104
- for msg in reversed(response["messages"]):
105
- if isinstance(msg, AIMessage):
106
- console.print("\nmsg:")
107
- console.print(msg)
108
- return msg.content
109
-
110
-
111
- async def interact_with_agent(message, history):
112
- history.append(ChatMessage(role="user", content=message))
113
- yield history
114
- async for chunk in agent.astream({"messages": [HumanMessage(content=message)]}):
115
-
116
- if "tools" in chunk:
117
- messages = chunk["tools"]["messages"]
118
- for msg in messages:
119
- if isinstance(msg, ToolMessage):
120
- history.append(ChatMessage(
121
- role="assistant", content=msg.content, metadata={"title": f"🛠️ Used tool {msg.name}"}))
122
- yield history
123
-
124
- console.print(f"\nchunk:")
125
- console.print(chunk)
126
- if "agent" in chunk:
127
- messages = chunk["agent"]["messages"]
128
- console.print(f"\nmessages:")
129
- console.print(messages)
130
- for msg in messages:
131
- if isinstance(msg, AIMessage):
132
- if msg.content:
133
- history.append(ChatMessage(
134
- role="assistant", content=msg.content, metadata={"title": "💬 Assistant"}))
135
- yield history
136
-
137
- theme = gr.themes.Ocean(
138
- # primary_hue="sky",
139
- # secondary_hue="pink",
140
- )
141
-
142
- with gr.Blocks(theme=theme, fill_height=True) as demo:
143
- gr.Markdown("# Formula 1 Briefing Generator")
144
- chatbot = gr.Chatbot(
145
- type="messages",
146
- label="Agent interaction",
147
- avatar_images=(
148
- "https://upload.wikimedia.org/wikipedia/en/c/c6/NeoTheMatrix.jpg",
149
- "https://em-content.zobj.net/source/twitter/141/parrot_1f99c.png",
150
- ),
151
- scale=1,
152
- placeholder="Ask me any question about the 2023 Bahrain Grand Prix",
153
- layout="panel"
154
- )
155
- input = gr.Textbox(
156
- lines=1, label="Ask me any question about the 2023 Bahrain Grand Prix")
157
- input.submit(interact_with_agent, [
158
- input, chatbot], [chatbot])
159
- examples = gr.Examples(examples=[
160
- "How many fastest laps did Verstappen achieve?",
161
- "How many pit stops did Hamilton make?"
162
- ], inputs=input)
163
- btn = gr.Button("Submit", variant="primary")
164
- btn.click(fn=interact_with_agent, inputs=[input, chatbot], outputs=chatbot)
165
- btn.click(lambda x: gr.update(value=''), [], [input])
166
- input.submit(lambda x: gr.update(value=''), [], [input])
167
-
168
-
169
- demo.launch()
170
-
171
-
172
- # gr.ChatInterface(interact_with_agent,
173
- # type="messages",
174
- # chatbot=gr.Chatbot(type="messages",
175
- # value=[{"role": "user", "content": "What is the weather in San Francisco?"},
176
- # {"role": "assistant", "content": "I need to use the weather API tool",
177
- # "metadata": {"title": "🧠 Thinking"}}]
178
- # ),
179
- # # textbox=gr.Textbox(
180
- # # placeholder="Ask me any question about the 2023 Bahrain Grand Prix", container=False, scale=7),
181
- # title="foo bar",
182
- # theme="soft",
183
- # ).launch(debug=True)
 
briefing_generator/gradio_sql_agent.py DELETED
@@ -1,102 +0,0 @@
1
- import os
2
- import gradio as gr
3
- from dotenv import load_dotenv
4
- from langchain_openai import ChatOpenAI
5
- from langchain_community.utilities import SQLDatabase
6
- from langchain_community.agent_toolkits import SQLDatabaseToolkit
7
- from langchain_core.messages import SystemMessage, HumanMessage
8
- from langgraph.prebuilt import create_react_agent
9
- from langchain.schema import AIMessage
10
- from rich.console import Console
11
- from langchain_google_genai import ChatGoogleGenerativeAI
12
-
13
- console = Console(style="chartreuse1 on grey7")
14
-
15
- os.environ['LANGCHAIN_PROJECT'] = 'briefing-generator'
16
-
17
- # Load environment variables
18
- load_dotenv()
19
-
20
- # Ensure required environment variables are set
21
- if not os.environ.get("OPENAI_API_KEY"):
22
- raise EnvironmentError(
23
- "OPENAI_API_KEY not found in environment variables.")
24
-
25
- # Initialize database connection
26
- db = SQLDatabase.from_uri("sqlite:///../db/Bahrain_2023_Q.db")
27
-
28
- # Initialize LLM
29
- # llm = ChatOpenAI(model="gpt-4-0125-preview")
30
- llm = ChatGoogleGenerativeAI(
31
- model="gemini-1.5-flash",
32
- temperature=0.7,
33
- max_tokens=None,
34
- timeout=None,
35
- max_retries=2,
36
- )
37
-
38
- # Create SQL toolkit and tools
39
- toolkit = SQLDatabaseToolkit(db=db, llm=llm)
40
- tools = toolkit.get_tools()
41
-
42
-
43
- def query_as_list(db, query):
44
- res = db.run(query)
45
- res = [el for sub in ast.literal_eval(res) for el in sub if el]
46
- res = [re.sub(r"\b\d+\b", "", string).strip() for string in res]
47
- return list(set(res))
48
-
49
-
50
- drivers = query_as_list(db, "SELECT driver_name FROM Drivers")
51
-
52
- # todo: Use here the queries Views saved in the db as a retriever named "search_telemetry_data"
53
-
54
- vector_db = FAISS.from_texts(drivers, OpenAIEmbeddings())
55
- retriever = vector_db.as_retriever(search_kwargs={"k": 5})
56
- description = """Use to look up values to filter on. Input is an approximate spelling of the proper noun, output is \
57
- valid proper nouns. Use the noun most similar to the search."""
58
-
59
- retriever_tool = create_retriever_tool(
60
- retriever,
61
- name="search_proper_nouns",
62
- description=description,
63
- )
64
- tools.append(retriever_tool)
65
-
66
- # Define system message
67
- system = """You are an agent designed to interact with a SQL database.
68
- Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.
69
- Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.
70
- You can order the results by a relevant column to return the most interesting examples in the database.
71
- Never query for all the columns from a specific table, only ask for the relevant columns given the question.
72
- You have access to tools for interacting with the database.
73
- Only use the below tools. Only use the information returned by the below tools to construct your final answer.
74
- You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.
75
-
76
- DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.
77
-
78
- To start you should ALWAYS look at the tables in the database to see what you can query.
79
- Do NOT skip this step.
80
- Then you should query the schema of the most relevant tables.""".format(
81
- table_names=db.get_usable_table_names()
82
- )
83
-
84
- system_message = SystemMessage(content=system)
85
-
86
- # Create agent
87
- agent = create_react_agent(llm, tools, state_modifier=system_message)
88
-
89
-
90
- def predict(message, _):
91
- response = agent.invoke({"messages": [HumanMessage(content=message)]})
92
- console.print("\nresponse:")
93
- console.print(response)
94
-
95
- for msg in reversed(response["messages"]):
96
- if isinstance(msg, AIMessage):
97
- console.print("\nmsg:")
98
- console.print(msg)
99
- return msg.content
100
-
101
-
102
- gr.ChatInterface(predict, type="messages").launch()
 
briefing_generator/langchain_text2SQL.ipynb DELETED
@@ -1,716 +0,0 @@
1
- {
2
- "cells": [
3
- {
4
- "cell_type": "code",
5
- "execution_count": 2,
6
- "metadata": {},
7
- "outputs": [],
8
- "source": [
9
- "import os\n",
10
- "from dotenv import load_dotenv\n",
11
- "\n",
12
- "load_dotenv()\n",
13
- "\n",
14
- "if not os.environ.get(\"OPENAI_API_KEY\"):\n",
15
- " raise EnvironmentError(\n",
16
- " \"OPENAI_API_KEY not found in environment variables.\")\n",
17
- "\n",
18
- "# Comment out the below to opt-out of using LangSmith in this notebook. Not required.\n",
19
- "os.environ[\"LANGCHAIN_PROJECT\"] = \"briefing_generator\"\n",
20
- "if not os.environ.get(\"LANGCHAIN_API_KEY\"):\n",
21
- " raise EnvironmentError(\n",
22
- " \"LANGCHAIN_API_KEY not found in environment variables.\")"
23
- ]
24
- },
25
- {
26
- "cell_type": "code",
27
- "execution_count": 4,
28
- "metadata": {},
29
- "outputs": [
30
- {
31
- "name": "stdout",
32
- "output_type": "stream",
33
- "text": [
34
- "sqlite\n",
35
- "['Drivers', 'Event', 'Laps', 'Sessions', 'Telemetry', 'Tracks', 'Weather']\n"
36
- ]
37
- },
38
- {
39
- "data": {
40
- "text/plain": [
41
- "\"[(1, 1271, 'VER', 80.0, 7087, 2, 26.0, 0, 8, -89.39, 4453.24, 0.0, 0, '2023-03-04 15:04:00.840000'), (2, 1271, 'VER', 82.0, 7112, 2, 24.0, 0, 8, -87.0, 4498.0, 0.0, 0, '2023-03-04 15:04:01.039000'), (3, 1271, 'VER', 85.0, 7401, 2, 25.0, 0, 8, -85.0, 4564.0, 0.0, 0, '2023-03-04 15:04:01.318000'), (4, 1271, 'VER', 90.0, 7908, 2, 31.0, 0, 8, -83.0, 4608.0, 0.0, 0, '2023-03-04 15:04:01.498000'), (5, 1271, 'VER', 92.0, 8132, 2, 35.0, 0, 8, -81.84, 4636.77, 0.0, 0, '2023-03-04 15:04:01.604000'), (6, 1271, 'VER', 95.0, 8378, 2, 36.0, 0, 8, -80.0, 4681.0, 0.0, 0, '2023-03-04 15:04:01.759000'), (7, 1271, 'VER', 99.0, 8625, 2, 37.0, 0, 8, -78.7, 4704.75, 0.0, 0, '2023-03-04 15:04:01.844000'), (8, 1271, 'VER', 101.0, 8925, 2, 40.0, 0, 8, -77.0, 4736.0, 0.0, 0, '2023-03-04 15:04:01.959000'), (9, 1271, 'VER', 103.0, 9225, 2, 43.0, 0, 8, -75.0, 4793.0, 0.0, 0, '2023-03-04 15:04:02.159000'), (10, 1271, 'VER', 105.0, 9526, 2, 46.0, 0, 8, -74.3, 4810.45, 0.0, 0, '2023-03-04 15:04:02.204000')]\""
42
- ]
43
- },
44
- "execution_count": 4,
45
- "metadata": {},
46
- "output_type": "execute_result"
47
- }
48
- ],
49
- "source": [
50
- "from langchain_community.utilities import SQLDatabase\n",
51
- "\n",
52
- "db = SQLDatabase.from_uri(\"sqlite:///../db/Bahrain_2023_Q.db\")\n",
53
- "print(db.dialect)\n",
54
- "print(db.get_usable_table_names())\n",
55
- "db.run(\"SELECT * FROM Telemetry LIMIT 10;\")"
56
- ]
57
- },
58
- {
59
- "cell_type": "code",
60
- "execution_count": 5,
61
- "metadata": {},
62
- "outputs": [],
63
- "source": [
64
- "from langchain_openai import ChatOpenAI\n",
65
- "\n",
66
- "llm = ChatOpenAI(model=\"gpt-4o-mini\")"
67
- ]
68
- },
69
- {
70
- "cell_type": "code",
71
- "execution_count": 6,
72
- "metadata": {},
73
- "outputs": [
74
- {
75
- "data": {
76
- "text/plain": [
77
- "'```sql\\nSQLQuery: SELECT \"driver_name\", MIN(\"lap_time_in_seconds\") AS \"fastest_lap_time\" FROM \"Laps\" GROUP BY \"driver_name\" ORDER BY \"fastest_lap_time\" LIMIT 1\\n```'"
78
- ]
79
- },
80
- "execution_count": 6,
81
- "metadata": {},
82
- "output_type": "execute_result"
83
- }
84
- ],
85
- "source": [
86
- "from langchain.chains import create_sql_query_chain\n",
87
- "\n",
88
- "chain = create_sql_query_chain(llm, db, )\n",
89
- "response = chain.invoke({\"question\": \"Who is the fastest driver?\"})\n",
90
- "response"
91
- ]
92
- },
93
- {
94
- "cell_type": "markdown",
95
- "metadata": {},
96
- "source": [
97
- "## Fixing SQL errors\n",
98
- "\n",
99
- "We will use a validation chain to fix the SQL errors like this ``sql ... `\n"
100
- ]
101
- },
102
- {
103
- "cell_type": "code",
104
- "execution_count": 7,
105
- "metadata": {},
106
- "outputs": [],
107
- "source": [
108
- "from langchain_core.output_parsers import StrOutputParser\n",
109
- "from langchain_core.prompts import ChatPromptTemplate\n",
110
- "\n",
111
- "system = \"\"\"Double check the user's {dialect} query for common mistakes, including:\n",
112
- "- Only return SQL Query not anything else like ```sql ... ```\n",
113
- "- Using NOT IN with NULL values\n",
114
- "- Using UNION when UNION ALL should have been used\n",
115
- "- Using BETWEEN for exclusive ranges\n",
116
- "- Data type mismatch in predicates\\\n",
117
- "- Using the correct number of arguments for functions\n",
118
- "- Casting to the correct data type\n",
119
- "- Using the proper columns for joins\n",
120
- "\n",
121
- "If there are any of the above mistakes, rewrite the query.\n",
122
- "If there are no mistakes, just reproduce the original query with no further commentary.\n",
123
- "\n",
124
- "Output the final SQL query only.\"\"\"\n",
125
- "prompt = ChatPromptTemplate.from_messages(\n",
126
- " [(\"system\", system), (\"human\", \"{query}\")]\n",
127
- ").partial(dialect=db.dialect)\n",
128
- "validation_chain = prompt | llm | StrOutputParser()\n",
129
- "\n",
130
- "full_chain = {\"query\": chain} | validation_chain"
131
- ]
132
- },
133
- {
134
- "cell_type": "code",
135
- "execution_count": 8,
136
- "metadata": {},
137
- "outputs": [
138
- {
139
- "name": "stdout",
140
- "output_type": "stream",
141
- "text": [
142
- "SELECT \"driver_name\", MIN(\"lap_time_in_seconds\") AS \"fastest_lap_time\"\n",
143
- "FROM \"Laps\"\n",
144
- "GROUP BY \"driver_name\"\n",
145
- "ORDER BY \"fastest_lap_time\" ASC\n",
146
- "LIMIT 1;\n"
147
- ]
148
- }
149
- ],
150
- "source": [
151
- "query = full_chain.invoke(\n",
152
- " {\n",
153
- " \"question\": \"Who is the fastest driver?\"\n",
154
- " }\n",
155
- ")\n",
156
- "print(query)"
157
- ]
158
- },
159
- {
160
- "cell_type": "code",
161
- "execution_count": 9,
162
- "metadata": {},
163
- "outputs": [
164
- {
165
- "data": {
166
- "text/plain": [
167
- "\"[('VER', 89.708)]\""
168
- ]
169
- },
170
- "execution_count": 9,
171
- "metadata": {},
172
- "output_type": "execute_result"
173
- }
174
- ],
175
- "source": [
176
- "db.run(query)"
177
- ]
178
- },
179
- {
180
- "cell_type": "code",
181
- "execution_count": 10,
182
- "metadata": {},
183
- "outputs": [
184
- {
185
- "name": "stdout",
186
- "output_type": "stream",
187
- "text": [
188
- "You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\n",
189
- "Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\n",
190
- "Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
191
- "Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",
192
- "Pay attention to use date('now') function to get the current date, if the question involves \"today\".\n",
193
- "\n",
194
- "Use the following format:\n",
195
- "\n",
196
- "Question: Question here\n",
197
- "SQLQuery: SQL Query to run\n",
198
- "SQLResult: Result of the SQLQuery\n",
199
- "Answer: Final answer here\n",
200
- "\n",
201
- "Only use the following tables:\n",
202
- "\u001b[33;1m\u001b[1;3m{table_info}\u001b[0m\n",
203
- "\n",
204
- "Question: \u001b[33;1m\u001b[1;3m{input}\u001b[0m\n"
205
- ]
206
- }
207
- ],
208
- "source": [
209
- "chain.get_prompts()[0].pretty_print()"
210
- ]
211
- },
212
- {
213
- "cell_type": "code",
214
- "execution_count": 11,
215
- "metadata": {},
216
- "outputs": [
217
- {
218
- "name": "stdout",
219
- "output_type": "stream",
220
- "text": [
221
- "You are a SQLite expert. Given an input question, first create a syntactically correct SQLite query to run, then look at the results of the query and return the answer to the input question.\n",
222
- "Unless the user specifies in the question a specific number of examples to obtain, query for at most 5 results using the LIMIT clause as per SQLite. You can order the results to return the most informative data in the database.\n",
223
- "Never query for all columns from a table. You must query only the columns that are needed to answer the question. Wrap each column name in double quotes (\") to denote them as delimited identifiers.\n",
224
- "Pay attention to use only the column names you can see in the tables below. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.\n",
225
- "Pay attention to use date('now') function to get the current date, if the question involves \"today\".\n",
226
- "\n",
227
- "Use the following format:\n",
228
- "\n",
229
- "Question: Question here\n",
230
- "SQLQuery: SQL Query to run\n",
231
- "SQLResult: Result of the SQLQuery\n",
232
- "Answer: Final answer here\n",
233
- "\n",
234
- "Only use the following tables:\n",
235
- "\u001b[33;1m\u001b[1;3m{table_info}\u001b[0m\n",
236
- "\n",
237
- "Question: \u001b[33;1m\u001b[1;3m{input}\u001b[0m\n"
238
- ]
239
- }
240
- ],
241
- "source": [
242
- "full_chain.get_prompts()[0].pretty_print()"
243
- ]
244
- },
245
- {
246
- "cell_type": "code",
247
- "execution_count": 12,
248
- "metadata": {},
249
- "outputs": [
250
- {
251
- "data": {
252
- "text/plain": [
253
- "\"[('VER', 89.708)]\""
254
- ]
255
- },
256
- "execution_count": 12,
257
- "metadata": {},
258
- "output_type": "execute_result"
259
- }
260
- ],
261
- "source": [
262
- "from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool\n",
263
- "\n",
264
- "execute_query = QuerySQLDataBaseTool(db=db)\n",
265
- "write_query = create_sql_query_chain(llm, db)\n",
266
- "chain = write_query | validation_chain | execute_query\n",
267
- "chain.invoke({\"question\": \"Who is the fastest driver?\"})"
268
- ]
269
- },
270
- {
271
- "cell_type": "markdown",
272
- "metadata": {},
273
- "source": [
274
- "## Answer the question\n"
275
- ]
276
- },
277
- {
278
- "cell_type": "code",
279
- "execution_count": 13,
280
- "metadata": {},
281
- "outputs": [
282
- {
283
- "data": {
284
- "text/plain": [
285
- "'The fastest driver is VER, with a lap time of 89.708 seconds.'"
286
- ]
287
- },
288
- "execution_count": 13,
289
- "metadata": {},
290
- "output_type": "execute_result"
291
- }
292
- ],
293
- "source": [
294
- "from operator import itemgetter\n",
295
- "\n",
296
- "from langchain_core.output_parsers import StrOutputParser\n",
297
- "from langchain_core.prompts import PromptTemplate\n",
298
- "from langchain_core.runnables import RunnablePassthrough\n",
299
- "\n",
300
- "answer_prompt = PromptTemplate.from_template(\n",
301
- " \"\"\"Given the following user question, corresponding SQL query, and SQL result, answer the user question.\n",
302
- "\n",
303
- "Question: {question}\n",
304
- "SQL Query: {query}\n",
305
- "SQL Result: {result}\n",
306
- "Answer: \"\"\"\n",
307
- ")\n",
308
- "\n",
309
- "chain = (\n",
310
- " RunnablePassthrough.assign(query=write_query).assign(\n",
311
- " result=itemgetter(\"query\") | validation_chain | execute_query\n",
312
- " )\n",
313
- " | answer_prompt\n",
314
- " | llm\n",
315
- " | StrOutputParser()\n",
316
- ")\n",
317
- "\n",
318
- "chain.invoke({\"question\": \"Who is the fastest driver?\"})"
319
- ]
320
- },
321
- {
322
- "cell_type": "markdown",
323
- "metadata": {},
324
- "source": [
325
- "## Agents\n"
326
- ]
327
- },
328
- {
329
- "cell_type": "code",
330
- "execution_count": 14,
331
- "metadata": {},
332
- "outputs": [
333
- {
334
- "data": {
335
- "text/plain": [
336
- "[QuerySQLDataBaseTool(description=\"Input to this tool is a detailed and correct SQL query, output is a result from the database. If the query is not correct, an error message will be returned. If an error is returned, rewrite the query, check the query, and try again. If you encounter an issue with Unknown column 'xxxx' in 'field list', use sql_db_schema to query the correct table fields.\", db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x106a960d0>),\n",
337
- " InfoSQLDatabaseTool(description='Input to this tool is a comma-separated list of tables, output is the schema and sample rows for those tables. Be sure that the tables actually exist by calling sql_db_list_tables first! Example Input: table1, table2, table3', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x106a960d0>),\n",
338
- " ListSQLDatabaseTool(db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x106a960d0>),\n",
339
- " QuerySQLCheckerTool(description='Use this tool to double check if your query is correct before executing it. Always use this tool before executing a query with sql_db_query!', db=<langchain_community.utilities.sql_database.SQLDatabase object at 0x106a960d0>, llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x11c49ec90>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x11c4cb410>, root_client=<openai.OpenAI object at 0x11c4cb8d0>, root_async_client=<openai.AsyncOpenAI object at 0x11c47fa90>, model_name='gpt-4o-mini', model_kwargs={}, openai_api_key=SecretStr('**********')), llm_chain=LLMChain(verbose=False, prompt=PromptTemplate(input_variables=['dialect', 'query'], input_types={}, partial_variables={}, template='\\n{query}\\nDouble check the {dialect} query above for common mistakes, including:\\n- Using NOT IN with NULL values\\n- Using UNION when UNION ALL should have been used\\n- Using BETWEEN for exclusive ranges\\n- Data type mismatch in predicates\\n- Properly quoting identifiers\\n- Using the correct number of arguments for functions\\n- Casting to the correct data type\\n- Using the proper columns for joins\\n\\nIf there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.\\n\\nOutput the final SQL query only.\\n\\nSQL Query: '), llm=ChatOpenAI(client=<openai.resources.chat.completions.Completions object at 0x11c49ec90>, async_client=<openai.resources.chat.completions.AsyncCompletions object at 0x11c4cb410>, root_client=<openai.OpenAI object at 0x11c4cb8d0>, root_async_client=<openai.AsyncOpenAI object at 0x11c47fa90>, model_name='gpt-4o-mini', model_kwargs={}, openai_api_key=SecretStr('**********')), output_parser=StrOutputParser(), llm_kwargs={}))]"
340
- ]
341
- },
342
- "execution_count": 14,
343
- "metadata": {},
344
- "output_type": "execute_result"
345
- }
346
- ],
347
- "source": [
348
- "from langchain_community.agent_toolkits import SQLDatabaseToolkit\n",
349
- "\n",
350
- "toolkit = SQLDatabaseToolkit(db=db, llm=llm)\n",
351
- "\n",
352
- "tools = toolkit.get_tools()\n",
353
- "\n",
354
- "tools"
355
- ]
356
- },
357
- {
358
- "cell_type": "markdown",
359
- "metadata": {},
360
- "source": [
361
- "### System Prompt\n"
362
- ]
363
- },
364
- {
365
- "cell_type": "code",
366
- "execution_count": 15,
367
- "metadata": {},
368
- "outputs": [
369
- {
370
- "data": {
371
- "text/plain": [
372
- "SystemMessage(content='You are an agent designed to interact with a SQL database.\\nGiven an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.\\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.\\nYou can order the results by a relevant column to return the most interesting examples in the database.\\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\\nYou have access to tools for interacting with the database.\\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\\n\\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\\n\\nTo start you should ALWAYS look at the tables in the database to see what you can query.\\nDo NOT skip this step.\\nThen you should query the schema of the most relevant tables.', additional_kwargs={}, response_metadata={})"
373
- ]
374
- },
375
- "execution_count": 15,
376
- "metadata": {},
377
- "output_type": "execute_result"
378
- }
379
- ],
380
- "source": [
381
- "from langchain_core.messages import SystemMessage\n",
382
- "\n",
383
- "SQL_PREFIX = \"\"\"You are an agent designed to interact with a SQL database.\n",
384
- "Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.\n",
385
- "Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.\n",
386
- "You can order the results by a relevant column to return the most interesting examples in the database.\n",
387
- "Never query for all the columns from a specific table, only ask for the relevant columns given the question.\n",
388
- "You have access to tools for interacting with the database.\n",
389
- "Only use the below tools. Only use the information returned by the below tools to construct your final answer.\n",
390
- "You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n",
391
- "\n",
392
- "DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n",
393
- "\n",
394
- "To start you should ALWAYS look at the tables in the database to see what you can query.\n",
395
- "Do NOT skip this step.\n",
396
- "Then you should query the schema of the most relevant tables.\"\"\"\n",
397
- "\n",
398
- "system_message = SystemMessage(content=SQL_PREFIX)\n",
399
- "system_message"
400
- ]
401
- },
402
- {
403
- "cell_type": "markdown",
404
- "metadata": {},
405
- "source": [
406
- "We will use a prebuilt LangGraph agent to build our agent\n"
407
- ]
408
- },
409
- {
410
- "cell_type": "code",
411
- "execution_count": 16,
412
- "metadata": {},
413
- "outputs": [
414
- {
415
- "name": "stderr",
416
- "output_type": "stream",
417
- "text": [
418
- "/var/folders/09/6cwl3kq50bv50t8lrvqxmmfc0000gn/T/ipykernel_86930/2283420487.py:4: LangGraphDeprecationWarning: Parameter 'messages_modifier' in function 'create_react_agent' is deprecated as of version 0.1.9 and will be removed in version 0.3.0. Use 'state_modifier' parameter instead.\n",
419
- " agent_executor = create_react_agent(\n"
420
- ]
421
- },
422
- {
423
- "name": "stdout",
424
- "output_type": "stream",
425
- "text": [
426
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_oy9eFT0bKTRTlyByDdybkvDw', 'function': {'arguments': '{}', 'name': 'sql_db_list_tables'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 12, 'prompt_tokens': 547, 'total_tokens': 559, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-85d5fbe0-5a67-49e2-85a2-f6be2dfe5b17-0', tool_calls=[{'name': 'sql_db_list_tables', 'args': {}, 'id': 'call_oy9eFT0bKTRTlyByDdybkvDw', 'type': 'tool_call'}], usage_metadata={'input_tokens': 547, 'output_tokens': 12, 'total_tokens': 559, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
427
- "----\n",
428
- "{'tools': {'messages': [ToolMessage(content='Drivers, Event, Laps, Sessions, Telemetry, Tracks, Weather', name='sql_db_list_tables', id='6595a9fb-d4d6-4ba3-8b1c-e153d5b0daad', tool_call_id='call_oy9eFT0bKTRTlyByDdybkvDw')]}}\n",
429
- "----\n",
430
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_u4GgVmf1WbLtt3Suy4nYTZG3', 'function': {'arguments': '{\"table_names\": \"Drivers\"}', 'name': 'sql_db_schema'}, 'type': 'function'}, {'id': 'call_VPcwJvUKTHLgPcxlKVCgsIsZ', 'function': {'arguments': '{\"table_names\": \"Laps\"}', 'name': 'sql_db_schema'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 49, 'prompt_tokens': 584, 'total_tokens': 633, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-dc918370-84c0-476d-9b59-dc13a4de2439-0', tool_calls=[{'name': 'sql_db_schema', 'args': {'table_names': 'Drivers'}, 'id': 'call_u4GgVmf1WbLtt3Suy4nYTZG3', 'type': 'tool_call'}, {'name': 'sql_db_schema', 'args': {'table_names': 'Laps'}, 'id': 'call_VPcwJvUKTHLgPcxlKVCgsIsZ', 'type': 'tool_call'}], usage_metadata={'input_tokens': 584, 'output_tokens': 49, 'total_tokens': 633, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
431
- "----\n",
432
- "{'tools': {'messages': [ToolMessage(content='\\nCREATE TABLE \"Drivers\" (\\n\\tdriver_id INTEGER, \\n\\tdriver_name TEXT NOT NULL, \\n\\tteam TEXT NOT NULL, \\n\\tPRIMARY KEY (driver_id)\\n)\\n\\n/*\\n3 rows from Drivers table:\\ndriver_id\\tdriver_name\\tteam\\n1\\tMax Verstappen\\tRed Bull Racing\\n2\\tSergio Perez\\tRed Bull Racing\\n3\\tCharles Leclerc\\tFerrari\\n*/', name='sql_db_schema', id='77207876-0f98-4dd8-8ecb-ffd10b51217b', tool_call_id='call_u4GgVmf1WbLtt3Suy4nYTZG3'), ToolMessage(content='\\nCREATE TABLE \"Laps\" (\\n\\tlap_id INTEGER, \\n\\tsession_id INTEGER, \\n\\tdriver_name TEXT NOT NULL, \\n\\tlap_number INTEGER NOT NULL, \\n\\tstint INTEGER, \\n\\tsector_1_speed_trap_in_km REAL, \\n\\tsector_2_speed_trap_in_km REAL, \\n\\tfinish_line_speed_trap_in_km REAL, \\n\\tlongest_strait_speed_trap_in_km REAL, \\n\\tis_personal_best BOOLEAN, \\n\\ttyre_compound TEXT, \\n\\ttyre_life_in_laps INTEGER, \\n\\tis_fresh_tyre BOOLEAN, \\n\\tposition INTEGER, \\n\\tlap_time_in_seconds REAL, \\n\\tsector_1_time_in_seconds REAL, \\n\\tsector_2_time_in_seconds REAL, \\n\\tsector_3_time_in_seconds REAL, \\n\\tlap_start_time_in_datetime DATETIME, \\n\\tpin_in_time_in_datetime DATETIME, \\n\\tpin_out_time_in_datetime DATETIME, \\n\\tPRIMARY KEY (lap_id), \\n\\tFOREIGN KEY(session_id) REFERENCES \"Sessions\" (session_id), \\n\\tUNIQUE (session_id, driver_name, lap_number)\\n)\\n\\n/*\\n3 rows from Laps table:\\nlap_id\\tsession_id\\tdriver_name\\tlap_number\\tstint\\tsector_1_speed_trap_in_km\\tsector_2_speed_trap_in_km\\tfinish_line_speed_trap_in_km\\tlongest_strait_speed_trap_in_km\\tis_personal_best\\ttyre_compound\\ttyre_life_in_laps\\tis_fresh_tyre\\tposition\\tlap_time_in_seconds\\tsector_1_time_in_seconds\\tsector_2_time_in_seconds\\tsector_3_time_in_seconds\\tlap_start_time_in_datetime\\tpin_in_time_in_datetime\\tpin_out_time_in_datetime\\n1\\t1\\tVER\\t1\\tNone\\tNone\\tNone\\tNone\\t178.0\\tFalse\\tSOFT\\t1\\tTrue\\tNone\\tNone\\tNone\\tNone\\tNone\\t2023-03-04 15:04:00.840000\\t2023-03-04 15:06:07.931000\\t2023-03-04 15:04:00.840000\\n2\\t1\\tVER\\t2\\tNone\\tNone\\tNone\\t288.0\\t141.0\\tFalse\\tSOFT\\t2\\tFalse\\tNone\\tNone\\tNone\\t53.666\\t38.509\\t2023-03-04 15:12:21.456000\\tNone\\t2023-03-04 15:12:21.456000\\n3\\t1\\tVER\\t3\\tNone\\tNone\\tNone\\t288.0\\t322.0\\tTrue\\tSOFT\\t3\\tFalse\\tNone\\t91.295\\t29.152\\t39.195\\t22.948\\t2023-03-04 15:14:33.391000\\tNone\\tNone\\n*/', name='sql_db_schema', id='b19bb039-85b0-4893-a826-d2d0288134f6', tool_call_id='call_VPcwJvUKTHLgPcxlKVCgsIsZ')]}}\n",
433
- "----\n",
434
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_5MdS7VtbFs4MrJ5ksZMlF2wN', 'function': {'arguments': '{\"query\":\"SELECT driver_name, MIN(lap_time_in_seconds) AS fastest_lap FROM Laps GROUP BY driver_name ORDER BY fastest_lap LIMIT 5;\"}', 'name': 'sql_db_query_checker'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 46, 'prompt_tokens': 1387, 'total_tokens': 1433, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-09cb8288-2acb-48b2-9ed3-879719cd3f6e-0', tool_calls=[{'name': 'sql_db_query_checker', 'args': {'query': 'SELECT driver_name, MIN(lap_time_in_seconds) AS fastest_lap FROM Laps GROUP BY driver_name ORDER BY fastest_lap LIMIT 5;'}, 'id': 'call_5MdS7VtbFs4MrJ5ksZMlF2wN', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1387, 'output_tokens': 46, 'total_tokens': 1433, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
435
- "----\n",
436
- "{'tools': {'messages': [ToolMessage(content='```sql\\nSELECT driver_name, MIN(lap_time_in_seconds) AS fastest_lap FROM Laps GROUP BY driver_name ORDER BY fastest_lap LIMIT 5;\\n```', name='sql_db_query_checker', id='da986a31-e492-4dee-9dfe-528f5063b97b', tool_call_id='call_5MdS7VtbFs4MrJ5ksZMlF2wN')]}}\n",
437
- "----\n",
438
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_rgS5p53gmkoIkAzHzrV1Hv3o', 'function': {'arguments': '{\"query\":\"SELECT driver_name, MIN(lap_time_in_seconds) AS fastest_lap FROM Laps GROUP BY driver_name ORDER BY fastest_lap LIMIT 5;\"}', 'name': 'sql_db_query'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 45, 'prompt_tokens': 1478, 'total_tokens': 1523, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1280}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-04a34824-f5a6-4623-8078-7954ea60a6f6-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': 'SELECT driver_name, MIN(lap_time_in_seconds) AS fastest_lap FROM Laps GROUP BY driver_name ORDER BY fastest_lap LIMIT 5;'}, 'id': 'call_rgS5p53gmkoIkAzHzrV1Hv3o', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1478, 'output_tokens': 45, 'total_tokens': 1523, 'input_token_details': {'cache_read': 1280}, 'output_token_details': {'reasoning': 0}})]}}\n",
439
- "----\n",
440
- "{'tools': {'messages': [ToolMessage(content=\"[('VER', 89.708), ('PER', 89.846), ('LEC', 90.0), ('SAI', 90.154), ('ALO', 90.336)]\", name='sql_db_query', id='f1e96f9f-540a-49db-b63d-7fdd1b0d3998', tool_call_id='call_rgS5p53gmkoIkAzHzrV1Hv3o')]}}\n",
441
- "----\n",
442
- "{'agent': {'messages': [AIMessage(content='The fastest drivers based on their best lap times are:\\n\\n1. Max Verstappen (VER) - 89.708 seconds\\n2. Sergio Perez (PER) - 89.846 seconds\\n3. Charles Leclerc (LEC) - 90.0 seconds\\n4. Carlos Sainz (SAI) - 90.154 seconds\\n5. Fernando Alonso (ALO) - 90.336 seconds', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 87, 'prompt_tokens': 1575, 'total_tokens': 1662, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1408}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'stop', 'logprobs': None}, id='run-7bff3793-8bc0-4eae-9c76-efc6559efcfc-0', usage_metadata={'input_tokens': 1575, 'output_tokens': 87, 'total_tokens': 1662, 'input_token_details': {'cache_read': 1408}, 'output_token_details': {'reasoning': 0}})]}}\n",
443
- "----\n"
444
- ]
445
- }
446
- ],
447
- "source": [
448
- "from langchain_core.messages import HumanMessage\n",
449
- "from langgraph.prebuilt import create_react_agent\n",
450
- "\n",
451
- "agent_executor = create_react_agent(\n",
452
- " llm, tools, messages_modifier=system_message)\n",
453
- "\n",
454
- "for s in agent_executor.stream(\n",
455
- " {\"messages\": [HumanMessage(content=\"Who is the fastest driver?\")]}\n",
456
- "):\n",
457
- " print(s)\n",
458
- " print(\"----\")"
459
- ]
460
- },
461
- {
462
- "cell_type": "markdown",
463
- "metadata": {},
464
- "source": [
465
- "## Building a retriever tool\n"
466
- ]
467
- },
468
- {
469
- "cell_type": "code",
470
- "execution_count": 17,
471
- "metadata": {},
472
- "outputs": [
473
- {
474
- "data": {
475
- "text/plain": [
476
- "['Lewis Hamilton',\n",
477
- " 'Charles Leclerc',\n",
478
- " 'Lando Norris',\n",
479
- " 'Nico Hulkenberg',\n",
480
- " 'Yuki Tsunoda']"
481
- ]
482
- },
483
- "execution_count": 17,
484
- "metadata": {},
485
- "output_type": "execute_result"
486
- }
487
- ],
488
- "source": [
489
- "import ast\n",
490
- "import re\n",
491
- "\n",
492
- "\n",
493
- "def query_as_list(db, query):\n",
494
- " res = db.run(query)\n",
495
- " res = [el for sub in ast.literal_eval(res) for el in sub if el]\n",
496
- " res = [re.sub(r\"\\b\\d+\\b\", \"\", string).strip() for string in res]\n",
497
- " return list(set(res))\n",
498
- "\n",
499
- "\n",
500
- "drivers = query_as_list(db, \"SELECT driver_name FROM Drivers\")\n",
501
- "drivers[:5]"
502
- ]
503
- },
504
- {
505
- "cell_type": "code",
506
- "execution_count": 18,
507
- "metadata": {},
508
- "outputs": [],
509
- "source": [
510
- "from langchain.agents.agent_toolkits import create_retriever_tool\n",
511
- "from langchain_community.vectorstores import FAISS\n",
512
- "from langchain_openai import OpenAIEmbeddings\n",
513
- "\n",
514
- "vector_db = FAISS.from_texts(drivers, OpenAIEmbeddings())\n",
515
- "retriever = vector_db.as_retriever(search_kwargs={\"k\": 5})\n",
516
- "description = \"\"\"Use to look up values to filter on. Input is an approximate spelling of the proper noun, output is \\\n",
517
- "valid proper nouns. Use the noun most similar to the search.\"\"\"\n",
518
- "retriever_tool = create_retriever_tool(\n",
519
- " retriever,\n",
520
- " name=\"search_proper_nouns\",\n",
521
- " description=description,\n",
522
- ")"
523
- ]
524
- },
525
- {
526
- "cell_type": "markdown",
527
- "metadata": {},
528
- "source": [
529
- "Let's try it out\n"
530
- ]
531
- },
532
- {
533
- "cell_type": "code",
534
- "execution_count": 19,
535
- "metadata": {},
536
- "outputs": [
537
- {
538
- "name": "stdout",
539
- "output_type": "stream",
540
- "text": [
541
- "Oscar Piastri\n",
542
- "\n",
543
- "Fernando Alonso\n",
544
- "\n",
545
- "Pierre Gasly\n",
546
- "\n",
547
- "Sergio Perez\n",
548
- "\n",
549
- "George Russell\n"
550
- ]
551
- }
552
- ],
553
- "source": [
554
- "print(retriever_tool.invoke(\"PIA\"))"
555
- ]
556
- },
557
- {
558
- "cell_type": "markdown",
559
- "metadata": {},
560
- "source": [
561
- "This way, if the agent determines it needs to write a filter based on an artist along the lines of \"PIA\", it can first use the retriever tool to observe relevant values of a column.\n",
562
- "\n",
563
- "Putting this together:\n"
564
- ]
565
- },
566
- {
567
- "cell_type": "code",
568
- "execution_count": 20,
569
- "metadata": {},
570
- "outputs": [
571
- {
572
- "name": "stderr",
573
- "output_type": "stream",
574
- "text": [
575
- "/var/folders/09/6cwl3kq50bv50t8lrvqxmmfc0000gn/T/ipykernel_86930/3211409952.py:23: LangGraphDeprecationWarning: Parameter 'messages_modifier' in function 'create_react_agent' is deprecated as of version 0.1.9 and will be removed in version 0.3.0. Use 'state_modifier' parameter instead.\n",
576
- " agent = create_react_agent(llm, tools, messages_modifier=system_message)\n"
577
- ]
578
- }
579
- ],
580
- "source": [
581
- "system = \"\"\"You are an agent designed to interact with a SQL database.\n",
582
- "Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.\n",
583
- "Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.\n",
584
- "You can order the results by a relevant column to return the most interesting examples in the database.\n",
585
- "Never query for all the columns from a specific table, only ask for the relevant columns given the question.\n",
586
- "You have access to tools for interacting with the database.\n",
587
- "Only use the given tools. Only use the information returned by the tools to construct your final answer.\n",
588
- "You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n",
589
- "\n",
590
- "DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n",
591
- "\n",
592
- "You have access to the following tables: {table_names}\n",
593
- "\n",
594
- "If you need to filter on a proper noun, you must ALWAYS first look up the filter value using the \"search_proper_nouns\" tool!\n",
595
- "Do not try to guess at the proper name - use this function to find similar ones.\"\"\".format(\n",
596
- " table_names=db.get_usable_table_names()\n",
597
- ")\n",
598
- "\n",
599
- "system_message = SystemMessage(content=system)\n",
600
- "\n",
601
- "tools.append(retriever_tool)\n",
602
- "\n",
603
- "agent = create_react_agent(llm, tools, messages_modifier=system_message)"
604
- ]
605
- },
606
- {
607
- "cell_type": "code",
608
- "execution_count": 21,
609
- "metadata": {},
610
- "outputs": [
611
- {
612
- "name": "stdout",
613
- "output_type": "stream",
614
- "text": [
615
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_V2mYVxQ9Tbe8nklNbMCEpyPX', 'function': {'arguments': '{\"query\":\"Lewis Hamilton\"}', 'name': 'search_proper_nouns'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 18, 'prompt_tokens': 649, 'total_tokens': 667, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-d435d5b9-fd52-46ea-bc9a-e25cee3cc02a-0', tool_calls=[{'name': 'search_proper_nouns', 'args': {'query': 'Lewis Hamilton'}, 'id': 'call_V2mYVxQ9Tbe8nklNbMCEpyPX', 'type': 'tool_call'}], usage_metadata={'input_tokens': 649, 'output_tokens': 18, 'total_tokens': 667, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
616
- "----\n",
617
- "{'tools': {'messages': [ToolMessage(content='Lewis Hamilton\\n\\nLando Norris\\n\\nNico Hulkenberg\\n\\nValtteri Bottas\\n\\nFernando Alonso', name='search_proper_nouns', id='e442b4ac-e0b1-49de-aa79-6b920abf7ee7', tool_call_id='call_V2mYVxQ9Tbe8nklNbMCEpyPX')]}}\n",
618
- "----\n",
619
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_vQBxLBhzI0H8uGWMadgizqt2', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = \\'Lewis Hamilton\\' AND FastestLap = 1\"}', 'name': 'sql_db_query_checker'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 40, 'prompt_tokens': 699, 'total_tokens': 739, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-1708a779-f9d3-4dde-b6bf-824cc41dd6f7-0', tool_calls=[{'name': 'sql_db_query_checker', 'args': {'query': \"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = 'Lewis Hamilton' AND FastestLap = 1\"}, 'id': 'call_vQBxLBhzI0H8uGWMadgizqt2', 'type': 'tool_call'}], usage_metadata={'input_tokens': 699, 'output_tokens': 40, 'total_tokens': 739, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
620
- "----\n",
621
- "{'tools': {'messages': [ToolMessage(content=\"```sql\\nSELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = 'Lewis Hamilton' AND FastestLap = 1\\n```\", name='sql_db_query_checker', id='c026375c-7c11-423d-95cf-6ff56b53e53a', tool_call_id='call_vQBxLBhzI0H8uGWMadgizqt2')]}}\n",
622
- "----\n",
623
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_WpFTMmqGVLpwHzxnyGgr2b11', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = \\'Lewis Hamilton\\' AND FastestLap = 1\"}', 'name': 'sql_db_query'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 39, 'prompt_tokens': 779, 'total_tokens': 818, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-74afbb9c-9cb9-4661-a63e-d97cb44e09c5-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': \"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = 'Lewis Hamilton' AND FastestLap = 1\"}, 'id': 'call_WpFTMmqGVLpwHzxnyGgr2b11', 'type': 'tool_call'}], usage_metadata={'input_tokens': 779, 'output_tokens': 39, 'total_tokens': 818, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
624
- "----\n",
625
- "{'tools': {'messages': [ToolMessage(content=\"Error: (sqlite3.OperationalError) no such column: Driver\\n[SQL: SELECT COUNT(*) AS FastestLaps FROM Laps WHERE Driver = 'Lewis Hamilton' AND FastestLap = 1]\\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\", name='sql_db_query', id='9f2ad90b-32de-4f30-ab5d-bbdadfb6e7ac', tool_call_id='call_WpFTMmqGVLpwHzxnyGgr2b11')]}}\n",
626
- "----\n",
627
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_2M65yPLFFCjHsTc2rLUobtt4', 'function': {'arguments': '{\"table_names\":\"Laps\"}', 'name': 'sql_db_schema'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 892, 'total_tokens': 909, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-ab8e7251-4de5-4539-a5ca-e66af17016dc-0', tool_calls=[{'name': 'sql_db_schema', 'args': {'table_names': 'Laps'}, 'id': 'call_2M65yPLFFCjHsTc2rLUobtt4', 'type': 'tool_call'}], usage_metadata={'input_tokens': 892, 'output_tokens': 17, 'total_tokens': 909, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
628
- "----\n",
629
- "{'tools': {'messages': [ToolMessage(content='\\nCREATE TABLE \"Laps\" (\\n\\tlap_id INTEGER, \\n\\tsession_id INTEGER, \\n\\tdriver_name TEXT NOT NULL, \\n\\tlap_number INTEGER NOT NULL, \\n\\tstint INTEGER, \\n\\tsector_1_speed_trap_in_km REAL, \\n\\tsector_2_speed_trap_in_km REAL, \\n\\tfinish_line_speed_trap_in_km REAL, \\n\\tlongest_strait_speed_trap_in_km REAL, \\n\\tis_personal_best BOOLEAN, \\n\\ttyre_compound TEXT, \\n\\ttyre_life_in_laps INTEGER, \\n\\tis_fresh_tyre BOOLEAN, \\n\\tposition INTEGER, \\n\\tlap_time_in_seconds REAL, \\n\\tsector_1_time_in_seconds REAL, \\n\\tsector_2_time_in_seconds REAL, \\n\\tsector_3_time_in_seconds REAL, \\n\\tlap_start_time_in_datetime DATETIME, \\n\\tpin_in_time_in_datetime DATETIME, \\n\\tpin_out_time_in_datetime DATETIME, \\n\\tPRIMARY KEY (lap_id), \\n\\tFOREIGN KEY(session_id) REFERENCES \"Sessions\" (session_id), \\n\\tUNIQUE (session_id, driver_name, lap_number)\\n)\\n\\n/*\\n3 rows from Laps table:\\nlap_id\\tsession_id\\tdriver_name\\tlap_number\\tstint\\tsector_1_speed_trap_in_km\\tsector_2_speed_trap_in_km\\tfinish_line_speed_trap_in_km\\tlongest_strait_speed_trap_in_km\\tis_personal_best\\ttyre_compound\\ttyre_life_in_laps\\tis_fresh_tyre\\tposition\\tlap_time_in_seconds\\tsector_1_time_in_seconds\\tsector_2_time_in_seconds\\tsector_3_time_in_seconds\\tlap_start_time_in_datetime\\tpin_in_time_in_datetime\\tpin_out_time_in_datetime\\n1\\t1\\tVER\\t1\\tNone\\tNone\\tNone\\tNone\\t178.0\\tFalse\\tSOFT\\t1\\tTrue\\tNone\\tNone\\tNone\\tNone\\tNone\\t2023-03-04 15:04:00.840000\\t2023-03-04 15:06:07.931000\\t2023-03-04 15:04:00.840000\\n2\\t1\\tVER\\t2\\tNone\\tNone\\tNone\\t288.0\\t141.0\\tFalse\\tSOFT\\t2\\tFalse\\tNone\\tNone\\tNone\\t53.666\\t38.509\\t2023-03-04 15:12:21.456000\\tNone\\t2023-03-04 15:12:21.456000\\n3\\t1\\tVER\\t3\\tNone\\tNone\\tNone\\t288.0\\t322.0\\tTrue\\tSOFT\\t3\\tFalse\\tNone\\t91.295\\t29.152\\t39.195\\t22.948\\t2023-03-04 15:14:33.391000\\tNone\\tNone\\n*/', name='sql_db_schema', id='58fa77cf-9ca2-44ce-a6b4-7c64c258de10', tool_call_id='call_2M65yPLFFCjHsTc2rLUobtt4')]}}\n",
630
- "----\n",
631
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_aYYhIdyMvFlatT5NmwZFz9a4', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE driver_name = \\'Lewis Hamilton\\' AND is_personal_best = 1\"}', 'name': 'sql_db_query_checker'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 42, 'prompt_tokens': 1514, 'total_tokens': 1556, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-7be6718b-adb2-443f-9913-cba293c2a94c-0', tool_calls=[{'name': 'sql_db_query_checker', 'args': {'query': \"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE driver_name = 'Lewis Hamilton' AND is_personal_best = 1\"}, 'id': 'call_aYYhIdyMvFlatT5NmwZFz9a4', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1514, 'output_tokens': 42, 'total_tokens': 1556, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}})]}}\n",
632
- "----\n",
633
- "{'tools': {'messages': [ToolMessage(content=\"```sql\\nSELECT COUNT(*) AS FastestLaps FROM Laps WHERE driver_name = 'Lewis Hamilton' AND is_personal_best = 1\\n```\", name='sql_db_query_checker', id='ee92b264-a20a-4276-b93d-0eb88da8d647', tool_call_id='call_aYYhIdyMvFlatT5NmwZFz9a4')]}}\n",
634
- "----\n",
635
- "{'agent': {'messages': [AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_Tp5C2y8jZLjLAZoH2qTmwrTH', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE driver_name = \\'Lewis Hamilton\\' AND is_personal_best = 1\"}', 'name': 'sql_db_query'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 41, 'prompt_tokens': 1598, 'total_tokens': 1639, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1408}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-e7a82348-e88d-466e-9e6d-7e2ef17297de-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': \"SELECT COUNT(*) AS FastestLaps FROM Laps WHERE driver_name = 'Lewis Hamilton' AND is_personal_best = 1\"}, 'id': 'call_Tp5C2y8jZLjLAZoH2qTmwrTH', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1598, 'output_tokens': 41, 'total_tokens': 1639, 'input_token_details': {'cache_read': 1408}, 'output_token_details': {'reasoning': 0}})]}}\n",
636
- "----\n",
637
- "{'tools': {'messages': [ToolMessage(content='[(0,)]', name='sql_db_query', id='3748a9fc-1225-43ab-8c7d-a2ff46a37c98', tool_call_id='call_Tp5C2y8jZLjLAZoH2qTmwrTH')]}}\n",
638
- "----\n",
639
- "{'agent': {'messages': [AIMessage(content='Lewis Hamilton has achieved 0 fastest laps.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 10, 'prompt_tokens': 1652, 'total_tokens': 1662, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1536}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'stop', 'logprobs': None}, id='run-91be8556-79ff-4d3f-8128-d912462b21f0-0', usage_metadata={'input_tokens': 1652, 'output_tokens': 10, 'total_tokens': 1662, 'input_token_details': {'cache_read': 1536}, 'output_token_details': {'reasoning': 0}})]}}\n",
640
- "----\n"
641
- ]
642
- }
643
- ],
644
- "source": [
645
- "for s in agent.stream(\n",
646
- " {\"messages\": [HumanMessage(\n",
647
- " content=\"How many fastest laps did Lewis Hamilton?\")]}\n",
648
- "):\n",
649
- " print(s)\n",
650
- " print(\"----\")"
651
- ]
652
- },
653
- {
654
- "cell_type": "code",
655
- "execution_count": 22,
656
- "metadata": {},
657
- "outputs": [
658
- {
659
- "name": "stderr",
660
- "output_type": "stream",
661
- "text": [
662
- "Failed to multipart ingest runs: langsmith.utils.LangSmithError: Failed to POST https://api.smith.langchain.com/runs/multipart in LangSmith API. HTTPError('400 Client Error: Bad Request for url: https://api.smith.langchain.com/runs/multipart', '{\"detail\":\"Empty request\"}')\n"
663
- ]
664
- },
665
- {
666
- "data": {
667
- "text/plain": [
668
- "{'messages': [HumanMessage(content='How many fastest laps did Verstappen achieve?', additional_kwargs={}, response_metadata={}, id='8ccf219e-5657-4b54-8ef1-a0652669dc5a'),\n",
669
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_O6knt1bzCUHBEOi3kBB819FI', 'function': {'arguments': '{\"query\":\"Verstappen\"}', 'name': 'search_proper_nouns'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 19, 'prompt_tokens': 650, 'total_tokens': 669, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-2d3a3fb6-a33d-4eb0-82a6-fbb702a61bd4-0', tool_calls=[{'name': 'search_proper_nouns', 'args': {'query': 'Verstappen'}, 'id': 'call_O6knt1bzCUHBEOi3kBB819FI', 'type': 'tool_call'}], usage_metadata={'input_tokens': 650, 'output_tokens': 19, 'total_tokens': 669, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),\n",
670
- " ToolMessage(content='Max Verstappen\\n\\nNico Hulkenberg\\n\\nLewis Hamilton\\n\\nLance Stroll\\n\\nValtteri Bottas', name='search_proper_nouns', id='94669590-6e71-48d6-b2e1-674593107eac', tool_call_id='call_O6knt1bzCUHBEOi3kBB819FI'),\n",
671
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_xs6oJjmQm0nrkEfsHUGlq0l9', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = \\'Max Verstappen\\' AND is_fastest_lap = 1;\"}', 'name': 'sql_db_query_checker'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 45, 'prompt_tokens': 703, 'total_tokens': 748, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-4ef76281-738f-478a-a41a-af9710e961be-0', tool_calls=[{'name': 'sql_db_query_checker', 'args': {'query': \"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'Max Verstappen' AND is_fastest_lap = 1;\"}, 'id': 'call_xs6oJjmQm0nrkEfsHUGlq0l9', 'type': 'tool_call'}], usage_metadata={'input_tokens': 703, 'output_tokens': 45, 'total_tokens': 748, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),\n",
672
- " ToolMessage(content=\"```sql\\nSELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'Max Verstappen' AND is_fastest_lap = 1;\\n```\", name='sql_db_query_checker', id='4b1ac6b7-c831-4e13-9459-a5a8d6218f04', tool_call_id='call_xs6oJjmQm0nrkEfsHUGlq0l9'),\n",
673
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_958H9JRomjsCZ3UWWM80VBQR', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = \\'Max Verstappen\\' AND is_fastest_lap = 1;\"}', 'name': 'sql_db_query'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 44, 'prompt_tokens': 792, 'total_tokens': 836, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-0b3546dc-01be-4062-b8b1-086d9e3e26bd-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': \"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'Max Verstappen' AND is_fastest_lap = 1;\"}, 'id': 'call_958H9JRomjsCZ3UWWM80VBQR', 'type': 'tool_call'}], usage_metadata={'input_tokens': 792, 'output_tokens': 44, 'total_tokens': 836, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),\n",
674
- " ToolMessage(content=\"Error: (sqlite3.OperationalError) no such column: is_fastest_lap\\n[SQL: SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'Max Verstappen' AND is_fastest_lap = 1;]\\n(Background on this error at: https://sqlalche.me/e/20/e3q8)\", name='sql_db_query', id='54cb39eb-bac0-4a43-b991-b2b422817e7a', tool_call_id='call_958H9JRomjsCZ3UWWM80VBQR'),\n",
675
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_L2G2iHs2GkXm9xpbRjsXFt4N', 'function': {'arguments': '{\"table_names\":\"Laps\"}', 'name': 'sql_db_schema'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 919, 'total_tokens': 936, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-c802e2d9-a552-4521-8859-44364fa9ef95-0', tool_calls=[{'name': 'sql_db_schema', 'args': {'table_names': 'Laps'}, 'id': 'call_L2G2iHs2GkXm9xpbRjsXFt4N', 'type': 'tool_call'}], usage_metadata={'input_tokens': 919, 'output_tokens': 17, 'total_tokens': 936, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),\n",
676
- " ToolMessage(content='\\nCREATE TABLE \"Laps\" (\\n\\tlap_id INTEGER, \\n\\tsession_id INTEGER, \\n\\tdriver_name TEXT NOT NULL, \\n\\tlap_number INTEGER NOT NULL, \\n\\tstint INTEGER, \\n\\tsector_1_speed_trap_in_km REAL, \\n\\tsector_2_speed_trap_in_km REAL, \\n\\tfinish_line_speed_trap_in_km REAL, \\n\\tlongest_strait_speed_trap_in_km REAL, \\n\\tis_personal_best BOOLEAN, \\n\\ttyre_compound TEXT, \\n\\ttyre_life_in_laps INTEGER, \\n\\tis_fresh_tyre BOOLEAN, \\n\\tposition INTEGER, \\n\\tlap_time_in_seconds REAL, \\n\\tsector_1_time_in_seconds REAL, \\n\\tsector_2_time_in_seconds REAL, \\n\\tsector_3_time_in_seconds REAL, \\n\\tlap_start_time_in_datetime DATETIME, \\n\\tpin_in_time_in_datetime DATETIME, \\n\\tpin_out_time_in_datetime DATETIME, \\n\\tPRIMARY KEY (lap_id), \\n\\tFOREIGN KEY(session_id) REFERENCES \"Sessions\" (session_id), \\n\\tUNIQUE (session_id, driver_name, lap_number)\\n)\\n\\n/*\\n3 rows from Laps table:\\nlap_id\\tsession_id\\tdriver_name\\tlap_number\\tstint\\tsector_1_speed_trap_in_km\\tsector_2_speed_trap_in_km\\tfinish_line_speed_trap_in_km\\tlongest_strait_speed_trap_in_km\\tis_personal_best\\ttyre_compound\\ttyre_life_in_laps\\tis_fresh_tyre\\tposition\\tlap_time_in_seconds\\tsector_1_time_in_seconds\\tsector_2_time_in_seconds\\tsector_3_time_in_seconds\\tlap_start_time_in_datetime\\tpin_in_time_in_datetime\\tpin_out_time_in_datetime\\n1\\t1\\tVER\\t1\\tNone\\tNone\\tNone\\tNone\\t178.0\\tFalse\\tSOFT\\t1\\tTrue\\tNone\\tNone\\tNone\\tNone\\tNone\\t2023-03-04 15:04:00.840000\\t2023-03-04 15:06:07.931000\\t2023-03-04 15:04:00.840000\\n2\\t1\\tVER\\t2\\tNone\\tNone\\tNone\\t288.0\\t141.0\\tFalse\\tSOFT\\t2\\tFalse\\tNone\\tNone\\tNone\\t53.666\\t38.509\\t2023-03-04 15:12:21.456000\\tNone\\t2023-03-04 15:12:21.456000\\n3\\t1\\tVER\\t3\\tNone\\tNone\\tNone\\t288.0\\t322.0\\tTrue\\tSOFT\\t3\\tFalse\\tNone\\t91.295\\t29.152\\t39.195\\t22.948\\t2023-03-04 15:14:33.391000\\tNone\\tNone\\n*/', name='sql_db_schema', id='3db63be6-4e5e-4994-817e-414d087edf65', tool_call_id='call_L2G2iHs2GkXm9xpbRjsXFt4N'),\n",
677
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_EIQeVdEYOGNiGCOMEnjzCfAn', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = \\'VER\\' AND is_personal_best = 1;\"}', 'name': 'sql_db_query_checker'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 42, 'prompt_tokens': 1541, 'total_tokens': 1583, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-c912c195-8503-4a0d-9879-4d970aec1170-0', tool_calls=[{'name': 'sql_db_query_checker', 'args': {'query': \"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'VER' AND is_personal_best = 1;\"}, 'id': 'call_EIQeVdEYOGNiGCOMEnjzCfAn', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1541, 'output_tokens': 42, 'total_tokens': 1583, 'input_token_details': {'cache_read': 0}, 'output_token_details': {'reasoning': 0}}),\n",
678
- " ToolMessage(content=\"```sql\\nSELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'VER' AND is_personal_best = 1;\\n```\", name='sql_db_query_checker', id='cde57c67-a90c-43ef-bcc8-d4f6fbc2d96b', tool_call_id='call_EIQeVdEYOGNiGCOMEnjzCfAn'),\n",
679
- " AIMessage(content='', additional_kwargs={'tool_calls': [{'id': 'call_e52OzWNQWFn5ZcVz28Ii2TSG', 'function': {'arguments': '{\"query\":\"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = \\'VER\\' AND is_personal_best = 1;\"}', 'name': 'sql_db_query'}, 'type': 'function'}], 'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 41, 'prompt_tokens': 1624, 'total_tokens': 1665, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1536}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'tool_calls', 'logprobs': None}, id='run-f9e102db-f36a-448d-a0e8-188c7d38f969-0', tool_calls=[{'name': 'sql_db_query', 'args': {'query': \"SELECT COUNT(*) as fastest_laps_count FROM Laps WHERE driver_name = 'VER' AND is_personal_best = 1;\"}, 'id': 'call_e52OzWNQWFn5ZcVz28Ii2TSG', 'type': 'tool_call'}], usage_metadata={'input_tokens': 1624, 'output_tokens': 41, 'total_tokens': 1665, 'input_token_details': {'cache_read': 1536}, 'output_token_details': {'reasoning': 0}}),\n",
680
- " ToolMessage(content='[(24,)]', name='sql_db_query', id='470aa971-0d39-4026-9e8e-e4dc5d219051', tool_call_id='call_e52OzWNQWFn5ZcVz28Ii2TSG'),\n",
681
- " AIMessage(content='Max Verstappen achieved a total of 24 fastest laps.', additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 1678, 'total_tokens': 1691, 'completion_tokens_details': {'audio_tokens': None, 'reasoning_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': None, 'cached_tokens': 1536}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_482c22a7bc', 'finish_reason': 'stop', 'logprobs': None}, id='run-99c62bec-1d38-4ea2-ba44-67ffdc425335-0', usage_metadata={'input_tokens': 1678, 'output_tokens': 13, 'total_tokens': 1691, 'input_token_details': {'cache_read': 1536}, 'output_token_details': {'reasoning': 0}})]}"
682
- ]
683
- },
684
- "execution_count": 22,
685
- "metadata": {},
686
- "output_type": "execute_result"
687
- }
688
- ],
689
- "source": [
690
- "agent.invoke({\"messages\": [HumanMessage(\n",
691
- " content=\"How many fastest laps did Verstappen achieve?\")]})"
692
- ]
693
- }
694
- ],
695
- "metadata": {
696
- "kernelspec": {
697
- "display_name": "Python 3 (ipykernel)",
698
- "language": "python",
699
- "name": "python3"
700
- },
701
- "language_info": {
702
- "codemirror_mode": {
703
- "name": "ipython",
704
- "version": 3
705
- },
706
- "file_extension": ".py",
707
- "mimetype": "text/x-python",
708
- "name": "python",
709
- "nbconvert_exporter": "python",
710
- "pygments_lexer": "ipython3",
711
- "version": "3.11.10"
712
- }
713
- },
714
- "nbformat": 4,
715
- "nbformat_minor": 2
716
- }

briefing_generator/poetry.lock DELETED
The diff for this file is too large to render. See raw diff
 
briefing_generator/pyproject.toml DELETED
@@ -1,27 +0,0 @@
1
- [tool.poetry]
2
- name = "briefing-generator"
3
- version = "0.1.0"
4
- description = "Using LLMs to analyse and generate race reports"
5
- authors = ["Lucas Draichi <lucasdraichi@gmail.com>"]
6
- license = "MIT"
7
- readme = "README.md"
8
- package-mode = false
9
-
10
- [tool.poetry.dependencies]
11
- python = "^3.11"
12
- langchain = "^0.3.4"
13
- langgraph = "^0.2.39"
14
- langchain-community = "^0.3.3"
15
- langchain-openai = "^0.2.3"
16
- faiss-cpu = "^1.9.0"
17
- gradio = "^5.3.0"
18
- langchain-google-genai = "^2.0.1"
19
-
20
- [tool.poetry.group.dev.dependencies]
21
- ipykernel = "^6.29.5"
22
- notebook = "^7.2.2"
23
- py-mon = "^2.0.5"
24
-
25
- [build-system]
26
- requires = ["poetry-core"]
27
- build-backend = "poetry.core.masonry.api"

poetry.lock CHANGED
The diff for this file is too large to render. See raw diff
 
pyproject.toml CHANGED
@@ -1,7 +1,7 @@
1
  [tool.poetry]
2
- name = "fastf1-predictions"
3
  version = "0.1.0"
4
- description = "Using linear algebra (AI) with formula 1 data"
5
  authors = ["Lucas Draichi <lucasdraichi@gmail.com>"]
6
  license = "MIT"
7
  readme = "README.md"
@@ -9,15 +9,18 @@ package-mode = false
9
 
10
  [tool.poetry.dependencies]
11
  python = "^3.11"
12
- llama-index = "^0.10.58"
13
- rich = "^13.7.1"
14
- fastf1 = "^3.4.0"
15
- seaborn = "^0.13.2"
16
- plotly = "^5.23.0"
17
- pandas = "^2.2.2"
18
- huggingface-hub = {extras = ["inference"], version = "^0.24.3"}
19
- python-dotenv = "^1.0.1"
20
 
 
 
 
 
21
 
22
  [build-system]
23
  requires = ["poetry-core"]
 
1
  [tool.poetry]
2
+ name = "briefing-generator"
3
  version = "0.1.0"
4
+ description = "Using LLMs to analyse and generate race reports"
5
  authors = ["Lucas Draichi <lucasdraichi@gmail.com>"]
6
  license = "MIT"
7
  readme = "README.md"
 
9
 
10
  [tool.poetry.dependencies]
11
  python = "^3.11"
12
+ langchain = "^0.3.4"
13
+ langgraph = "^0.2.39"
14
+ langchain-community = "^0.3.3"
15
+ langchain-openai = "^0.2.3"
16
+ faiss-cpu = "^1.9.0"
17
+ gradio = "^5.3.0"
18
+ langchain-google-genai = "^2.0.1"
 
19
 
20
+ [tool.poetry.group.dev.dependencies]
21
+ ipykernel = "^6.29.5"
22
+ notebook = "^7.2.2"
23
+ py-mon = "^2.0.5"
24
 
25
  [build-system]
26
  requires = ["poetry-core"]