url | markdown | last_modified
---|---|---|
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/README.md |
# rag-elasticsearch
This template performs RAG using [Elasticsearch](https://python.langchain.com/docs/integrations/vectorstores/elasticsearch).
It relies on the sentence transformer `MiniLM-L6-v2` for embedding passages and questions.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
To connect to your Elasticsearch instance, use the following environment variables:
```bash
export ELASTIC_CLOUD_ID=<CLOUD_ID>
export ELASTIC_USERNAME=<CLOUD_USERNAME>
export ELASTIC_PASSWORD=<CLOUD_PASSWORD>
```
For local development with Docker, use:
```bash
export ES_URL="http://localhost:9200"
```
And run an Elasticsearch instance in Docker with
```bash
docker run -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" -e "xpack.security.http.ssl.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.9.0
```
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-elasticsearch
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-elasticsearch
```
And add the following code to your `server.py` file:
```python
from rag_elasticsearch import chain as rag_elasticsearch_chain
add_routes(app, rag_elasticsearch_chain, path="/rag-elasticsearch")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-elasticsearch/playground](http://127.0.0.1:8000/rag-elasticsearch/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-elasticsearch")
```
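A minimal invocation sketch (assuming the `question` / `chat_history` input schema this chain expects, as shown in `main.py` below):
```python
response = runnable.invoke(
    {
        "question": "What is our work from home policy?",
        "chat_history": [],
    }
)
print(response)
```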
For loading the fictional workplace documents, run the following command from the root of this repository:
```bash
python ingest.py
```
However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/ingest.py | import os
from langchain_community.document_loaders import JSONLoader
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_elasticsearch import ElasticsearchStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
ELASTIC_CLOUD_ID = os.getenv("ELASTIC_CLOUD_ID")
ELASTIC_USERNAME = os.getenv("ELASTIC_USERNAME", "elastic")
ELASTIC_PASSWORD = os.getenv("ELASTIC_PASSWORD")
ES_URL = os.getenv("ES_URL", "http://localhost:9200")
if ELASTIC_CLOUD_ID and ELASTIC_USERNAME and ELASTIC_PASSWORD:
es_connection_details = {
"es_cloud_id": ELASTIC_CLOUD_ID,
"es_user": ELASTIC_USERNAME,
"es_password": ELASTIC_PASSWORD,
}
else:
es_connection_details = {"es_url": ES_URL}
# Metadata extraction function
def metadata_func(record: dict, metadata: dict) -> dict:
metadata["name"] = record.get("name")
metadata["summary"] = record.get("summary")
metadata["url"] = record.get("url")
metadata["category"] = record.get("category")
metadata["updated_at"] = record.get("updated_at")
return metadata
## Load Data
loader = JSONLoader(
file_path="./data/documents.json",
jq_schema=".[]",
content_key="content",
metadata_func=metadata_func,
)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=800, chunk_overlap=250)
all_splits = text_splitter.split_documents(loader.load())
# Add to vectorDB
vectorstore = ElasticsearchStore.from_documents(
documents=all_splits,
embedding=HuggingFaceEmbeddings(
model_name="all-MiniLM-L6-v2", model_kwargs={"device": "cpu"}
),
**es_connection_details,
index_name="workplace-search-example",
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/main.py | from rag_elasticsearch import chain
if __name__ == "__main__":
questions = [
"What is the nasa sales team?",
"What is our work from home policy?",
"Does the company own my personal project?",
"How does compensation work?",
]
response = chain.invoke(
{
"question": questions[0],
"chat_history": [],
}
)
print(response)
follow_up_question = "What are their objectives?"
response = chain.invoke(
{
"question": follow_up_question,
"chat_history": [
"What is the nasa sales team?",
"The sales team of NASA consists of Laura Martinez, the Area "
"Vice-President of North America, and Gary Johnson, the Area "
"Vice-President of South America. (Sales Organization Overview)",
],
}
)
print(response)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/rag_elasticsearch/__init__.py | from rag_elasticsearch.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/rag_elasticsearch/chain.py | from operator import itemgetter
from typing import List, Optional, Tuple
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_core.messages import BaseMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import format_document
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_elasticsearch import ElasticsearchStore
from .connection import es_connection_details
from .prompts import CONDENSE_QUESTION_PROMPT, DOCUMENT_PROMPT, LLM_CONTEXT_PROMPT
# Setup connecting to Elasticsearch
vectorstore = ElasticsearchStore(
**es_connection_details,
embedding=HuggingFaceEmbeddings(
model_name="all-MiniLM-L6-v2", model_kwargs={"device": "cpu"}
),
index_name="workplace-search-example",
)
retriever = vectorstore.as_retriever()
# Set up the LLM to use
llm = ChatOpenAI(temperature=0)
def _combine_documents(
docs, document_prompt=DOCUMENT_PROMPT, document_separator="\n\n"
):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple]) -> str:
buffer = ""
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
buffer += "\n" + "\n".join([human, ai])
return buffer
class ChainInput(BaseModel):
chat_history: Optional[List[BaseMessage]] = Field(
description="Previous chat messages."
)
question: str = Field(..., description="The question to answer.")
_inputs = RunnableParallel(
standalone_question=RunnablePassthrough.assign(
chat_history=lambda x: _format_chat_history(x["chat_history"])
)
| CONDENSE_QUESTION_PROMPT
| llm
| StrOutputParser(),
)
_context = {
"context": itemgetter("standalone_question") | retriever | _combine_documents,
"question": lambda x: x["standalone_question"],
}
chain = _inputs | _context | LLM_CONTEXT_PROMPT | llm | StrOutputParser()
chain = chain.with_types(input_type=ChainInput)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/rag_elasticsearch/connection.py | import os
ELASTIC_CLOUD_ID = os.getenv("ELASTIC_CLOUD_ID")
ELASTIC_USERNAME = os.getenv("ELASTIC_USERNAME", "elastic")
ELASTIC_PASSWORD = os.getenv("ELASTIC_PASSWORD")
ES_URL = os.getenv("ES_URL", "http://localhost:9200")
if ELASTIC_CLOUD_ID and ELASTIC_USERNAME and ELASTIC_PASSWORD:
es_connection_details = {
"es_cloud_id": ELASTIC_CLOUD_ID,
"es_user": ELASTIC_USERNAME,
"es_password": ELASTIC_PASSWORD,
}
else:
es_connection_details = {"es_url": ES_URL}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-elasticsearch/rag_elasticsearch/prompts.py | from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
# Used to condense a question and chat history into a single question
condense_question_prompt_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language. If there is no chat history, just rephrase the question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
""" # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(
condense_question_prompt_template
)
# RAG Prompt to provide the context and question for LLM to answer
# We also ask the LLM to cite the source of the passage it is answering from
llm_context_prompt_template = """
Use the following passages to answer the user's question.
Each passage has a SOURCE which is the title of the document. When answering, cite source name of the passages you are answering from below the answer in a unique bullet point list.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
----
{context}
----
Question: {question}
""" # noqa: E501
LLM_CONTEXT_PROMPT = ChatPromptTemplate.from_template(llm_context_prompt_template)
# Used to build a context window from passages retrieved
document_prompt_template = """
---
NAME: {name}
PASSAGE:
{page_content}
---
"""
DOCUMENT_PROMPT = PromptTemplate.from_template(document_prompt_template)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation/README.md |
# rag-conversation
This template is used for [conversational](https://python.langchain.com/docs/expression_language/cookbook/retrieval#conversational-retrieval-chain) [retrieval](https://python.langchain.com/docs/use_cases/question_answering/), which is one of the most popular LLM use-cases.
It passes both a conversation history and retrieved documents into an LLM for synthesis.
## Environment Setup
This template uses Pinecone as a vectorstore and requires that `PINECONE_API_KEY`, `PINECONE_ENVIRONMENT`, and `PINECONE_INDEX` are set.
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
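For example, a minimal setup sketch (all values are placeholders for your own credentials):
```shell
export PINECONE_API_KEY=<your-pinecone-api-key>
export PINECONE_ENVIRONMENT=<your-pinecone-environment>
export PINECONE_INDEX=<your-pinecone-index>
export OPENAI_API_KEY=<your-openai-api-key>
```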
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-conversation
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-conversation
```
And add the following code to your `server.py` file:
```python
from rag_conversation import chain as rag_conversation_chain
add_routes(app, rag_conversation_chain, path="/rag-conversation")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation/playground](http://127.0.0.1:8000/rag-conversation/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation")
```
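A minimal invocation sketch (assuming the `question` / `chat_history` input schema used by this chain, where the history is a list of `(human, ai)` tuples):
```python
answer = runnable.invoke(
    {
        "question": "How does agent memory work?",
        "chat_history": [],
    }
)
follow_up = runnable.invoke(
    {
        "question": "What are the different types?",
        "chat_history": [("How does agent memory work?", answer)],
    }
)
```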
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation/rag_conversation.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "424a9d8d",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_rag_conv, path=\"/rag_conversation\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "5f521923",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://0.0.0.0:8001/rag_conversation\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "679bd83b",
"metadata": {},
"outputs": [],
"source": [
"question = \"How does agent memory work?\"\n",
"answer = rag_app.invoke(\n",
" {\n",
" \"question\": question,\n",
" \"chat_history\": [],\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "94a05616",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'Based on the given context, it is mentioned that the design of generative agents combines LLM (which stands for language, learning, and memory) with memory mechanisms. However, the specific workings of agent memory are not explicitly described in the given context.'"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"answer"
]
},
{
"cell_type": "code",
"execution_count": 9,
"id": "ce206c8a",
"metadata": {},
"outputs": [],
"source": [
"chat_history = [(question, answer)]\n",
"answer = rag_app.invoke(\n",
" {\n",
" \"question\": \"What are the different types?\",\n",
" \"chat_history\": chat_history,\n",
" }\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"id": "4626f167",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"\"Based on the given context, two types of memory are mentioned: short-term memory and long-term memory. \\n\\n1. Short-term memory: It refers to the ability of the agent to retain and recall information for a short period. In the context, short-term memory is described as the in-context learning that allows the model to learn.\\n\\n2. Long-term memory: It refers to the capability of the agent to retain and recall information over extended periods. In the context, long-term memory is described as the ability to retain and recall infinite information by leveraging an external vector store and fast retrieval.\\n\\nIt's important to note that these are just the types of memory mentioned in the given context. There may be other types of memory as well, depending on the specific design and implementation of the agent.\""
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"answer"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation/rag_conversation/__init__.py | from rag_conversation.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation/rag_conversation/chain.py | import os
from operator import itemgetter
from typing import List, Tuple
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
format_document,
)
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import (
RunnableBranch,
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
)
from langchain_pinecone import PineconeVectorStore
if os.environ.get("PINECONE_API_KEY", None) is None:
raise Exception("Missing `PINECONE_API_KEY` environment variable.")
if os.environ.get("PINECONE_ENVIRONMENT", None) is None:
raise Exception("Missing `PINECONE_ENVIRONMENT` environment variable.")
PINECONE_INDEX_NAME = os.environ.get("PINECONE_INDEX", "langchain-test")
### Ingest code - you may need to run this the first time
# # Load
# from langchain_community.document_loaders import WebBaseLoader
# loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
# data = loader.load()
# # Split
# from langchain_text_splitters import RecursiveCharacterTextSplitter
# text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
# all_splits = text_splitter.split_documents(data)
# # Add to vectorDB
# vectorstore = PineconeVectorStore.from_documents(
# documents=all_splits, embedding=OpenAIEmbeddings(), index_name=PINECONE_INDEX_NAME
# )
# retriever = vectorstore.as_retriever()
vectorstore = PineconeVectorStore.from_existing_index(
PINECONE_INDEX_NAME, OpenAIEmbeddings()
)
retriever = vectorstore.as_retriever()
# Condense a chat history and follow-up question into a standalone question
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""" # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
# RAG answer synthesis prompt
template = """Answer the question based only on the following context:
<context>
{context}
</context>"""
ANSWER_PROMPT = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
# Conversational Retrieval Chain
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
def _combine_documents(
docs, document_prompt=DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"
):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple[str, str]]) -> List:
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
# User input
class ChatHistory(BaseModel):
chat_history: List[Tuple[str, str]] = Field(..., extra={"widget": {"type": "chat"}})
question: str
_search_query = RunnableBranch(
# If input includes chat_history, we condense it with the follow-up question
(
RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
run_name="HasChatHistoryCheck"
), # Condense follow-up question and chat into a standalone_question
RunnablePassthrough.assign(
chat_history=lambda x: _format_chat_history(x["chat_history"])
)
| CONDENSE_QUESTION_PROMPT
| ChatOpenAI(temperature=0)
| StrOutputParser(),
),
# Else, we have no chat history, so just pass through the question
RunnableLambda(itemgetter("question")),
)
_inputs = RunnableParallel(
{
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"context": _search_query | retriever | _combine_documents,
}
).with_types(input_type=ChatHistory)
chain = _inputs | ANSWER_PROMPT | ChatOpenAI() | StrOutputParser()
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation-zep/README.md | # rag-conversation-zep
This template demonstrates building a RAG conversation app using Zep.
Included in this template:
- Populating a [Zep Document Collection](https://docs.getzep.com/sdk/documents/) with a set of documents (a Collection is analogous to an index in other Vector Databases).
- Using Zep's [integrated embedding](https://docs.getzep.com/deployment/embeddings/) functionality to embed the documents as vectors.
- Configuring a LangChain [ZepVectorStore Retriever](https://docs.getzep.com/sdk/documents/) to retrieve documents using Zep's built-in, hardware-accelerated [Maximal Marginal Relevance](https://docs.getzep.com/sdk/search_query/) (MMR) re-ranking (see the configuration sketch after this list).
- Prompts, a simple chat history data structure, and other components required to build a RAG conversation app.
- The RAG conversation chain.
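As a sketch of the retriever configuration mentioned above: the retriever in `rag_conversation_zep/chain.py` exposes configurable fields (`search_type` and `search_kwargs`), so a caller could, for example, request MMR re-ranking with a custom `k` and `lambda_mult` at invocation time:
```python
from rag_conversation_zep import chain as rag_conversation_zep_chain

# Request MMR re-ranking with 5 results; lambda_mult trades off relevance vs. diversity
configured_chain = rag_conversation_zep_chain.with_config(
    configurable={
        "search_type": "mmr",
        "search_kwargs": {"k": 5, "lambda_mult": 0.5},
    }
)
```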
## About [Zep - Fast, scalable building blocks for LLM Apps](https://www.getzep.com/)
Zep is an open source platform for productionizing LLM apps. Go from a prototype built in LangChain or LlamaIndex, or a custom app, to production in minutes without rewriting code.
Key Features:
- Fast! Zep’s async extractors operate independently of your chat loop, ensuring a snappy user experience.
- Long-term memory persistence, with access to historical messages irrespective of your summarization strategy.
- Auto-summarization of memory messages based on a configurable message window. A series of summaries are stored, providing flexibility for future summarization strategies.
- Hybrid search over memories and metadata, with messages automatically embedded on creation.
- Entity Extractor that automatically extracts named entities from messages and stores them in the message metadata.
- Auto-token counting of memories and summaries, allowing finer-grained control over prompt assembly.
- Python and JavaScript SDKs.
Zep project: https://github.com/getzep/zep | Docs: https://docs.getzep.com/
## Environment Setup
Set up a Zep service by following the [Quick Start Guide](https://docs.getzep.com/deployment/quickstart/).
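The template reads its Zep connection settings from environment variables (see `ingest.py` and `chain.py` below); a minimal sketch with placeholder values, noting that the chain also calls `ChatOpenAI`, so an OpenAI key is assumed as well:
```shell
export ZEP_API_URL=http://localhost:8000   # defaults to http://localhost:8000
export ZEP_API_KEY=<your-zep-api-key>      # optional, if auth is enabled
export ZEP_COLLECTION=langchaintest        # defaults to "langchaintest"
export OPENAI_API_KEY=<your-openai-api-key>
```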
## Ingesting Documents into a Zep Collection
Run `python ingest.py` to ingest the test documents into a Zep Collection. Review the file to modify the Collection name and document source.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-conversation-zep
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-conversation-zep
```
And add the following code to your `server.py` file:
```python
from rag_conversation_zep import chain as rag_conversation_zep_chain
add_routes(app, rag_conversation_zep_chain, path="/rag-conversation-zep")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-conversation-zep/playground](http://127.0.0.1:8000/rag-conversation-zep/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-conversation-zep")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation-zep/ingest.py | # Ingest Documents into a Zep Collection
import os
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores.zep import CollectionConfig, ZepVectorStore
from langchain_text_splitters import RecursiveCharacterTextSplitter
ZEP_API_URL = os.environ.get("ZEP_API_URL", "http://localhost:8000")
ZEP_API_KEY = os.environ.get("ZEP_API_KEY", None)
ZEP_COLLECTION_NAME = os.environ.get("ZEP_COLLECTION", "langchaintest")
collection_config = CollectionConfig(
name=ZEP_COLLECTION_NAME,
description="Zep collection for LangChain",
metadata={},
embedding_dimensions=1536,
is_auto_embedded=True,
)
# Load
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
# Add to vectorDB
vectorstore = ZepVectorStore.from_documents(
documents=all_splits,
collection_name=ZEP_COLLECTION_NAME,
config=collection_config,
api_url=ZEP_API_URL,
api_key=ZEP_API_KEY,
embedding=FakeEmbeddings(size=1),
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation-zep/rag_conversation_zep/__init__.py | from rag_conversation_zep.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-conversation-zep/rag_conversation_zep/chain.py | import os
from operator import itemgetter
from typing import List, Tuple
from langchain_community.chat_models import ChatOpenAI
from langchain_community.vectorstores.zep import CollectionConfig, ZepVectorStore
from langchain_core.documents import Document
from langchain_core.messages import AIMessage, BaseMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
format_document,
)
from langchain_core.prompts.prompt import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import (
ConfigurableField,
RunnableBranch,
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
)
from langchain_core.runnables.utils import ConfigurableFieldSingleOption
ZEP_API_URL = os.environ.get("ZEP_API_URL", "http://localhost:8000")
ZEP_API_KEY = os.environ.get("ZEP_API_KEY", None)
ZEP_COLLECTION_NAME = os.environ.get("ZEP_COLLECTION", "langchaintest")
collection_config = CollectionConfig(
name=ZEP_COLLECTION_NAME,
description="Zep collection for LangChain",
metadata={},
embedding_dimensions=1536,
is_auto_embedded=True,
)
vectorstore = ZepVectorStore(
collection_name=ZEP_COLLECTION_NAME,
config=collection_config,
api_url=ZEP_API_URL,
api_key=ZEP_API_KEY,
embedding=None,
)
# Zep offers native, hardware-accelerated MMR. Enabling this will improve
# the diversity of results, but may also reduce relevance. You can tune
# the lambda parameter to control the tradeoff between relevance and diversity.
# Enabling is a good default.
retriever = vectorstore.as_retriever().configurable_fields(
search_type=ConfigurableFieldSingleOption(
id="search_type",
options={"Similarity": "similarity", "Similarity with MMR Reranking": "mmr"},
default="mmr",
name="Search Type",
description="Type of search to perform: 'similarity' or 'mmr'",
),
search_kwargs=ConfigurableField(
id="search_kwargs",
name="Search kwargs",
description=(
"Specify 'k' for number of results to return and 'lambda_mult' for tuning"
" MMR relevance vs diversity."
),
),
)
# Condense a chat history and follow-up question into a standalone question
_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""" # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(_template)
# RAG answer synthesis prompt
template = """Answer the question based only on the following context:
<context>
{context}
</context>"""
ANSWER_PROMPT = ChatPromptTemplate.from_messages(
[
("system", template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
# Conversational Retrieval Chain
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
def _combine_documents(
docs: List[Document],
document_prompt: PromptTemplate = DEFAULT_DOCUMENT_PROMPT,
document_separator: str = "\n\n",
):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple[str, str]]) -> List[BaseMessage]:
buffer: List[BaseMessage] = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
_condense_chain = (
RunnablePassthrough.assign(
chat_history=lambda x: _format_chat_history(x["chat_history"])
)
| CONDENSE_QUESTION_PROMPT
| ChatOpenAI(temperature=0)
| StrOutputParser()
)
_search_query = RunnableBranch(
# If input includes chat_history, we condense it with the follow-up question
(
RunnableLambda(lambda x: bool(x.get("chat_history"))).with_config(
run_name="HasChatHistoryCheck"
),
# Condense follow-up question and chat into a standalone_question
_condense_chain,
),
# Else, we have no chat history, so just pass through the question
RunnableLambda(itemgetter("question")),
)
# User input
class ChatHistory(BaseModel):
chat_history: List[Tuple[str, str]] = Field(..., extra={"widget": {"type": "chat"}})
question: str
_inputs = RunnableParallel(
{
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"context": _search_query | retriever | _combine_documents,
}
).with_types(input_type=ChatHistory)
chain = _inputs | ANSWER_PROMPT | ChatOpenAI() | StrOutputParser()
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-codellama-fireworks/README.md |
# rag-codellama-fireworks
This template performs RAG on a codebase.
It uses codellama-34b hosted on Fireworks' [LLM inference API](https://blog.fireworks.ai/accelerating-code-completion-with-fireworks-fast-llm-inference-f4e8b5ec534a).
## Environment Setup
Set the `FIREWORKS_API_KEY` environment variable to access the Fireworks models.
You can obtain it from [here](https://app.fireworks.ai/login?callbackURL=https://app.fireworks.ai).
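For example (placeholder value):
```shell
export FIREWORKS_API_KEY=<your-fireworks-api-key>
```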
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-codellama-fireworks
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-codellama-fireworks
```
And add the following code to your `server.py` file:
```python
from rag_codellama_fireworks import chain as rag_codellama_fireworks_chain
add_routes(app, rag_codellama_fireworks_chain, path="/rag-codellama-fireworks")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-codellama-fireworks/playground](http://127.0.0.1:8000/rag-codellama-fireworks/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-codellama-fireworks")
```
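A minimal invocation sketch (the chain takes a plain question string as input):
```python
answer = runnable.invoke("How can I initialize a ReAct agent?")
print(answer)
```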
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-codellama-fireworks/rag_codellama_fireworks.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Run Template"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8000/rag-codellama-fireworks\")\n",
"rag_app.invoke(\"How can I initialize a ReAct agent?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-codellama-fireworks/rag_codellama_fireworks/__init__.py | from rag_codellama_fireworks.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-codellama-fireworks/rag_codellama_fireworks/chain.py | import os
from git import Repo
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers import LanguageParser
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.llms.fireworks import Fireworks
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_text_splitters import Language, RecursiveCharacterTextSplitter
# Check API key
if os.environ.get("FIREWORKS_API_KEY", None) is None:
raise Exception("Missing `FIREWORKS_API_KEY` environment variable.")
# Load codebase
# Set local path
repo_path = "/Users/rlm/Desktop/tmp_repo"
# Use LangChain as an example
repo = Repo.clone_from("https://github.com/langchain-ai/langchain", to_path=repo_path)
loader = GenericLoader.from_filesystem(
repo_path + "/libs/langchain/langchain",
glob="**/*",
suffixes=[".py"],
parser=LanguageParser(language=Language.PYTHON, parser_threshold=500),
)
documents = loader.load()
# Split
python_splitter = RecursiveCharacterTextSplitter.from_language(
language=Language.PYTHON, chunk_size=2000, chunk_overlap=200
)
texts = python_splitter.split_documents(documents)
# Add to vectorDB
vectorstore = Chroma.from_documents(
documents=texts,
collection_name="codebase-rag",
embedding=GPT4AllEmbeddings(),
)
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# Initialize a Fireworks model
model = Fireworks(model="accounts/fireworks/models/llama-v2-34b-code-instruct")
# RAG chain
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma/README.md |
# rag-chroma
This template performs RAG using Chroma and OpenAI.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
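If you want to index your own content instead, `chain.py` contains a commented-out example along these lines; the sketch below assumes the same OpenAI embeddings and a URL of your choosing:
```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

# Load, split, and index the source document in Chroma
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(loader.load())
vectorstore = Chroma.from_documents(
    documents=all_splits,
    collection_name="rag-chroma",
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
```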
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma
```
And add the following code to your `server.py` file:
```python
from rag_chroma import chain as rag_chroma_chain
add_routes(app, rag_chroma_chain, path="/rag-chroma")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma/playground](http://127.0.0.1:8000/rag-chroma/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma/rag_chroma.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_rag_conv, path=\"/rag-chroma\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/rag-chroma\")\n",
"rag_app.invoke(\"Where id Harrison work\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma/rag_chroma/__init__.py | from rag_chroma.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma/rag_chroma/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
# Example for document loading (from url), splitting, and creating a vectorstore
"""
# Load
from langchain_community.document_loaders import WebBaseLoader
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
from langchain_text_splitters import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
# Add to vectorDB
vectorstore = Chroma.from_documents(documents=all_splits,
collection_name="rag-chroma",
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
"""
# Embed a single document as a test
vectorstore = Chroma.from_texts(
["harrison worked at kensho"],
collection_name="rag-chroma",
embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# LLM
model = ChatOpenAI()
# RAG chain
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-private/README.md |
# rag-chroma-private
This template performs RAG with no reliance on external APIs.
It utilizes Ollama for the LLM, GPT4All for embeddings, and Chroma for the vectorstore.
The vectorstore is created in `chain.py` and by default indexes a [popular blog post on Agents](https://lilianweng.github.io/posts/2023-06-23-agent/) for question-answering.
## Environment Setup
To set up the environment, you need to download Ollama.
Follow the instructions [here](https://python.langchain.com/docs/integrations/chat/ollama).
You can choose the desired LLM with Ollama.
This template uses `llama2:7b-chat`, which can be accessed using `ollama pull llama2:7b-chat`.
There are many other options available [here](https://ollama.ai/library).
This package also uses [GPT4All](https://python.langchain.com/docs/integrations/text_embedding/gpt4all) embeddings.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-private
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-private
```
And add the following code to your `server.py` file:
```python
from rag_chroma_private import chain as rag_chroma_private_chain
add_routes(app, rag_chroma_private_chain, path="/rag-chroma-private")
```
(Optional) Let's now configure LangSmith. LangSmith will help us trace, monitor and debug LangChain applications. You can sign up for LangSmith [here](https://smith.langchain.com/). If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-private/playground](http://127.0.0.1:8000/rag-chroma-private/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-private")
```
The package will create and add documents to the vector database in `chain.py`. By default, it will load a popular blog post on agents. However, you can choose from a large number of document loaders [here](https://python.langchain.com/docs/integrations/document_loaders).
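For instance, a sketch of swapping the loader in `chain.py` while keeping the same splitter and GPT4All embeddings (the file path is a placeholder):
```python
from langchain_community.document_loaders import TextLoader

# Hypothetical local text file to index instead of the blog post
loader = TextLoader("./my_docs/handbook.txt")
data = loader.load()
```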
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-private/rag_chroma_private.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "232fd40d-cf6a-402d-bcb8-414184a8e924",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_private, path=\"/rag_chroma_private\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "ce39d358-1934-4404-bd3e-3fd497974aff",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" Based on the provided context, agent memory is a long-term memory module that records a comprehensive list of agents' experiences in natural language. Each element is an observation or event directly provided by the agent, and inter-agent communication can trigger new natural language statements. The agent memory is complemented by several key components, including LLM (large language model) as the agent's brain, planning, reflection, and memory mechanisms. The design of generative agents combines LLM with memory, planning, and reflection mechanisms to enable agents to behave conditioned on past experiences and interact with other agents. The agent learns to call external APIs for missing information, including current information, code execution capability, access to proprietary information sources, and more. In summary, the agent memory works by recording and storing observations and events in natural language, allowing the agent to retrieve and use this information to inform its behavior.\n"
]
}
],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://0.0.0.0:8001/rag_chroma_private/\")\n",
"rag_app.invoke(\"How does agent memory work?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-private/rag_chroma_private/__init__.py | from rag_chroma_private.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-private/rag_chroma_private/chain.py | # Load
from langchain_community.chat_models import ChatOllama
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_text_splitters import RecursiveCharacterTextSplitter
loader = WebBaseLoader("https://lilianweng.github.io/posts/2023-06-23-agent/")
data = loader.load()
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=500, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
# Add to vectorDB
vectorstore = Chroma.from_documents(
documents=all_splits,
collection_name="rag-private",
embedding=GPT4AllEmbeddings(),
)
retriever = vectorstore.as_retriever()
# Prompt
# Optionally, pull from the Hub
# from langchain import hub
# prompt = hub.pull("rlm/rag-prompt")
# Or, define your own:
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# LLM
# Select the LLM that you downloaded
ollama_llm = "llama2:7b-chat"
model = ChatOllama(model=ollama_llm)
# RAG chain
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal/README.md |
# rag-chroma-multi-modal
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses OpenCLIP embeddings to embed all of the slide images and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![Diagram illustrating the workflow of a multi-modal LLM visual assistant using OpenCLIP embeddings and GPT-4V for question-answering based on slide deck images.](https://github.com/langchain-ai/langchain/assets/122662504/b3bc8406-48ae-4707-9edf-d0b3a511b200 "Workflow Diagram for Multi-modal LLM Visual Assistant")
## Input
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from DataDog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage
This template will use [OpenCLIP](https://github.com/mlfoundations/open_clip) multi-modal embeddings to embed the images.
You can select different embedding model options (see results [here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_results.csv)).
The first time you run the app, it will automatically download the multimodal embedding model.
By default, LangChain will use an embedding model with moderate performance but lower memory requirements, `ViT-H-14`.
You can choose alternative `OpenCLIPEmbeddings` models in `rag_chroma_multi_modal/ingest.py`:
```
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(re_vectorstore_path),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
```
## LLM
The app will retrieve images based on similarity between the text input and the image, which are both mapped to multi-modal embedding space. It will then pass the images to GPT-4V.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-multi-modal
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-multi-modal
```
And add the following code to your `server.py` file:
```python
from rag_chroma_multi_modal import chain as rag_chroma_multi_modal_chain
add_routes(app, rag_chroma_multi_modal_chain, path="/rag-chroma-multi-modal")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal/playground](http://127.0.0.1:8000/rag-chroma-multi-modal/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal")
```
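A minimal invocation sketch (the chain takes a plain question string about the indexed slides):
```python
answer = runnable.invoke("How many customers does Datadog have?")
print(answer)
```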
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal/ingest.py | import os
from pathlib import Path
import pypdfium2 as pdfium
from langchain_community.vectorstores import Chroma
from langchain_experimental.open_clip import OpenCLIPEmbeddings
def get_images_from_pdf(pdf_path, img_dump_path):
"""
Extract images from each page of a PDF document and save as JPEG files.
:param pdf_path: A string representing the path to the PDF file.
:param img_dump_path: A string representing the path to dump images to.
"""
pdf = pdfium.PdfDocument(pdf_path)
n_pages = len(pdf)
for page_number in range(n_pages):
page = pdf.get_page(page_number)
bitmap = page.render(scale=1, rotation=0, crop=(0, 0, 0, 0))
pil_image = bitmap.to_pil()
pil_image.save(f"{img_dump_path}/img_{page_number + 1}.jpg", format="JPEG")
# Load PDF
doc_path = Path(__file__).parent / "docs/DDOG_Q3_earnings_deck.pdf"
img_dump_path = Path(__file__).parent / "docs/"
rel_doc_path = doc_path.relative_to(Path.cwd())
rel_img_dump_path = img_dump_path.relative_to(Path.cwd())
print("pdf index")
pil_images = get_images_from_pdf(rel_doc_path, rel_img_dump_path)
print("done")
vectorstore = Path(__file__).parent / "chroma_db_multi_modal"
re_vectorstore_path = vectorstore.relative_to(Path.cwd())
# Load embedding function
print("Loading embedding function")
embedding = OpenCLIPEmbeddings(model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k")
# Create chroma
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(Path(__file__).parent / "chroma_db_multi_modal"),
embedding_function=embedding,
)
# Get image URIs
image_uris = sorted(
[
os.path.join(rel_img_dump_path, image_name)
for image_name in os.listdir(rel_img_dump_path)
if image_name.endswith(".jpg")
]
)
# Add images
print("Embedding images")
vectorstore_mmembd.add_images(uris=image_uris)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal/rag_chroma_multi_modal.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_rag_conv, path=\"/rag-chroma-multi-modal\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/rag-chroma-multi-modal\")\n",
"rag_app.invoke(\"What is the projected TAM for observability expected for each year through 2026?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal/rag_chroma_multi_modal/__init__.py | from rag_chroma_multi_modal.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal/rag_chroma_multi_modal/chain.py | import base64
import io
from pathlib import Path
from langchain_community.chat_models import ChatOpenAI
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from langchain_experimental.open_clip import OpenCLIPEmbeddings
from PIL import Image
def resize_base64_image(base64_string, size=(128, 128)):
"""
Resize an image encoded as a Base64 string.
:param base64_string: A Base64 encoded string of the image to be resized.
:param size: A tuple representing the new size (width, height) for the image.
:return: A Base64 encoded string of the resized image.
"""
img_data = base64.b64decode(base64_string)
img = Image.open(io.BytesIO(img_data))
resized_img = img.resize(size, Image.LANCZOS)
buffered = io.BytesIO()
resized_img.save(buffered, format=img.format)
return base64.b64encode(buffered.getvalue()).decode("utf-8")
def get_resized_images(docs):
"""
Resize images from base64-encoded strings.
:param docs: A list of base64-encoded image to be resized.
:return: Dict containing a list of resized base64-encoded strings.
"""
b64_images = []
for doc in docs:
if isinstance(doc, Document):
doc = doc.page_content
resized_image = resize_base64_image(doc, size=(1280, 720))
b64_images.append(resized_image)
return {"images": b64_images}
def img_prompt_func(data_dict, num_images=2):
"""
GPT-4V prompt for image analysis.
:param data_dict: A dict with images and a user-provided question.
:param num_images: Number of images to include in the prompt.
:return: A list containing message objects for each image and the text prompt.
"""
messages = []
if data_dict["context"]["images"]:
for image in data_dict["context"]["images"][:num_images]:
image_message = {
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image}"},
}
messages.append(image_message)
text_message = {
"type": "text",
"text": (
"You are an analyst tasked with answering questions about visual content.\n"
"You will be give a set of image(s) from a slide deck / presentation.\n"
"Use this information to answer the user question. \n"
f"User-provided question: {data_dict['question']}\n\n"
),
}
messages.append(text_message)
return [HumanMessage(content=messages)]
def multi_modal_rag_chain(retriever):
"""
    Multi-modal RAG chain.
:param retriever: A function that retrieves the necessary context for the model.
:return: A chain of functions representing the multi-modal RAG process.
"""
# Initialize the multi-modal Large Language Model with specific parameters
model = ChatOpenAI(temperature=0, model="gpt-4-vision-preview", max_tokens=1024)
# Define the RAG pipeline
chain = (
{
"context": retriever | RunnableLambda(get_resized_images),
"question": RunnablePassthrough(),
}
| RunnableLambda(img_prompt_func)
| model
| StrOutputParser()
)
return chain
# Load chroma
vectorstore_mmembd = Chroma(
collection_name="multi-modal-rag",
persist_directory=str(Path(__file__).parent.parent / "chroma_db_multi_modal"),
embedding_function=OpenCLIPEmbeddings(
model_name="ViT-H-14", checkpoint="laion2b_s32b_b79k"
),
)
# Make retriever
retriever_mmembd = vectorstore_mmembd.as_retriever()
# Create RAG chain
chain = multi_modal_rag_chain(retriever_mmembd)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal-multi-vector/README.md |
# rag-chroma-multi-modal-multi-vector
Multi-modal LLMs enable visual assistants that can perform question-answering about images.
This template creates a visual assistant for slide decks, which often contain visuals such as graphs or figures.
It uses GPT-4V to create image summaries for each slide, embeds the summaries, and stores them in Chroma.
Given a question, relevant slides are retrieved and passed to GPT-4V for answer synthesis.
![Diagram illustrating the multi-modal LLM process with a slide deck, captioning, storage, question input, and answer synthesis with year-over-year growth percentages.](https://github.com/langchain-ai/langchain/assets/122662504/5277ef6b-d637-43c7-8dc1-9b1567470503 "Multi-modal LLM Process Diagram")
## Input
Supply a slide deck as pdf in the `/docs` directory.
By default, this template has a slide deck about Q3 earnings from Datadog, a public technology company.
Example questions to ask can be:
```
How many customers does Datadog have?
What is Datadog platform % Y/Y growth in FY20, FY21, and FY22?
```
To create an index of the slide deck, run:
```
poetry install
python ingest.py
```
## Storage
Here is the process the template will use to create an index of the slides (see [blog](https://blog.langchain.dev/multi-modal-rag-template/)):
* Extract the slides as a collection of images
* Use GPT-4V to summarize each image
* Embed the image summaries using text embeddings with a link to the original images
* Retrieve relevant images based on similarity between the image summaries and the user input question
* Pass those images to GPT-4V for answer synthesis
By default, this will use [LocalFileStore](https://python.langchain.com/docs/integrations/stores/file_system) to store images and Chroma to store summaries.
For production, it may be desirable to use a remote option such as Redis.
You can set the `local_file_store` flag in `chain.py` and `ingest.py` to switch between the two options.
For Redis, the template will use [UpstashRedisByteStore](https://python.langchain.com/docs/integrations/stores/upstash_redis).
We will use Upstash, which offers Redis with a REST API, to store the images.
Simply log in [here](https://upstash.com/) and create a database.
This will give you a REST API with:
* `UPSTASH_URL`
* `UPSTASH_TOKEN`
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database.
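For reference, a minimal sketch of the storage toggle used in `chain.py` and `ingest.py` (the folder name matches the template's default; adjust paths to your layout):

```python
import os

from langchain.storage import LocalFileStore, UpstashRedisByteStore

local_file_store = True  # set to False to use Upstash Redis instead

if local_file_store:
    # Raw images are written to a folder alongside the template code
    store = LocalFileStore("multi_vector_retriever_metadata")
else:
    # Raw images are stored in Upstash Redis via its REST API
    store = UpstashRedisByteStore(
        url=os.environ["UPSTASH_URL"], token=os.environ["UPSTASH_TOKEN"]
    )
```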
We will use Chroma to store and index the image summaries, which will be created locally in the template directory.
## LLM
The app will retrieve images based on similarity between the text input and the image summary, and pass the images to GPT-4V.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI GPT-4V.
Set `UPSTASH_URL` and `UPSTASH_TOKEN` as environment variables to access your database if you use `UpstashRedisByteStore`.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-chroma-multi-modal-multi-vector
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-chroma-multi-modal-multi-vector
```
And add the following code to your `server.py` file:
```python
from rag_chroma_multi_modal_multi_vector import chain as rag_chroma_multi_modal_chain_mv
add_routes(app, rag_chroma_multi_modal_chain_mv, path="/rag-chroma-multi-modal-multi-vector")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground](http://127.0.0.1:8000/rag-chroma-multi-modal-multi-vector/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-chroma-multi-modal-multi-vector")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal-multi-vector/ingest.py | import base64
import io
import os
import uuid
from io import BytesIO
from pathlib import Path
import pypdfium2 as pdfium
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore, UpstashRedisByteStore
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_core.messages import HumanMessage
from PIL import Image
def image_summarize(img_base64, prompt):
"""
Make image summary
:param img_base64: Base64 encoded string for image
    :param prompt: Text prompt for summarization
    :return: Image summary text
"""
chat = ChatOpenAI(model="gpt-4-vision-preview", max_tokens=1024)
msg = chat.invoke(
[
HumanMessage(
content=[
{"type": "text", "text": prompt},
{
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
},
]
)
]
)
return msg.content
def generate_img_summaries(img_base64_list):
"""
Generate summaries for images
:param img_base64_list: Base64 encoded images
:return: List of image summaries and processed images
"""
# Store image summaries
image_summaries = []
processed_images = []
# Prompt
prompt = """You are an assistant tasked with summarizing images for retrieval. \
These summaries will be embedded and used to retrieve the raw image. \
Give a concise summary of the image that is well optimized for retrieval."""
# Apply summarization to images
for i, base64_image in enumerate(img_base64_list):
try:
image_summaries.append(image_summarize(base64_image, prompt))
processed_images.append(base64_image)
except Exception as e:
print(f"Error with image {i+1}: {e}")
return image_summaries, processed_images
def get_images_from_pdf(pdf_path):
"""
Extract images from each page of a PDF document and save as JPEG files.
:param pdf_path: A string representing the path to the PDF file.
"""
pdf = pdfium.PdfDocument(pdf_path)
n_pages = len(pdf)
pil_images = []
for page_number in range(n_pages):
page = pdf.get_page(page_number)
bitmap = page.render(scale=1, rotation=0, crop=(0, 0, 0, 0))
pil_image = bitmap.to_pil()
pil_images.append(pil_image)
return pil_images
def resize_base64_image(base64_string, size=(128, 128)):
"""
Resize an image encoded as a Base64 string
:param base64_string: Base64 string
:param size: Image size
:return: Re-sized Base64 string
"""
# Decode the Base64 string
img_data = base64.b64decode(base64_string)
img = Image.open(io.BytesIO(img_data))
# Resize the image
resized_img = img.resize(size, Image.LANCZOS)
# Save the resized image to a bytes buffer
buffered = io.BytesIO()
resized_img.save(buffered, format=img.format)
# Encode the resized image to Base64
return base64.b64encode(buffered.getvalue()).decode("utf-8")
def convert_to_base64(pil_image):
"""
Convert PIL images to Base64 encoded strings
:param pil_image: PIL image
:return: Re-sized Base64 string
"""
buffered = BytesIO()
pil_image.save(buffered, format="JPEG") # You can change the format if needed
img_str = base64.b64encode(buffered.getvalue()).decode("utf-8")
img_str = resize_base64_image(img_str, size=(960, 540))
return img_str
def create_multi_vector_retriever(
vectorstore, image_summaries, images, local_file_store
):
"""
Create retriever that indexes summaries, but returns raw images or texts
    :param vectorstore: Vectorstore to store embedded image summaries
:param image_summaries: Image summaries
:param images: Base64 encoded images
:param local_file_store: Use local file storage
:return: Retriever
"""
# File storage option
if local_file_store:
store = LocalFileStore(
str(Path(__file__).parent / "multi_vector_retriever_metadata")
)
else:
# Initialize the storage layer for images using Redis
UPSTASH_URL = os.getenv("UPSTASH_URL")
UPSTASH_TOKEN = os.getenv("UPSTASH_TOKEN")
store = UpstashRedisByteStore(url=UPSTASH_URL, token=UPSTASH_TOKEN)
# Doc ID
id_key = "doc_id"
# Create the multi-vector retriever
retriever = MultiVectorRetriever(
vectorstore=vectorstore,
byte_store=store,
id_key=id_key,
)
# Helper function to add documents to the vectorstore and docstore
def add_documents(retriever, doc_summaries, doc_contents):
doc_ids = [str(uuid.uuid4()) for _ in doc_contents]
summary_docs = [
Document(page_content=s, metadata={id_key: doc_ids[i]})
for i, s in enumerate(doc_summaries)
]
retriever.vectorstore.add_documents(summary_docs)
retriever.docstore.mset(list(zip(doc_ids, doc_contents)))
add_documents(retriever, image_summaries, images)
return retriever
# Load PDF
doc_path = Path(__file__).parent / "docs/DDOG_Q3_earnings_deck.pdf"
rel_doc_path = doc_path.relative_to(Path.cwd())
print("Extract slides as images")
pil_images = get_images_from_pdf(rel_doc_path)
# Convert to b64
images_base_64 = [convert_to_base64(i) for i in pil_images]
# Image summaries
print("Generate image summaries")
image_summaries, images_base_64_processed = generate_img_summaries(images_base_64)
# The vectorstore to use to index the images summaries
vectorstore_mvr = Chroma(
collection_name="image_summaries",
persist_directory=str(Path(__file__).parent / "chroma_db_multi_modal"),
embedding_function=OpenAIEmbeddings(),
)
# Create documents
images_base_64_processed_documents = [
Document(page_content=i) for i in images_base_64_processed
]
# Create retriever
retriever_multi_vector_img = create_multi_vector_retriever(
vectorstore_mvr,
image_summaries,
images_base_64_processed_documents,
local_file_store=True,
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal-multi-vector/rag_chroma_multi_modal_multi_vector.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_rag_conv, path=\"/rag-chroma-multi-modal-multi-vector\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/rag-chroma-multi-modal-multi-vector\")\n",
"rag_app.invoke(\"What is the projected TAM for observability expected for each year through 2026?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal-multi-vector/rag_chroma_multi_modal_multi_vector/__init__.py | from rag_chroma_multi_modal_multi_vector.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-chroma-multi-modal-multi-vector/rag_chroma_multi_modal_multi_vector/chain.py | import base64
import io
import os
from pathlib import Path
from langchain.pydantic_v1 import BaseModel
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore, UpstashRedisByteStore
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_core.documents import Document
from langchain_core.messages import HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnablePassthrough
from PIL import Image
def resize_base64_image(base64_string, size=(128, 128)):
"""
Resize an image encoded as a Base64 string.
:param base64_string: A Base64 encoded string of the image to be resized.
:param size: A tuple representing the new size (width, height) for the image.
:return: A Base64 encoded string of the resized image.
"""
img_data = base64.b64decode(base64_string)
img = Image.open(io.BytesIO(img_data))
resized_img = img.resize(size, Image.LANCZOS)
buffered = io.BytesIO()
resized_img.save(buffered, format=img.format)
return base64.b64encode(buffered.getvalue()).decode("utf-8")
def get_resized_images(docs):
"""
Resize images from base64-encoded strings.
    :param docs: A list of base64-encoded images to be resized.
:return: Dict containing a list of resized base64-encoded strings.
"""
b64_images = []
for doc in docs:
if isinstance(doc, Document):
doc = doc.page_content
resized_image = resize_base64_image(doc, size=(1280, 720))
b64_images.append(resized_image)
return {"images": b64_images}
def img_prompt_func(data_dict, num_images=2):
"""
GPT-4V prompt for image analysis.
:param data_dict: A dict with images and a user-provided question.
:param num_images: Number of images to include in the prompt.
:return: A list containing message objects for each image and the text prompt.
"""
messages = []
if data_dict["context"]["images"]:
for image in data_dict["context"]["images"][:num_images]:
image_message = {
"type": "image_url",
"image_url": {"url": f"data:image/jpeg;base64,{image}"},
}
messages.append(image_message)
text_message = {
"type": "text",
"text": (
"You are an analyst tasked with answering questions about visual content.\n"
"You will be give a set of image(s) from a slide deck / presentation.\n"
"Use this information to answer the user question. \n"
f"User-provided question: {data_dict['question']}\n\n"
),
}
messages.append(text_message)
return [HumanMessage(content=messages)]
def multi_modal_rag_chain(retriever):
"""
    Multi-modal RAG chain.
:param retriever: A function that retrieves the necessary context for the model.
:return: A chain of functions representing the multi-modal RAG process.
"""
# Initialize the multi-modal Large Language Model with specific parameters
model = ChatOpenAI(temperature=0, model="gpt-4-vision-preview", max_tokens=1024)
# Define the RAG pipeline
chain = (
{
"context": retriever | RunnableLambda(get_resized_images),
"question": RunnablePassthrough(),
}
| RunnableLambda(img_prompt_func)
| model
| StrOutputParser()
)
return chain
# Flag
local_file_store = True
# Load chroma
vectorstore_mvr = Chroma(
collection_name="image_summaries",
persist_directory=str(Path(__file__).parent.parent / "chroma_db_multi_modal"),
embedding_function=OpenAIEmbeddings(),
)
if local_file_store:
store = LocalFileStore(
str(Path(__file__).parent.parent / "multi_vector_retriever_metadata")
)
else:
# Load redis
UPSTASH_URL = os.getenv("UPSTASH_URL")
UPSTASH_TOKEN = os.getenv("UPSTASH_TOKEN")
store = UpstashRedisByteStore(url=UPSTASH_URL, token=UPSTASH_TOKEN)
#
id_key = "doc_id"
# Create the multi-vector retriever
retriever = MultiVectorRetriever(
vectorstore=vectorstore_mvr,
byte_store=store,
id_key=id_key,
)
# Create RAG chain
chain = multi_modal_rag_chain(retriever)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-azure-search/README.md | # rag-azure-search
This template performs RAG on documents using [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) as the vectorstore and Azure OpenAI chat and embedding models.
For additional details on RAG with Azure AI Search, refer to [this notebook](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/vectorstores/azuresearch.ipynb).
## Environment Setup
***Prerequisites:*** Existing [Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search) and [Azure OpenAI](https://learn.microsoft.com/azure/ai-services/openai/overview) resources.
***Environment Variables:***
To run this template, you'll need to set the following environment variables:
***Required:***
- AZURE_SEARCH_ENDPOINT - The endpoint of the Azure AI Search service.
- AZURE_SEARCH_KEY - The API key for the Azure AI Search service.
- AZURE_OPENAI_ENDPOINT - The endpoint of the Azure OpenAI service.
- AZURE_OPENAI_API_KEY - The API key for the Azure OpenAI service.
- AZURE_EMBEDDINGS_DEPLOYMENT - Name of the Azure OpenAI deployment to use for embeddings.
- AZURE_CHAT_DEPLOYMENT - Name of the Azure OpenAI deployment to use for chat.
***Optional:***
- AZURE_SEARCH_INDEX_NAME - Name of an existing Azure AI Search index to use. If not provided, an index will be created with name "rag-azure-search".
- OPENAI_API_VERSION - Azure OpenAI API version to use. Defaults to "2023-05-15".
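The template expects the index to already contain documents. As a rough sketch (the file path is illustrative; `rag_azure_search/chain.py` also ships a commented-out variant of this), you can load your own documents into the index like so:

```python
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader

from rag_azure_search.chain import vector_store

# Load and chunk a local text file (the path is just an example)
loader = TextLoader("my_document.txt", encoding="utf-8")
docs = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0).split_documents(
    loader.load()
)

# Push the chunks into the Azure AI Search index used by the chain
vector_store.add_documents(documents=docs)
```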
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-azure-search
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-azure-search
```
And add the following code to your `server.py` file:
```python
from rag_azure_search import chain as rag_azure_search_chain
add_routes(app, rag_azure_search_chain, path="/rag-azure-search")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-azure-search/playground](http://127.0.0.1:8000/rag-azure-search/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-azure-search")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-azure-search/rag_azure_search/__init__.py | from rag_azure_search.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-azure-search/rag_azure_search/chain.py | import os
from langchain_community.vectorstores.azuresearch import AzureSearch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings
if not os.getenv("AZURE_OPENAI_ENDPOINT"):
raise ValueError("Please set the environment variable AZURE_OPENAI_ENDPOINT")
if not os.getenv("AZURE_OPENAI_API_KEY"):
raise ValueError("Please set the environment variable AZURE_OPENAI_API_KEY")
if not os.getenv("AZURE_EMBEDDINGS_DEPLOYMENT"):
raise ValueError("Please set the environment variable AZURE_EMBEDDINGS_DEPLOYMENT")
if not os.getenv("AZURE_CHAT_DEPLOYMENT"):
raise ValueError("Please set the environment variable AZURE_CHAT_DEPLOYMENT")
if not os.getenv("AZURE_SEARCH_ENDPOINT"):
raise ValueError("Please set the environment variable AZURE_SEARCH_ENDPOINT")
if not os.getenv("AZURE_SEARCH_KEY"):
raise ValueError("Please set the environment variable AZURE_SEARCH_KEY")
api_version = os.getenv("OPENAI_API_VERSION", "2023-05-15")
index_name = os.getenv("AZURE_SEARCH_INDEX_NAME", "rag-azure-search")
embeddings = AzureOpenAIEmbeddings(
deployment=os.environ["AZURE_EMBEDDINGS_DEPLOYMENT"],
api_version=api_version,
chunk_size=1,
)
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
azure_search_key=os.environ["AZURE_SEARCH_KEY"],
index_name=index_name,
embedding_function=embeddings.embed_query,
)
"""
(Optional) Example document -
Uncomment the following code to load the document into the vector store
or substitute with your own.
"""
# import pathlib
# from langchain.text_splitter import CharacterTextSplitter
# from langchain_community.document_loaders import TextLoader
# current_file_path = pathlib.Path(__file__).resolve()
# root_directory = current_file_path.parents[3]
# target_file_path = \
# root_directory / "docs" / "docs" / "modules" / "state_of_the_union.txt"
# loader = TextLoader(str(target_file_path), encoding="utf-8")
# documents = loader.load()
# text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
# docs = text_splitter.split_documents(documents)
# vector_store.add_documents(documents=docs)
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
# Perform a similarity search
retriever = vector_store.as_retriever()
_prompt = ChatPromptTemplate.from_template(template)
_model = AzureChatOpenAI(
deployment_name=os.environ["AZURE_CHAT_DEPLOYMENT"],
api_version=api_version,
)
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| _prompt
| _model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-kendra/README.md | # rag-aws-kendra
This template is an application that utilizes Amazon Kendra, a machine-learning-powered search service, and Anthropic Claude for text generation. The application uses a retrieval chain over your documents to answer questions about them.
It uses the `boto3` library to connect with the Bedrock service.
For more context on building RAG applications with Amazon Kendra, check [this page](https://aws.amazon.com/blogs/machine-learning/quickly-build-high-accuracy-generative-ai-applications-on-enterprise-data-using-amazon-kendra-langchain-and-large-language-models/).
## Environment Setup
Please ensure you have set up and configured `boto3` to work with your AWS account.
You can follow the guide [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
You should also have a Kendra Index set up before using this template.
You can use [this Cloudformation template](https://github.com/aws-samples/amazon-kendra-langchain-extensions/blob/main/kendra_retriever_samples/kendra-docs-index.yaml) to create a sample index.
This includes sample data containing AWS online documentation for Amazon Kendra, Amazon Lex, and Amazon SageMaker. Alternatively, you can use your own Amazon Kendra index if you have indexed your own dataset.
The following environment variables need to be set:
* `AWS_DEFAULT_REGION` - This should reflect the correct AWS region. Default is `us-east-1`.
* `AWS_PROFILE` - This should reflect your AWS profile. Default is `default`.
* `KENDRA_INDEX_ID` - This should have the Index ID of the Kendra index. Note that the Index ID is a 36 character alphanumeric value that can be found in the index detail page.
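For reference, a minimal sketch of how `rag_aws_kendra/chain.py` consumes these variables (assuming your `boto3` credentials are already configured):

```python
import os

from langchain.retrievers import AmazonKendraRetriever
from langchain_community.llms.bedrock import Bedrock

region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
profile = os.environ.get("AWS_PROFILE", "default")
kendra_index = os.environ["KENDRA_INDEX_ID"]

# Claude on Bedrock handles answer generation
model = Bedrock(
    model_id="anthropic.claude-v2",
    region_name=region,
    credentials_profile_name=profile,
    model_kwargs={"max_tokens_to_sample": 200},
)

# Kendra retrieves the most relevant passages from your index
retriever = AmazonKendraRetriever(index_id=kendra_index, top_k=5, region_name=region)
```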
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-aws-kendra
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-aws-kendra
```
And add the following code to your `server.py` file:
```python
from rag_aws_kendra.chain import chain as rag_aws_kendra_chain
add_routes(app, rag_aws_kendra_chain, path="/rag-aws-kendra")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-aws-kendra/playground](http://127.0.0.1:8000/rag-aws-kendra/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-kendra")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-kendra/main.py | from rag_aws_kendra.chain import chain
if __name__ == "__main__":
query = "Does Kendra support table extraction?"
print(chain.invoke(query))
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-kendra/rag_aws_kendra/chain.py | import os
from langchain.retrievers import AmazonKendraRetriever
from langchain_community.llms.bedrock import Bedrock
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
# Get region and profile from env
region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
profile = os.environ.get("AWS_PROFILE", "default")
kendra_index = os.environ.get("KENDRA_INDEX_ID", None)
if not kendra_index:
raise ValueError(
"No value provided in env variable 'KENDRA_INDEX_ID'. "
"A Kendra index is required to run this application."
)
# Set LLM and embeddings
model = Bedrock(
model_id="anthropic.claude-v2",
region_name=region,
credentials_profile_name=profile,
model_kwargs={"max_tokens_to_sample": 200},
)
# Create Kendra retriever
retriever = AmazonKendraRetriever(index_id=kendra_index, top_k=5, region_name=region)
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/README.md |
# rag-aws-bedrock
This template is designed to connect with the AWS Bedrock service, a managed service that offers a set of foundation models.
It primarily uses `Anthropic Claude` for text generation and `Amazon Titan` for text embedding, and utilizes FAISS as the vectorstore.
For additional context on the RAG pipeline, refer to [this notebook](https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb).
## Environment Setup
Before you can use this package, ensure that you have configured `boto3` to work with your AWS account.
For details on how to set up and configure `boto3`, visit [this page](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html#configuration).
In addition, you need to install the `faiss-cpu` package to work with the FAISS vector store:
```bash
pip install faiss-cpu
```
You should also set the following environment variables to reflect your AWS profile and region (if you're not using the `default` AWS profile and `us-east-1` region):
* `AWS_DEFAULT_REGION`
* `AWS_PROFILE`
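For reference, a minimal sketch of how the template wires these pieces together (mirroring `rag_aws_bedrock/chain.py`; replace the sample text with your own data):

```python
import os

from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.llms.bedrock import Bedrock
from langchain_community.vectorstores import FAISS

region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
profile = os.environ.get("AWS_PROFILE", "default")

# Claude for generation, Titan for embeddings
model = Bedrock(
    model_id="anthropic.claude-v2",
    region_name=region,
    credentials_profile_name=profile,
)
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

# FAISS indexes your texts in memory (sample text shown for illustration)
vectorstore = FAISS.from_texts(["sample text to index"], embedding=embeddings)
retriever = vectorstore.as_retriever()
```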
## Usage
First, install the LangChain CLI:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package:
```shell
langchain app new my-app --package rag-aws-bedrock
```
To add this package to an existing project:
```shell
langchain app add rag-aws-bedrock
```
Then add the following code to your `server.py` file:
```python
from rag_aws_bedrock import chain as rag_aws_bedrock_chain
add_routes(app, rag_aws_bedrock_chain, path="/rag-aws-bedrock")
```
(Optional) If you have access to LangSmith, you can configure it to trace, monitor, and debug LangChain applications. If you don't have access, you can skip this section.
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at [http://localhost:8000](http://localhost:8000)
You can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs) and access the playground at [http://127.0.0.1:8000/rag-aws-bedrock/playground](http://127.0.0.1:8000/rag-aws-bedrock/playground).
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-aws-bedrock")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/main.py | from rag_aws_bedrock.chain import chain
if __name__ == "__main__":
query = "What is this data about?"
print(chain.invoke(query))
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock.ipynb | {
"cells": [
{
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Connect to template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, chain_ext, path=\"/rag_aws_bedrock\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app_pinecone = RemoteRunnable(\"http://0.0.0.0:8001/rag_aws_bedrock\")\n",
"rag_app_pinecone.invoke(\"What are the different types of agent memory\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock/__init__.py | from rag_aws_bedrock.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock/chain.py | import os
from langchain_community.embeddings import BedrockEmbeddings
from langchain_community.llms.bedrock import Bedrock
from langchain_community.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough
# Get region and profile from env
region = os.environ.get("AWS_DEFAULT_REGION", "us-east-1")
profile = os.environ.get("AWS_PROFILE", "default")
# Set LLM and embeddings
model = Bedrock(
model_id="anthropic.claude-v2",
region_name=region,
credentials_profile_name=profile,
model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")
# Add to vectorDB
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=bedrock_embeddings
)
# Get retriever from vectorstore
retriever = vectorstore.as_retriever()
# RAG prompt
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-astradb/README.md |
# rag-astradb
This template performs RAG using Astra DB (`AstraDB` vector store class).
## Environment Setup
An [Astra DB](https://astra.datastax.com) database is required; free tier is fine.
- You need the database **API endpoint** (such as `https://0123...-us-east1.apps.astra.datastax.com`) ...
- ... and a **token** (`AstraCS:...`).
Also, an **OpenAI API Key** is required. _Note that out-of-the-box this demo supports OpenAI only, unless you tinker with the code._
Provide the connection parameters and secrets through environment variables. Please refer to `.env.template` for the variable names.
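For reference, a minimal sketch of how the chain connects to Astra DB using these variables (mirroring `astradb_entomology_rag/__init__.py`):

```python
import os

from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import AstraDB

vector_store = AstraDB(
    embedding=OpenAIEmbeddings(),
    collection_name="langserve_rag_demo",
    token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
    api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
    namespace=os.environ.get("ASTRA_DB_KEYSPACE"),  # optional
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
```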
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package rag-astradb
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add rag-astradb
```
And add the following code to your `server.py` file:
```python
from astradb_entomology_rag import chain as astradb_entomology_rag_chain
add_routes(app, astradb_entomology_rag_chain, path="/rag-astradb")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/rag-astradb/playground](http://127.0.0.1:8000/rag-astradb/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/rag-astradb")
```
## Reference
Stand-alone repo with LangServe chain: [here](https://github.com/hemidactylus/langserve_astradb_entomology_rag).
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-astradb/main.py | from astradb_entomology_rag import chain
if __name__ == "__main__":
response = chain.invoke("Are there more coleoptera or bugs?")
print(response)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-astradb/astradb_entomology_rag/__init__.py | import os
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import AstraDB
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from .populate_vector_store import populate
# inits
llm = ChatOpenAI()
embeddings = OpenAIEmbeddings()
vector_store = AstraDB(
embedding=embeddings,
collection_name="langserve_rag_demo",
token=os.environ["ASTRA_DB_APPLICATION_TOKEN"],
api_endpoint=os.environ["ASTRA_DB_API_ENDPOINT"],
namespace=os.environ.get("ASTRA_DB_KEYSPACE"),
)
retriever = vector_store.as_retriever(search_kwargs={"k": 3})
# For demo reasons, let's ensure there are rows on the vector store.
# Please remove this and/or adapt to your use case!
inserted_lines = populate(vector_store)
if inserted_lines:
print(f"Done ({inserted_lines} lines inserted).")
entomology_template = """
You are an expert entomologist, tasked with answering enthusiast biologists' questions.
You must answer based only on the provided context, do not make up any fact.
Your answers must be concise and to the point, but strive to provide scientific details
(such as family, order, Latin names, and so on when appropriate).
You MUST refuse to answer questions on other topics than entomology,
as well as questions whose answer is not found in the provided context.
CONTEXT:
{context}
QUESTION: {question}
YOUR ANSWER:"""
entomology_prompt = ChatPromptTemplate.from_template(entomology_template)
chain = (
{"context": retriever, "question": RunnablePassthrough()}
| entomology_prompt
| llm
| StrOutputParser()
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/rag-astradb/astradb_entomology_rag/populate_vector_store.py | import os
BASE_DIR = os.path.abspath(os.path.dirname(__file__))
def populate(vector_store):
# is the store empty? find out with a probe search
hits = vector_store.similarity_search_by_vector(
embedding=[0.001] * 1536,
k=1,
)
#
if len(hits) == 0:
# this seems a first run:
# must populate the vector store
src_file_name = os.path.join(BASE_DIR, "..", "sources.txt")
lines = [
line.strip()
for line in open(src_file_name).readlines()
if line.strip()
if line[0] != "#"
]
# deterministic IDs to prevent duplicates on multiple runs
ids = ["_".join(line.split(" ")[:2]).lower().replace(":", "") for line in lines]
#
vector_store.add_texts(texts=lines, ids=ids)
return len(lines)
else:
return 0
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/python-lint/README.md | # python-lint
This agent specializes in generating high-quality Python code with a focus on proper formatting and linting. It uses `black`, `ruff`, and `mypy` to ensure the code meets standard quality checks.
This streamlines the coding process by integrating and responding to these checks, resulting in reliable and consistent code output.
It cannot actually execute the code it writes, as code execution may introduce additional dependencies and potential security vulnerabilities.
This makes the agent both a secure and efficient solution for code generation tasks.
You can use it to generate Python code directly, or network it with planning and execution agents.
## Environment Setup
- Install `black`, `ruff`, and `mypy`: `pip install -U black ruff mypy`
- Set `OPENAI_API_KEY` environment variable.
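Once the environment is set up, you can also invoke the agent directly in Python. A minimal sketch (the instruction text is just an example):

```python
from python_lint import agent_executor

# The agent drafts code, runs the `check-code` tool (black, ruff, strict mypy),
# and returns the final code through the `submit-code` tool.
result = agent_executor.invoke(
    "Write a function that returns the nth Fibonacci number."
)
print(result)
```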
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package python-lint
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add python-lint
```
And add the following code to your `server.py` file:
```python
from python_lint import agent_executor as python_lint_agent
add_routes(app, python_lint_agent, path="/python-lint")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/python-lint/playground](http://127.0.0.1:8000/python-lint/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/python-lint")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/python-lint/python_lint/__init__.py | from python_lint.agent_executor import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/python-lint/python_lint/agent_executor.py | import os
import re
import subprocess # nosec
import tempfile
from langchain.agents import AgentType, initialize_agent
from langchain.agents.tools import Tool
from langchain.pydantic_v1 import BaseModel, Field, ValidationError, validator
from langchain_community.chat_models import ChatOpenAI
from langchain_core.language_models import BaseLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField, Runnable
def strip_python_markdown_tags(text: str) -> str:
pat = re.compile(r"```python\n(.*)```", re.DOTALL)
code = pat.match(text)
if code:
return code.group(1)
else:
return text
def format_black(filepath: str):
"""Format a file with black."""
subprocess.run( # nosec
f"black {filepath}",
stderr=subprocess.STDOUT,
text=True,
shell=True,
timeout=3,
check=False,
)
def format_ruff(filepath: str):
"""Run ruff format on a file."""
subprocess.run( # nosec
f"ruff check --fix {filepath}",
shell=True,
text=True,
timeout=3,
universal_newlines=True,
check=False,
)
subprocess.run( # nosec
f"ruff format {filepath}",
stderr=subprocess.STDOUT,
shell=True,
timeout=3,
text=True,
check=False,
)
def check_ruff(filepath: str):
"""Run ruff check on a file."""
subprocess.check_output( # nosec
f"ruff check {filepath}",
stderr=subprocess.STDOUT,
shell=True,
timeout=3,
text=True,
)
def check_mypy(filepath: str, strict: bool = True, follow_imports: str = "skip"):
"""Run mypy on a file."""
cmd = (
f"mypy {'--strict' if strict else ''} "
f"--follow-imports={follow_imports} {filepath}"
)
subprocess.check_output( # nosec
cmd,
stderr=subprocess.STDOUT,
shell=True,
text=True,
timeout=3,
)
class PythonCode(BaseModel):
code: str = Field(
description="Python code conforming to "
"ruff, black, and *strict* mypy standards.",
)
@validator("code")
@classmethod
def check_code(cls, v: str) -> str:
v = strip_python_markdown_tags(v).strip()
try:
with tempfile.NamedTemporaryFile(mode="w", delete=False) as temp_file:
temp_file.write(v)
temp_file_path = temp_file.name
try:
# format with black and ruff
format_black(temp_file_path)
format_ruff(temp_file_path)
except subprocess.CalledProcessError:
pass
# update `v` with formatted code
with open(temp_file_path, "r") as temp_file:
v = temp_file.read()
# check
complaints = dict(ruff=None, mypy=None)
try:
check_ruff(temp_file_path)
except subprocess.CalledProcessError as e:
complaints["ruff"] = e.output
try:
check_mypy(temp_file_path)
except subprocess.CalledProcessError as e:
complaints["mypy"] = e.output
# raise ValueError if ruff or mypy had complaints
if any(complaints.values()):
code_str = f"```{temp_file_path}\n{v}```"
error_messages = [
f"```{key}\n{value}```"
for key, value in complaints.items()
if value
]
raise ValueError("\n\n".join([code_str] + error_messages))
finally:
os.remove(temp_file_path)
return v
def check_code(code: str) -> str:
try:
code_obj = PythonCode(code=code)
return (
f"# LGTM\n"
f"# use the `submit` tool to submit this code:\n\n"
f"```python\n{code_obj.code}\n```"
)
except ValidationError as e:
return e.errors()[0]["msg"]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a world class Python coder who uses "
"black, ruff, and *strict* mypy for all of your code. "
"Provide complete, end-to-end Python code "
"to meet the user's description/requirements. "
"Always `check` your code. When you're done, "
"you must ALWAYS use the `submit` tool.",
),
(
"human",
": {input}",
),
],
)
check_code_tool = Tool.from_function(
check_code,
name="check-code",
description="Always check your code before submitting it!",
)
submit_code_tool = Tool.from_function(
strip_python_markdown_tags,
name="submit-code",
description="THIS TOOL is the most important. "
"use it to submit your code to the user who requested it... "
"but be sure to `check` it first!",
return_direct=True,
)
tools = [check_code_tool, submit_code_tool]
def get_agent_executor(
llm: BaseLLM,
agent_type: AgentType = AgentType.OPENAI_FUNCTIONS,
) -> Runnable:
_agent_executor = initialize_agent(
tools,
llm,
agent=agent_type,
verbose=True,
handle_parsing_errors=True,
prompt=prompt,
)
return _agent_executor | (lambda output: output["output"])
class Instruction(BaseModel):
__root__: str
agent_executor = (
get_agent_executor(ChatOpenAI(model="gpt-4-1106-preview", temperature=0.0))
.configurable_alternatives(
ConfigurableField("model_name"),
default_key="gpt4turbo",
gpt4=get_agent_executor(ChatOpenAI(model="gpt-4", temperature=0.0)),
gpt35t=get_agent_executor(
ChatOpenAI(model="gpt-3.5-turbo", temperature=0.0),
),
)
.with_types(input_type=Instruction, output_type=str)
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/README.md | # propositional-retrieval
This template demonstrates the multi-vector indexing strategy proposed in Chen et al.'s [Dense X Retrieval: What Retrieval Granularity Should We Use?](https://arxiv.org/abs/2312.06648). The prompt, which you can [try out on the hub](https://smith.langchain.com/hub/wfh/proposal-indexing), directs an LLM to generate de-contextualized "propositions" which can be vectorized to increase retrieval accuracy. You can see the full definition in `proposal_chain.py`.
![Diagram illustrating the multi-vector indexing strategy for information retrieval, showing the process from Wikipedia data through a Proposition-izer to FactoidWiki, and the retrieval of information units for a QA model.](https://github.com/langchain-ai/langchain/raw/master/templates/propositional-retrieval/_images/retriever_diagram.png "Retriever Diagram")
## Storage
For this demo, we index a simple academic paper using the RecursiveUrlLoader, and store all retriever information locally (using Chroma and a bytestore stored on the local filesystem). You can modify the storage layer in `storage.py`.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access `gpt-3.5` and the OpenAI Embeddings classes.
## Indexing
Create the index by running the following:
```shell
poetry install
poetry run python propositional_retrieval/ingest.py
```
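If you want to inspect the generated propositions before indexing, you can call the proposition chain directly. A minimal sketch (the input sentence is just an example):

```python
from propositional_retrieval import proposition_chain

# ingest.py passes {"input": <text>} and treats the output as a list of
# proposition strings, which are then embedded and indexed.
props = proposition_chain.invoke(
    {"input": "LangChain templates package reusable chains. They can be served with LangServe."}
)
print(props)
```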
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package propositional-retrieval
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add propositional-retrieval
```
And add the following code to your `server.py` file:
```python
from propositional_retrieval import chain
add_routes(app, chain, path="/propositional-retrieval")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/propositional-retrieval/playground](http://127.0.0.1:8000/propositional-retrieval/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/propositional-retrieval")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Run Template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"from fastapi import FastAPI\n",
"from langserve import add_routes\n",
"from propositional_retrieval import chain\n",
"\n",
"app = FastAPI(\n",
" title=\"LangChain Server\",\n",
" version=\"1.0\",\n",
" description=\"Retriever and Generator for RAG Chroma Dense Retrieval\",\n",
")\n",
"\n",
"add_routes(app, chain, path=\"/propositional-retrieval\")\n",
"\n",
"if __name__ == \"__main__\":\n",
" import uvicorn\n",
"\n",
" uvicorn.run(app, host=\"localhost\", port=8000)\n",
"\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://localhost:8001/propositional-retrieval\")\n",
"rag_app.invoke(\"How are transformers related to convolutional neural networks?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.2"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/__init__.py | from propositional_retrieval.chain import chain
from propositional_retrieval.proposal_chain import proposition_chain
__all__ = ["chain", "proposition_chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_core.load import load
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
from propositional_retrieval.constants import DOCSTORE_ID_KEY
from propositional_retrieval.storage import get_multi_vector_retriever
def format_docs(docs: list) -> str:
loaded_docs = [load(doc) for doc in docs]
return "\n".join(
[
f"<Document id={i}>\n{doc.page_content}\n</Document>"
for i, doc in enumerate(loaded_docs)
]
)
def rag_chain(retriever):
"""
The RAG chain
:param retriever: A function that retrieves the necessary context for the model.
    :return: A chain of functions representing the RAG process.
"""
model = ChatOpenAI(temperature=0, model="gpt-4-1106-preview", max_tokens=1024)
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are an AI assistant. Answer based on the retrieved documents:"
"\n<Documents>\n{context}\n</Documents>",
),
("user", "{question}?"),
]
)
# Define the RAG pipeline
chain = (
{
"context": retriever | format_docs,
"question": RunnablePassthrough(),
}
| prompt
| model
| StrOutputParser()
)
return chain
# Create the multi-vector retriever
retriever = get_multi_vector_retriever(DOCSTORE_ID_KEY)
# Create RAG chain
chain = rag_chain(retriever)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/constants.py | DOCSTORE_ID_KEY = "doc_id"
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/ingest.py | import logging
import uuid
from typing import Sequence
from bs4 import BeautifulSoup as Soup
from langchain_core.documents import Document
from langchain_core.runnables import Runnable
from propositional_retrieval.constants import DOCSTORE_ID_KEY
from propositional_retrieval.proposal_chain import proposition_chain
from propositional_retrieval.storage import get_multi_vector_retriever
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def add_documents(
retriever,
propositions: Sequence[Sequence[str]],
docs: Sequence[Document],
id_key: str = DOCSTORE_ID_KEY,
):
doc_ids = [
str(uuid.uuid5(uuid.NAMESPACE_DNS, doc.metadata["source"])) for doc in docs
]
prop_docs = [
Document(page_content=prop, metadata={id_key: doc_ids[i]})
for i, props in enumerate(propositions)
for prop in props
if prop
]
retriever.vectorstore.add_documents(prop_docs)
retriever.docstore.mset(list(zip(doc_ids, docs)))
def create_index(
docs: Sequence[Document],
indexer: Runnable,
docstore_id_key: str = DOCSTORE_ID_KEY,
):
"""
Create retriever that indexes docs and their propositions
:param docs: Documents to index
    :param indexer: Runnable that creates propositions for each doc
:param docstore_id_key: Key to use to store the docstore id
:return: Retriever
"""
logger.info("Creating multi-vector retriever")
retriever = get_multi_vector_retriever(docstore_id_key)
propositions = indexer.batch(
[{"input": doc.page_content} for doc in docs], {"max_concurrency": 10}
)
add_documents(
retriever,
propositions,
docs,
id_key=docstore_id_key,
)
return retriever
if __name__ == "__main__":
# For our example, we'll load docs from the web
from langchain_text_splitters import RecursiveCharacterTextSplitter # noqa
from langchain_community.document_loaders.recursive_url_loader import (
RecursiveUrlLoader,
)
    # The "Attention Is All You Need" paper
# Could add more parsing here, as it's very raw.
loader = RecursiveUrlLoader(
"https://ar5iv.labs.arxiv.org/html/1706.03762",
max_depth=2,
extractor=lambda x: Soup(x, "html.parser").text,
)
data = loader.load()
logger.info(f"Loaded {len(data)} documents")
# Split
text_splitter = RecursiveCharacterTextSplitter(chunk_size=8000, chunk_overlap=0)
all_splits = text_splitter.split_documents(data)
logger.info(f"Split into {len(all_splits)} documents")
# Create retriever
retriever_multi_vector_img = create_index(
all_splits,
proposition_chain,
DOCSTORE_ID_KEY,
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/proposal_chain.py | import logging
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableLambda
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Modified from the paper to be more robust to benign prompt injection
# https://arxiv.org/abs/2312.06648
# @misc{chen2023dense,
# title={Dense X Retrieval: What Retrieval Granularity Should We Use?},
# author={Tong Chen and Hongwei Wang and Sihao Chen and Wenhao Yu and Kaixin Ma
# and Xinran Zhao and Hongming Zhang and Dong Yu},
# year={2023},
# eprint={2312.06648},
# archivePrefix={arXiv},
# primaryClass={cs.CL}
# }
PROMPT = ChatPromptTemplate.from_messages(
[
(
"system",
"""Decompose the "Content" into clear and simple propositions, ensuring they are interpretable out of
context.
1. Split compound sentence into simple sentences. Maintain the original phrasing from the input
whenever possible.
2. For any named entity that is accompanied by additional descriptive information, separate this
information into its own distinct proposition.
3. Decontextualize the proposition by adding necessary modifier to nouns or entire sentences
and replacing pronouns (e.g., "it", "he", "she", "they", "this", "that") with the full name of the
entities they refer to.
4. Present the results as a list of strings, formatted in JSON.
Example:
Input: Title: Ēostre. Section: Theories and interpretations, Connection to Easter Hares. Content:
The earliest evidence for the Easter Hare (Osterhase) was recorded in south-west Germany in
1678 by the professor of medicine Georg Franck von Franckenau, but it remained unknown in
other parts of Germany until the 18th century. Scholar Richard Sermon writes that "hares were
frequently seen in gardens in spring, and thus may have served as a convenient explanation for the
origin of the colored eggs hidden there for children. Alternatively, there is a European tradition
that hares laid eggs, since a hare’s scratch or form and a lapwing’s nest look very similar, and
both occur on grassland and are first seen in the spring. In the nineteenth century the influence
of Easter cards, toys, and books was to make the Easter Hare/Rabbit popular throughout Europe.
German immigrants then exported the custom to Britain and America where it evolved into the
Easter Bunny."
Output: [ "The earliest evidence for the Easter Hare was recorded in south-west Germany in
1678 by Georg Franck von Franckenau.", "Georg Franck von Franckenau was a professor of
medicine.", "The evidence for the Easter Hare remained unknown in other parts of Germany until
the 18th century.", "Richard Sermon was a scholar.", "Richard Sermon writes a hypothesis about
the possible explanation for the connection between hares and the tradition during Easter", "Hares
were frequently seen in gardens in spring.", "Hares may have served as a convenient explanation
for the origin of the colored eggs hidden in gardens for children.", "There is a European tradition
that hares laid eggs.", "A hare’s scratch or form and a lapwing’s nest look very similar.", "Both
hares and lapwing’s nests occur on grassland and are first seen in the spring.", "In the nineteenth
century the influence of Easter cards, toys, and books was to make the Easter Hare/Rabbit popular
throughout Europe.", "German immigrants exported the custom of the Easter Hare/Rabbit to
Britain and America.", "The custom of the Easter Hare/Rabbit evolved into the Easter Bunny in
Britain and America."]""", # noqa
),
("user", "Decompose the following:\n{input}"),
]
)
def get_propositions(tool_calls: list) -> list:
if not tool_calls:
raise ValueError("No tool calls found")
return tool_calls[0]["args"]["propositions"]
def empty_proposals(x):
# Model couldn't generate proposals
return []
proposition_chain = (
PROMPT
| ChatOpenAI(model="gpt-3.5-turbo-16k").bind(
tools=[
{
"type": "function",
"function": {
"name": "decompose_content",
"description": "Return the decomposed propositions",
"parameters": {
"type": "object",
"properties": {
"propositions": {
"type": "array",
"items": {"type": "string"},
}
},
"required": ["propositions"],
},
},
}
],
tool_choice={"type": "function", "function": {"name": "decompose_content"}},
)
| JsonOutputToolsParser()
| get_propositions
).with_fallbacks([RunnableLambda(empty_proposals)])
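# Example usage (hypothetical input; mirrors how ingest.py batches documents):
#     proposition_chain.invoke({"input": "Title: ... Content: <passage text>"})
#     -> ["proposition 1", "proposition 2", ...]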
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/propositional-retrieval/propositional_retrieval/storage.py | import logging
from pathlib import Path
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import LocalFileStore
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import Chroma
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
def get_multi_vector_retriever(docstore_id_key: str):
"""Create the composed retriever object."""
vectorstore = get_vectorstore()
store = get_docstore()
return MultiVectorRetriever(
vectorstore=vectorstore,
byte_store=store,
id_key=docstore_id_key,
)
def get_vectorstore(collection_name: str = "proposals"):
"""Get the vectorstore used for this example."""
return Chroma(
collection_name=collection_name,
persist_directory=str(Path(__file__).parent.parent / "chroma_db_proposals"),
embedding_function=OpenAIEmbeddings(),
)
def get_docstore():
"""Get the metadata store used for this example."""
return LocalFileStore(
str(Path(__file__).parent.parent / "multi_vector_retriever_metadata")
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/plate-chain/README.md |
# plate-chain
This template enables parsing of data from laboratory plates.
In the context of biochemistry or molecular biology, laboratory plates are commonly used tools to hold samples in a grid-like format.
This template parses the resulting data into a standardized (e.g., JSON) format for further processing.
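For example, each detected plate is reported as its bounding box within the raw grid. A rough sketch of the output shape (field names follow this template's `LLMPlateResponse` model in `plate_chain/utils.py`; the values are illustrative):
```python
# Illustrative parsed result for a single 96-well plate found in a CSV export.
# Field names come from LLMPlateResponse; the values here are made up.
parsed_plates = [
    {
        "row_start": 2,   # 0-indexed row where the plate's data begins
        "row_end": 9,     # 0-indexed row where the plate's data ends
        "col_start": 1,   # 0-indexed first data column
        "col_end": 12,    # 0-indexed last data column
        "contents": "Entity ID",
    }
]
```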
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To utilize plate-chain, you must have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
Creating a new LangChain project and installing plate-chain as the only package can be done with:
```shell
langchain app new my-app --package plate-chain
```
If you wish to add this to an existing project, simply run:
```shell
langchain app add plate-chain
```
Then add the following code to your `server.py` file:
```python
from plate_chain import chain as plate_chain
add_routes(app, plate_chain, path="/plate-chain")
```
(Optional) For configuring LangSmith, which helps trace, monitor and debug LangChain applications, use the following code:
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you're in this directory, you can start a LangServe instance directly by:
```shell
langchain serve
```
This starts the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
All templates can be viewed at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
Access the playground at [http://127.0.0.1:8000/plate-chain/playground](http://127.0.0.1:8000/plate-chain/playground)
You can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/plate-chain")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/plate-chain/plate_chain/__init__.py | from plate_chain.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/plate-chain/plate_chain/chain.py | import base64
import json
from typing import Optional
from langchain_community.chat_models import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate
from langchain_core.pydantic_v1 import Field
from langserve import CustomUserType
from .prompts import (
AI_REPONSE_DICT,
FULL_PROMPT,
USER_EXAMPLE_DICT,
create_prompt,
)
from .utils import parse_llm_output
llm = ChatOpenAI(temperature=0, model="gpt-4")
prompt = ChatPromptTemplate.from_messages(
[
SystemMessagePromptTemplate.from_template(FULL_PROMPT),
("human", "{user_example}"),
("ai", "{ai_response}"),
("human", "{input}"),
],
)
# ATTENTION: Inherit from CustomUserType instead of BaseModel otherwise
# the server will decode it into a dict instead of a pydantic model.
class FileProcessingRequest(CustomUserType):
"""Request including a base64 encoded file."""
# The extra field is used to specify a widget for the playground UI.
file: str = Field(..., extra={"widget": {"type": "base64file"}})
    num_plates: Optional[int] = None
num_rows: int = 8
num_cols: int = 12
def _load_file(request: FileProcessingRequest):
return base64.b64decode(request.file.encode("utf-8")).decode("utf-8")
def _load_prompt(request: FileProcessingRequest):
return create_prompt(
num_plates=request.num_plates,
num_rows=request.num_rows,
num_cols=request.num_cols,
)
def _get_col_range_str(request: FileProcessingRequest):
if request.num_cols:
return f"from 1 to {request.num_cols}"
else:
return ""
def _get_json_format(request: FileProcessingRequest):
return json.dumps(
[
{
"row_start": 12,
"row_end": 12 + request.num_rows - 1,
"col_start": 1,
"col_end": 1 + request.num_cols - 1,
"contents": "Entity ID",
}
]
)
chain = (
{
# Should add validation to ensure numeric indices
"input": _load_file,
"hint": _load_prompt,
"col_range_str": _get_col_range_str,
"json_format": _get_json_format,
"user_example": lambda x: USER_EXAMPLE_DICT[x.num_rows * x.num_cols],
"ai_response": lambda x: AI_REPONSE_DICT[x.num_rows * x.num_cols],
}
| prompt
| llm
| StrOutputParser()
| parse_llm_output
).with_types(input_type=FileProcessingRequest)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/plate-chain/plate_chain/prompts.py | from typing import Optional
FULL_PROMPT = """# Context
- Plate-based data is rectangular and could be situated anywhere within the dataset.
- The first item in every row is the row index
{hint}
# Rules
- Ignore all data which is not part of the plate.
- Row identifiers start with a single letter of the alphabet.
- The header row of the plate has monotonically increasing integers {col_range_str}.
- The header row should NOT be considered the starting row of the plate.
# Output
- Use 0-indexing for row and column numbers.
- Do NOT include the header row or header column in the output calculation.
- Produce your output as JSON. ONLY RETURN THE JSON. The format should be:
```json
{json_format}
```
"""
NUM_PLATES_PROMPT = """- There {num_plates_str} in this data."""
ROWS_PROMPT = """- Each plate has {num_rows} rows."""
COLS_PROMPT = """- Each plate has {num_cols} columns."""
GENERIC_PLATES_PROMPT = """
- There may be multiple plates.
- Plates consist of 24 (4x6), 96 (8x12), 384 (16x24), or 1536 (32x48) wells.
"""
HUMAN_24W_PROMPT = "0,,,,,,,\n1,,1,2,3,4,5,6\n2,A,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006\n3,B,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006\n4,C,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006\n5,D,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006\n" # noqa: E501
AI_24W_RESPONSE = '[{"row_start": 2, "row_end": 5, "col_start": 1, "col_end": 6, "contents": "SB_ID"}]' # noqa: E501
HUMAN_96W_PROMPT = "0,,,,,,,,,,,,,\n1,,1,2,3,4,5,6,7,8,9,10,11,12\n2,A,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n3,B,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n4,C,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n5,D,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n6,E,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n7,F,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n8,G,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n9,H,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012\n" # noqa: E501
AI_96W_RESPONSE = '[{"row_start": 2, "row_end": 9, "col_start": 1, "col_end": 12, "contents": "SB_ID"}]' # noqa: E501
HUMAN_384W_PROMPT = "0,,,,,,,,,,,,,,,,,,,,,,,,,\n1,,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24\n2,A,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n3,B,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n4,C,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n5,D,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n6,E,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n7,F,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n8,G,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n9,H,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n10,I,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n11,J,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n12,K,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n13,L,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n14,M,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n15,N,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024\n" # noqa: E501
# should be 15, 23
AI_384W_RESPONSE = '[{"row_start": 2, "row_end": 17, "col_start": 1, "col_end": 24, "contents": "SB_ID"}]' # noqa: E501
HUMAN_1536W_PROMPT = "0,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,\n1,,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48\n2,A,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n3,B,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n4,C,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n5,D,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n6,E,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n7,F,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n8,G,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n9,H,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n10,I,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n11,J,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,S
B-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n12,K,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n13,L,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n14,M,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n15,N,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n16,O,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n17,P,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n18,Q,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n19,R,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n20,S,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n21,T,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n22,U,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-0
09,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n23,V,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n24,W,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n25,X,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n26,Y,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n27,Z,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n28,AA,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n29,AB,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n30,AC,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n31,AD,SB-001,SB-002,SB-003,SB-004,SB-005,SB-006,SB-007,SB-008,SB-009,SB-010,SB-011,SB-012,SB-013,SB-014,SB-015,SB-016,SB-017,SB-018,SB-019,SB-020,SB-021,SB-022,SB-023,SB-024,SB-025,SB-026,SB-027,SB-028,SB-029,SB-030,SB-031,SB-032,SB-033,SB-034,SB-035,SB-036,SB-037,SB-038,SB-039,SB-040,SB-041,SB-042,SB-043,SB-044,SB-045,SB-046,SB-047,SB-048\n" # noqa: E501
AI_1536W_RESPONSE = '[{"row_start": 2, "row_end": 33, "col_start": 1, "col_end": 48, "contents": "SB_ID"}]' # noqa: E501
USER_EXAMPLE_DICT = {
24: HUMAN_24W_PROMPT,
96: HUMAN_96W_PROMPT,
384: HUMAN_384W_PROMPT,
1536: HUMAN_1536W_PROMPT,
}
AI_REPONSE_DICT = {
24: AI_24W_RESPONSE,
96: AI_96W_RESPONSE,
384: AI_384W_RESPONSE,
1536: AI_1536W_RESPONSE,
}
def create_prompt(
num_plates: Optional[int] = None,
num_rows: Optional[int] = None,
num_cols: Optional[int] = None,
) -> str:
additional_prompts = []
if num_plates:
num_plates_str = f"are {num_plates} plates" if num_plates > 1 else "is 1 plate"
additional_prompts.append(
NUM_PLATES_PROMPT.format(num_plates_str=num_plates_str)
)
if num_rows:
additional_prompts.append(ROWS_PROMPT.format(num_rows=num_rows))
if num_cols:
additional_prompts.append(COLS_PROMPT.format(num_cols=num_cols))
return (
"\n".join(additional_prompts) if additional_prompts else GENERIC_PLATES_PROMPT
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/plate-chain/plate_chain/utils.py | import json
from langchain_core.pydantic_v1 import BaseModel, Field, conint
class LLMPlateResponse(BaseModel):
row_start: conint(ge=0) = Field(
..., description="The starting row of the plate (0-indexed)"
)
row_end: conint(ge=0) = Field(
..., description="The ending row of the plate (0-indexed)"
)
col_start: conint(ge=0) = Field(
..., description="The starting column of the plate (0-indexed)"
)
col_end: conint(ge=0) = Field(
..., description="The ending column of the plate (0-indexed)"
)
contents: str
def parse_llm_output(result: str):
"""
Based on the prompt we expect the result to be a string that looks like:
'[{"row_start": 12, "row_end": 19, "col_start": 1, \
"col_end": 12, "contents": "Entity ID"}]'
We'll load that JSON and turn it into a Pydantic model
"""
return [LLMPlateResponse(**plate_r) for plate_r in json.loads(result)]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pirate-speak/README.md |
# pirate-speak
This template converts user input into pirate speak.
## Environment Setup
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pirate-speak
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pirate-speak
```
And add the following code to your `server.py` file:
```python
from pirate_speak.chain import chain as pirate_speak_chain
add_routes(app, pirate_speak_chain, path="/pirate-speak")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak/playground](http://127.0.0.1:8000/pirate-speak/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak")
```
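Because the prompt includes a `chat_history` placeholder, the input must supply both a `text` field and a `chat_history` list (which may be empty). Continuing from the snippet above, an illustrative call looks like:
```python
# The example text is illustrative; chat_history holds prior messages (empty here).
runnable.invoke({"text": "Hello, how are you today?", "chat_history": []})
```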
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pirate-speak/pirate_speak/chain.py | from langchain_community.chat_models import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"Translate user input into pirate speak",
),
MessagesPlaceholder("chat_history"),
("human", "{text}"),
]
)
_model = ChatOpenAI()
# if you update this, you MUST also update ../pyproject.toml
# with the new `tool.langserve.export_attr`
chain = _prompt | _model
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pirate-speak-configurable/README.md | # pirate-speak-configurable
This template converts user input into pirate speak. It shows how you can use
`configurable_alternatives` in the Runnable, allowing you to select
OpenAI, Anthropic, or Cohere as your LLM provider in the playground (or via the API).
## Environment Setup
Set the following environment variables to access all 3 configurable alternative
model providers:
- `OPENAI_API_KEY`
- `ANTHROPIC_API_KEY`
- `COHERE_API_KEY`
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pirate-speak-configurable
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pirate-speak-configurable
```
And add the following code to your `server.py` file:
```python
from pirate_speak_configurable import chain as pirate_speak_configurable_chain
add_routes(app, pirate_speak_configurable_chain, path="/pirate-speak-configurable")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pirate-speak-configurable/playground](http://127.0.0.1:8000/pirate-speak-configurable/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pirate-speak-configurable")
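# Pick an alternative provider at run time via the configurable field defined in
# chain.py (keys: "openai" (default), "anthropic", "cohere"); the text is illustrative.
runnable.with_config(
    configurable={"llm_provider": "anthropic"}
).invoke({"text": "How do I configure this chain?"})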
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pirate-speak-configurable/pirate_speak_configurable/__init__.py | from pirate_speak_configurable.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pirate-speak-configurable/pirate_speak_configurable/chain.py | from langchain_community.chat_models import ChatAnthropic, ChatCohere, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField
_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"Translate user input into pirate speak",
),
("human", "{text}"),
]
)
_model = ChatOpenAI().configurable_alternatives(
ConfigurableField(id="llm_provider"),
default_key="openai",
anthropic=ChatAnthropic,
cohere=ChatCohere,
)
# if you update this, you MUST also update ../pyproject.toml
# with the new `tool.langserve.export_attr`
chain = _prompt | _model
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pii-protected-chatbot/README.md | # pii-protected-chatbot
This template creates a chatbot that flags any incoming PII and doesn't pass it to the LLM.
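Under the hood, incoming text is run through Presidio's analyzer first, and only PII-free messages are forwarded to the model. A minimal sketch of that check (mirroring this template's `chain.py`; the sample strings are illustrative):
```python
from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()

def contains_pii(text: str) -> bool:
    """Return True if Presidio flags any PII entity in the text."""
    return bool(analyzer.analyze(text=text, language="en"))

# Messages that trip this check get a canned refusal instead of reaching the LLM.
print(contains_pii("My phone number is 212-555-0100"))  # typically flagged (PHONE_NUMBER)
print(contains_pii("What's the weather like today?"))   # no PII detected
```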
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package pii-protected-chatbot
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add pii-protected-chatbot
```
And add the following code to your `server.py` file:
```python
from pii_protected_chatbot.chain import chain as pii_protected_chatbot
add_routes(app, pii_protected_chatbot, path="/pii_protected_chatbot")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/pii_protected_chatbot/playground](http://127.0.0.1:8000/pii_protected_chatbot/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/pii_protected_chatbot")
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/pii-protected-chatbot/pii_protected_chatbot/chain.py | from typing import List, Tuple
from langchain_community.chat_models import ChatOpenAI
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
from presidio_analyzer import AnalyzerEngine
# Formatting for chat history
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
# Prompt we will use
_prompt = ChatPromptTemplate.from_messages(
[
(
"system",
"You are a helpful assistant who speaks like a pirate",
),
MessagesPlaceholder(variable_name="chat_history"),
("human", "{text}"),
]
)
# Model we will use
_model = ChatOpenAI()
# Standard conversation chain.
chat_chain = (
{
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"text": lambda x: x["text"],
}
| _prompt
| _model
| StrOutputParser()
)
# PII Detection logic
analyzer = AnalyzerEngine()
# You can customize this to detect any PII
def _detect_pii(inputs: dict) -> bool:
analyzer_results = analyzer.analyze(text=inputs["text"], language="en")
return bool(analyzer_results)
# Add logic to route on whether PII has been detected
def _route_on_pii(inputs: dict):
if inputs["pii_detected"]:
# Response if PII is detected
return "Sorry, I can't answer questions that involve PII"
else:
return chat_chain
# Final chain
chain = RunnablePassthrough.assign(
# First detect PII
pii_detected=_detect_pii
) | {
# Then use this information to generate the response
"response": _route_on_pii,
    # Return boolean of whether PII is detected so the client can decide
# whether or not to include in chat history
"pii_detected": lambda x: x["pii_detected"],
}
# Add typing for playground
class ChainInput(BaseModel):
text: str
chat_history: List[Tuple[str, str]]
chain = chain.with_types(input_type=ChainInput)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-tool-retrieval-agent/README.md | # openai-functions-tool-retrieval-agent
The novel idea introduced in this template is using retrieval to select the set of tools to use to answer an agent query. This is useful when you have very many tools to select from. You cannot put the descriptions of all the tools in the prompt (because of context length issues), so instead you dynamically select the N tools you do want to consider using at run time.
In this template we create a somewhat contrived example: one legitimate tool (search) and 99 fake tools which are just nonsense. We then add a step in the prompt template that takes the user input and retrieves the tools relevant to the query (see the sketch below).
This template is based on [this Agent How-To](https://python.langchain.com/docs/modules/agents/how_to/custom_agent_with_tool_retrieval).
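The core retrieval step looks roughly like the following sketch (it closely follows this template's `agent.py`; the toy tool list is illustrative):
```python
from langchain.agents import Tool
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

# Stand-in for the template's one real tool plus 99 nonsense tools.
ALL_TOOLS = [
    Tool(name=f"foo-{i}", func=lambda q: "foo", description=f"a silly function about the number {i}")
    for i in range(10)
]

# Index one document per tool, keyed on its description.
docs = [
    Document(page_content=t.description, metadata={"index": i})
    for i, t in enumerate(ALL_TOOLS)
]
retriever = FAISS.from_documents(docs, OpenAIEmbeddings()).as_retriever()

def get_tools(query: str) -> list:
    """Return only the tools whose descriptions best match the query."""
    return [ALL_TOOLS[d.metadata["index"]] for d in retriever.invoke(query)]
```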
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-tool-retrieval-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-tool-retrieval-agent
```
And add the following code to your `server.py` file:
```python
from openai_functions_tool_retrieval_agent import agent_executor as openai_functions_tool_retrieval_agent_chain
add_routes(app, openai_functions_tool_retrieval_agent_chain, path="/openai-functions-tool-retrieval-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground](http://127.0.0.1:8000/openai-functions-tool-retrieval-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-tool-retrieval-agent")
```
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-tool-retrieval-agent/openai_functions_tool_retrieval_agent/__init__.py | from openai_functions_tool_retrieval_agent.agent import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-tool-retrieval-agent/openai_functions_tool_retrieval_agent/agent.py | from typing import Dict, List, Tuple
from langchain.agents import (
AgentExecutor,
Tool,
)
from langchain.agents.format_scratchpad import format_to_openai_functions
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_community.chat_models import ChatOpenAI
from langchain_community.embeddings import OpenAIEmbeddings
from langchain_community.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
)
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.runnables import Runnable, RunnableLambda, RunnableParallel
from langchain_core.tools import BaseTool
# Create the tools
search = TavilySearchAPIWrapper()
description = """"Useful for when you need to answer questions \
about current events or about recent information."""
tavily_tool = TavilySearchResults(api_wrapper=search, description=description)
def fake_func(inp: str) -> str:
return "foo"
fake_tools = [
Tool(
name=f"foo-{i}",
func=fake_func,
description=("a silly function that gets info " f"about the number {i}"),
)
for i in range(99)
]
ALL_TOOLS: List[BaseTool] = [tavily_tool] + fake_tools
# turn tools into documents for indexing
docs = [
Document(page_content=t.description, metadata={"index": i})
for i, t in enumerate(ALL_TOOLS)
]
vector_store = FAISS.from_documents(docs, OpenAIEmbeddings())
retriever = vector_store.as_retriever()
def get_tools(query: str) -> List[Tool]:
docs = retriever.invoke(query)
return [ALL_TOOLS[d.metadata["index"]] for d in docs]
assistant_system_message = """You are a helpful assistant. \
Use tools (only if necessary) to best answer the user's questions."""
prompt = ChatPromptTemplate.from_messages(
[
("system", assistant_system_message),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
def llm_with_tools(input: Dict) -> Runnable:
return RunnableLambda(lambda x: x["input"]) | ChatOpenAI(temperature=0).bind(
functions=input["functions"]
)
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
agent = (
RunnableParallel(
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"agent_scratchpad": lambda x: format_to_openai_functions(
x["intermediate_steps"]
),
"functions": lambda x: [
format_tool_to_openai_function(tool) for tool in get_tools(x["input"])
],
}
)
| {
"input": prompt,
"functions": lambda x: x["functions"],
}
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
# Typed input schema for the agent (enables the chat widget in the playground)
class AgentInput(BaseModel):
input: str
chat_history: List[Tuple[str, str]] = Field(
..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
)
agent_executor = AgentExecutor(agent=agent, tools=ALL_TOOLS).with_types(
input_type=AgentInput
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent/README.md |
# openai-functions-agent
This template creates an agent that uses OpenAI function calling to communicate its decisions on what actions to take.
This example creates an agent that can optionally look up information on the internet using Tavily's search engine.
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-agent
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-agent
```
And add the following code to your `server.py` file:
```python
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent/playground](http://127.0.0.1:8000/openai-functions-agent/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent")
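# Example invocation (question borrowed from this template's main.py; chat_history
# is a list of (human, ai) message tuples and may be empty):
runnable.invoke({"input": "who won the womens world cup in 2023?", "chat_history": []})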
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent/main.py | from openai_functions_agent.agent import agent_executor
if __name__ == "__main__":
question = "who won the womens world cup in 2023?"
print(agent_executor.invoke({"input": question, "chat_history": []}))
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent/openai_functions_agent/__init__.py | from openai_functions_agent.agent import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent/openai_functions_agent/agent.py | from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_community.chat_models import ChatOpenAI
from langchain_community.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
# Create the tool
search = TavilySearchAPIWrapper()
description = """"A search engine optimized for comprehensive, accurate, \
and trusted results. Useful for when you need to answer questions \
about current events or about recent information. \
Input should be a search query. \
If the user is asking about something that you don't know about, \
you should probably use this tool to see if that can provide any information."""
tavily_tool = TavilySearchResults(api_wrapper=search, description=description)
tools = [tavily_tool]
llm = ChatOpenAI(temperature=0)
assistant_system_message = """You are a helpful assistant. \
Use tools (only if necessary) to best answer the user's questions."""
prompt = ChatPromptTemplate.from_messages(
[
("system", assistant_system_message),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
agent = (
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
class AgentInput(BaseModel):
input: str
chat_history: List[Tuple[str, str]] = Field(
..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_types(
input_type=AgentInput
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent-gmail/README.md | # OpenAI Functions Agent - Gmail
Ever struggled to reach inbox zero?
Using this template, you can create and customize your very own AI assistant to manage your Gmail account. Using the default Gmail tools, it can read, search through, and draft emails to respond on your behalf. It also has access to a Tavily search engine so it can search for relevant information about any topics or people in the email thread before writing, ensuring the drafts include all the relevant information needed to sound well-informed.
![Animated GIF showing the interface of the Gmail Agent Playground with a cursor interacting with the input field.](./static/gmail-agent-playground.gif "Gmail Agent Playground Interface")
## The details
This assistant uses OpenAI's [function calling](https://python.langchain.com/docs/modules/chains/how_to/openai_functions) support to reliably select and invoke the tools you've provided.
This template also imports directly from [langchain-core](https://pypi.org/project/langchain-core/) and [`langchain-community`](https://pypi.org/project/langchain-community/) where appropriate. We have restructured LangChain to let you select the specific integrations needed for your use case. While you can still import from `langchain` (we are making this transition backwards-compatible), we have separated the homes of most of the classes to reflect ownership and to make your dependency lists lighter. Most of the integrations you need can be found in the `langchain-community` package, and if you are just using the core expression language APIs, you can even build solely based on `langchain-core`.
## Environment Setup
The following environment variables need to be set:
Set the `OPENAI_API_KEY` environment variable to access the OpenAI models.
Set the `TAVILY_API_KEY` environment variable to access Tavily search.
Create a [`credentials.json`](https://developers.google.com/gmail/api/quickstart/python#authorize_credentials_for_a_desktop_application) file containing your OAuth client ID from Gmail. To customize authentication, see the [Customize Auth](#customize-auth) section below.
_*Note:* The first time you run this app, it will force you to go through a user authentication flow._
(Optional): Set `GMAIL_AGENT_ENABLE_SEND` to `true` (or modify the `agent.py` file in this template) to give it access to the "Send" tool. This will give your assistant permissions to send emails on your behalf without your explicit review, which is not recommended.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package openai-functions-agent-gmail
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add openai-functions-agent-gmail
```
And add the following code to your `server.py` file:
```python
from openai_functions_agent import agent_executor as openai_functions_agent_chain
add_routes(app, openai_functions_agent_chain, path="/openai-functions-agent-gmail")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/openai-functions-agent-gmail/playground](http://127.0.0.1:8000/openai-functions-agent-gmail/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/openai-functions-agent-gmail")
```
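The agent takes the same input shape as the other OpenAI functions agents. Continuing from the snippet above, an example call (the prompt is borrowed from this template's `main.py`):
```python
runnable.invoke(
    {
        "input": (
            "Write a draft response to LangChain's last email. "
            "First do background research on the sender and topics to make sure you"
            " understand the context, then write the draft."
        ),
        "chat_history": [],
    }
)
```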
## Customize Auth
```python
from langchain_community.agent_toolkits import GmailToolkit
from langchain_community.tools.gmail.utils import build_resource_service, get_gmail_credentials
# Can review scopes here https://developers.google.com/gmail/api/auth/scopes
# For instance, readonly scope is 'https://www.googleapis.com/auth/gmail.readonly'
credentials = get_gmail_credentials(
token_file="token.json",
scopes=["https://mail.google.com/"],
client_secrets_file="credentials.json",
)
api_resource = build_resource_service(credentials=credentials)
toolkit = GmailToolkit(api_resource=api_resource)
``` | Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent-gmail/main.py | from openai_functions_agent.agent import agent_executor
if __name__ == "__main__":
question = (
"Write a draft response to LangChain's last email. "
"First do background research on the sender and topics to make sure you"
" understand the context, then write the draft."
)
print(agent_executor.invoke({"input": question, "chat_history": []}))
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent-gmail/openai_functions_agent/__init__.py | from openai_functions_agent.agent import agent_executor
__all__ = ["agent_executor"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/openai-functions-agent-gmail/openai_functions_agent/agent.py | import os
from typing import List, Tuple
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
from langchain_community.chat_models import ChatOpenAI
from langchain_community.tools.convert_to_openai import format_tool_to_openai_function
from langchain_community.tools.gmail import (
GmailCreateDraft,
GmailGetMessage,
GmailGetThread,
GmailSearch,
GmailSendMessage,
)
from langchain_community.tools.gmail.utils import build_resource_service
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool
@tool
def search_engine(query: str, max_results: int = 5) -> str:
""""A search engine optimized for comprehensive, accurate, \
and trusted results. Useful for when you need to answer questions \
about current events or about recent information. \
Input should be a search query. \
If the user is asking about something that you don't know about, \
you should probably use this tool to see if that can provide any information."""
return TavilySearchAPIWrapper().results(query, max_results=max_results)
# Create the tools
tools = [
GmailCreateDraft(),
GmailGetMessage(),
GmailGetThread(),
GmailSearch(),
search_engine,
]
if os.environ.get("GMAIL_AGENT_ENABLE_SEND") == "true":
tools.append(GmailSendMessage())
current_user = (
build_resource_service().users().getProfile(userId="me").execute()["emailAddress"]
)
assistant_system_message = """You are a helpful assistant aiding a user with their \
emails. Use tools (only if necessary) to best answer \
the users questions.\n\nCurrent user: {user}"""
prompt = ChatPromptTemplate.from_messages(
[
("system", assistant_system_message),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
).partial(user=current_user)
llm = ChatOpenAI(model="gpt-4-1106-preview", temperature=0)
llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])
def _format_chat_history(chat_history: List[Tuple[str, str]]):
buffer = []
for human, ai in chat_history:
buffer.append(HumanMessage(content=human))
buffer.append(AIMessage(content=ai))
return buffer
agent = (
{
"input": lambda x: x["input"],
"chat_history": lambda x: _format_chat_history(x["chat_history"]),
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
class AgentInput(BaseModel):
input: str
chat_history: List[Tuple[str, str]] = Field(
..., extra={"widget": {"type": "chat", "input": "input", "output": "output"}}
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True).with_types(
input_type=AgentInput
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/nvidia-rag-canonical/README.md |
# nvidia-rag-canonical
This template performs RAG using Milvus Vector Store and NVIDIA Models (Embedding and Chat).
## Environment Setup
You should export your NVIDIA API Key as an environment variable.
If you do not have an NVIDIA API Key, you can create one by following these steps:
1. Create a free account with the [NVIDIA GPU Cloud](https://catalog.ngc.nvidia.com/) service, which hosts AI solution catalogs, containers, models, etc.
2. Navigate to `Catalog > AI Foundation Models > (Model with API endpoint)`.
3. Select the `API` option and click `Generate Key`.
4. Save the generated key as `NVIDIA_API_KEY`. From there, you should have access to the endpoints.
```shell
export NVIDIA_API_KEY=...
```
For instructions on hosting the Milvus Vector Store, refer to the section at the bottom.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To use the NVIDIA models, install the LangChain NVIDIA AI Endpoints package:
```shell
pip install -U langchain_nvidia_aiplay
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package nvidia-rag-canonical
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add nvidia-rag-canonical
```
And add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import chain as nvidia_rag_canonical_chain
add_routes(app, nvidia_rag_canonical_chain, path="/nvidia-rag-canonical")
```
If you want to set up an ingestion pipeline, you can add the following code to your `server.py` file:
```python
from nvidia_rag_canonical import ingest as nvidia_rag_ingest
add_routes(app, nvidia_rag_ingest, path="/nvidia-rag-ingest")
```
Note that for files ingested by the ingestion API, the server will need to be restarted for the newly ingested files to be accessible by the retriever.
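Once mounted, the ingestion route can be invoked like any other LangServe endpoint. A minimal sketch, assuming the server is running locally; the PDF URL is the same example used in `ingest.py`:
```python
from langserve.client import RemoteRunnable

# Assumes the server above is running locally with the ingest route mounted.
ingest = RemoteRunnable("http://localhost:8000/nvidia-rag-ingest")

# The ingestion runnable takes a single PDF URL, splits the document, and
# writes the chunks into the Milvus collection (see `chain.py` below).
ingest.invoke("https://www.ssa.gov/news/press/factsheets/basicfact-alt.pdf")
```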
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you DO NOT already have a Milvus Vector Store you want to connect to, see the `Milvus Setup` section below before proceeding.
If you DO have a Milvus Vector Store you want to connect to, edit the connection details in `nvidia_rag_canonical/chain.py`.
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/nvidia-rag-canonical/playground](http://127.0.0.1:8000/nvidia-rag-canonical/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/nvidia-rag-canonical")
```
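The chain is typed to take a plain question string (see the `Question` input type in `chain.py` below), so a minimal query sketch, reusing the example question from the included notebook, looks like this:
```python
from langserve.client import RemoteRunnable

# Assumes the LangServe instance above is running locally on port 8000.
runnable = RemoteRunnable("http://localhost:8000/nvidia-rag-canonical")

# The chain takes a plain question string and returns the model's answer.
answer = runnable.invoke("How many Americans receive Social Security Benefits?")
print(answer)
```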
## Milvus Setup
Use this step if you need to create a Milvus Vector Store and ingest data.
We will first follow the standard Milvus setup instructions [here](https://milvus.io/docs/install_standalone-docker.md).
1. Download the Docker Compose YAML file.
```shell
wget https://github.com/milvus-io/milvus/releases/download/v2.3.3/milvus-standalone-docker-compose.yml -O docker-compose.yml
```
2. Start the Milvus Vector Store container
```shell
sudo docker compose up -d
```
3. Install the PyMilvus package to interact with the Milvus container.
```shell
pip install pymilvus
```
4. Let's now ingest some data! We can do that by moving into this directory and running the code in `ingest.py`, e.g.:
```shell
python ingest.py
```
Note that you can (and should!) change this to ingest data of your choice.
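For instance, ingesting a local PDF of your own might look like the sketch below, which mirrors `ingest.py` but swaps in a placeholder file path; keep the embedding model, collection name, and connection settings in sync with `nvidia_rag_canonical/chain.py`:
```python
from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores.milvus import Milvus
from langchain_nvidia_aiplay import NVIDIAEmbeddings
from langchain_text_splitters.character import CharacterTextSplitter

# Sketch: point the loader at your own PDF (the path below is a placeholder).
loader = PyPDFLoader("path/to/your/document.pdf")
docs = CharacterTextSplitter(chunk_size=300, chunk_overlap=100).split_documents(
    loader.load()
)

# Write the chunks into the same Milvus collection the chain reads from.
Milvus.from_documents(
    docs,
    NVIDIAEmbeddings(model="nvolveqa_40k"),
    collection_name="test",
    connection_args={"host": "127.0.0.1", "port": "19530"},
)
```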
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/nvidia-rag-canonical/ingest.py | import getpass
import os
from langchain.document_loaders import PyPDFLoader
from langchain.vectorstores.milvus import Milvus
from langchain_nvidia_aiplay import NVIDIAEmbeddings
from langchain_text_splitters.character import CharacterTextSplitter
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvapi_key
# Note: if you change this, you should also change it in `nvidia_rag_canonical/chain.py`
EMBEDDING_MODEL = "nvolveqa_40k"
HOST = "127.0.0.1"
PORT = "19530"
COLLECTION_NAME = "test"
embeddings = NVIDIAEmbeddings(model=EMBEDDING_MODEL)
if __name__ == "__main__":
# Load docs
loader = PyPDFLoader("https://www.ssa.gov/news/press/factsheets/basicfact-alt.pdf")
data = loader.load()
# Split docs
text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=100)
docs = text_splitter.split_documents(data)
# Insert the documents in Milvus Vector Store
vector_db = Milvus.from_documents(
docs,
embeddings,
collection_name=COLLECTION_NAME,
connection_args={"host": HOST, "port": PORT},
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/nvidia-rag-canonical/nvidia_rag_canonical.ipynb | {
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"id": "681a5d1e",
"metadata": {},
"source": [
"## Connect to template\n",
"\n",
"In `server.py`, set -\n",
"```\n",
"add_routes(app, nvidia_rag_canonical_chain, path=\"/nvidia_rag_canonical\")\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d774be2a",
"metadata": {},
"outputs": [],
"source": [
"from langserve.client import RemoteRunnable\n",
"\n",
"rag_app = RemoteRunnable(\"http://0.0.0.0:8000/nvidia_rag_canonical\")\n",
"rag_app.invoke(\"How many Americans receive Social Security Benefits?\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/nvidia-rag-canonical/nvidia_rag_canonical/__init__.py | from nvidia_rag_canonical.chain import chain, ingest
__all__ = ["chain", "ingest"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/nvidia-rag-canonical/nvidia_rag_canonical/chain.py | import getpass
import os
from langchain_community.document_loaders import PyPDFLoader
from langchain_community.vectorstores import Milvus
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import (
RunnableLambda,
RunnableParallel,
RunnablePassthrough,
)
from langchain_nvidia_aiplay import ChatNVIDIA, NVIDIAEmbeddings
from langchain_text_splitters.character import CharacterTextSplitter
EMBEDDING_MODEL = "nvolveqa_40k"
CHAT_MODEL = "llama2_13b"
HOST = "127.0.0.1"
PORT = "19530"
COLLECTION_NAME = "test"
INGESTION_CHUNK_SIZE = 500
INGESTION_CHUNK_OVERLAP = 0
if os.environ.get("NVIDIA_API_KEY", "").startswith("nvapi-"):
print("Valid NVIDIA_API_KEY already in environment. Delete to reset")
else:
nvapi_key = getpass.getpass("NVAPI Key (starts with nvapi-): ")
assert nvapi_key.startswith("nvapi-"), f"{nvapi_key[:5]}... is not a valid key"
os.environ["NVIDIA_API_KEY"] = nvapi_key
# Read from Milvus Vector Store
embeddings = NVIDIAEmbeddings(model=EMBEDDING_MODEL)
vectorstore = Milvus(
connection_args={"host": HOST, "port": PORT},
collection_name=COLLECTION_NAME,
embedding_function=embeddings,
)
retriever = vectorstore.as_retriever()
# RAG prompt
template = """<s>[INST] <<SYS>>
Use the following context to answer the user's question. If you don't know the answer,
just say that you don't know, don't try to make up an answer.
<</SYS>>
<s>[INST] Context: {context} Question: {question} Only return the helpful
answer below and nothing else. Helpful answer:[/INST]
"""
prompt = ChatPromptTemplate.from_template(template)
# RAG
model = ChatNVIDIA(model=CHAT_MODEL)
chain = (
RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
| prompt
| model
| StrOutputParser()
)
# Add typing for input
class Question(BaseModel):
__root__: str
chain = chain.with_types(input_type=Question)
def _ingest(url: str) -> dict:
"""Load and ingest the PDF file from the URL"""
loader = PyPDFLoader(url)
data = loader.load()
# Split docs
text_splitter = CharacterTextSplitter(
chunk_size=INGESTION_CHUNK_SIZE, chunk_overlap=INGESTION_CHUNK_OVERLAP
)
docs = text_splitter.split_documents(data)
# Insert the documents in Milvus Vector Store
_ = Milvus.from_documents(
documents=docs,
embedding=embeddings,
collection_name=COLLECTION_NAME,
connection_args={"host": HOST, "port": PORT},
)
return {}
ingest = RunnableLambda(_ingest)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/README.md |
# neo4j-vector-memory
This template allows you to integrate an LLM with a vector-based retrieval system using Neo4j as the vector store.
Additionally, it uses the graph capabilities of the Neo4j database to store and retrieve the dialogue history of a specific user's session.
Storing the dialogue history as a graph not only allows for seamless conversational flows but also gives you the ability to analyze user behavior and text chunk retrieval through graph analytics.
## Environment Setup
You need to define the following environment variables
```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with some example data, you can run `python ingest.py`.
The script processes and stores sections of the text from the file `dune.txt` in a Neo4j graph database.
Additionally, a vector index named `dune` is created for efficient querying of these embeddings.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U langchain-cli
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-vector-memory
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-vector-memory
```
And add the following code to your `server.py` file:
```python
from neo4j_vector_memory import chain as neo4j_vector_memory_chain
add_routes(app, neo4j_vector_memory_chain, path="/neo4j-vector-memory")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-vector-memory/playground](http://127.0.0.1:8000/neo4j-vector-memory/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-vector-memory")
```
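The chain expects a question together with user and session identifiers, which it uses to read and write the dialogue history in Neo4j (see `chain.py` and `main.py` below). A minimal sketch, assuming the server above is running locally; the ids are placeholders:
```python
from langserve.client import RemoteRunnable

# Assumes the LangServe instance above is running locally on port 8000.
runnable = RemoteRunnable("http://localhost:8000/neo4j-vector-memory")

# The chain takes a question plus user/session ids; the dialogue history for
# that session is stored in and retrieved from the Neo4j graph.
print(
    runnable.invoke(
        {
            "question": "What is the plot of the Dune?",
            "user_id": "user_id_1",  # placeholder identifiers
            "session_id": "session_id_1",
        }
    )
)
```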
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/ingest.py | from pathlib import Path
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Neo4jVector
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import TokenTextSplitter
txt_path = Path(__file__).parent / "dune.txt"
# Load the text file
loader = TextLoader(str(txt_path))
raw_documents = loader.load()
# Define chunking strategy
splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
documents = splitter.split_documents(raw_documents)
# Calculate embedding values and store them in the graph
Neo4jVector.from_documents(
documents,
OpenAIEmbeddings(),
index_name="dune",
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/main.py | from neo4j_vector_memory.chain import chain
if __name__ == "__main__":
user_id = "user_id_1"
session_id = "session_id_1"
original_query = "What is the plot of the Dune?"
print(
chain.invoke(
{"question": original_query, "user_id": user_id, "session_id": session_id}
)
)
follow_up_query = "Tell me more about Leto"
print(
chain.invoke(
{"question": follow_up_query, "user_id": user_id, "session_id": session_id}
)
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/neo4j_vector_memory/__init__.py | from neo4j_vector_memory.chain import chain
__all__ = ["chain"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/neo4j_vector_memory/chain.py | from operator import itemgetter
from langchain_community.vectorstores import Neo4jVector
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import (
ChatPromptTemplate,
MessagesPlaceholder,
PromptTemplate,
)
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from neo4j_vector_memory.history import get_history, save_history
# Define vector retrieval
retrieval_query = "RETURN node.text AS text, score, {id:elementId(node)} AS metadata"
vectorstore = Neo4jVector.from_existing_index(
OpenAIEmbeddings(), index_name="dune", retrieval_query=retrieval_query
)
retriever = vectorstore.as_retriever()
# Define LLM
llm = ChatOpenAI()
# Condense a chat history and follow-up question into a standalone question
condense_template = """Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Make sure to include all the relevant information.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""" # noqa: E501
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(condense_template)
# RAG answer synthesis prompt
answer_template = """Answer the question based only on the following context:
<context>
{context}
</context>"""
ANSWER_PROMPT = ChatPromptTemplate.from_messages(
[
("system", answer_template),
MessagesPlaceholder(variable_name="chat_history"),
("user", "{question}"),
]
)
chain = (
RunnablePassthrough.assign(chat_history=get_history)
| RunnablePassthrough.assign(
rephrased_question=CONDENSE_QUESTION_PROMPT | llm | StrOutputParser()
)
| RunnablePassthrough.assign(
context=itemgetter("rephrased_question") | retriever,
)
| RunnablePassthrough.assign(
output=ANSWER_PROMPT | llm | StrOutputParser(),
)
| save_history
)
# Add typing for input
class Question(BaseModel):
question: str
user_id: str
session_id: str
chain = chain.with_types(input_type=Question)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-vector-memory/neo4j_vector_memory/history.py | from typing import Any, Dict, List, Union
from langchain.memory import ChatMessageHistory
from langchain_community.graphs import Neo4jGraph
from langchain_core.messages import AIMessage, HumanMessage
graph = Neo4jGraph()
def convert_messages(input: List[Dict[str, Any]]) -> ChatMessageHistory:
history = ChatMessageHistory()
for item in input:
history.add_user_message(item["result"]["question"])
history.add_ai_message(item["result"]["answer"])
return history
def get_history(input: Dict[str, Any]) -> List[Union[HumanMessage, AIMessage]]:
# Lookback conversation window
window = 3
data = graph.query(
"""
MATCH (u:User {id:$user_id})-[:HAS_SESSION]->(s:Session {id:$session_id}),
(s)-[:LAST_MESSAGE]->(last_message)
MATCH p=(last_message)<-[:NEXT*0.."""
+ str(window)
+ """]-()
WITH p, length(p) AS length
ORDER BY length DESC LIMIT 1
UNWIND reverse(nodes(p)) AS node
MATCH (node)-[:HAS_ANSWER]->(answer)
RETURN {question:node.text, answer:answer.text} AS result
""",
params=input,
)
history = convert_messages(data)
return history.messages
def save_history(input: Dict[str, Any]) -> str:
input["context"] = [el.metadata["id"] for el in input["context"]]
has_history = bool(input.pop("chat_history"))
# store history to database
if has_history:
graph.query(
"""
MATCH (u:User {id: $user_id})-[:HAS_SESSION]->(s:Session{id: $session_id}),
(s)-[l:LAST_MESSAGE]->(last_message)
CREATE (last_message)-[:NEXT]->(q:Question
{text:$question, rephrased:$rephrased_question, date:datetime()}),
(q)-[:HAS_ANSWER]->(:Answer {text:$output}),
(s)-[:LAST_MESSAGE]->(q)
DELETE l
WITH q
UNWIND $context AS c
MATCH (n) WHERE elementId(n) = c
MERGE (q)-[:RETRIEVED]->(n)
""",
params=input,
)
else:
graph.query(
"""MERGE (u:User {id: $user_id})
CREATE (u)-[:HAS_SESSION]->(s1:Session {id:$session_id}),
(s1)-[:LAST_MESSAGE]->(q:Question
{text:$question, rephrased:$rephrased_question, date:datetime()}),
(q)-[:HAS_ANSWER]->(:Answer {text:$output})
WITH q
UNWIND $context AS c
MATCH (n) WHERE elementId(n) = c
MERGE (q)-[:RETRIEVED]->(n)
""",
params=input,
)
# Return LLM response to the chain
return input["output"]
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-semantic-ollama/README.md | # neo4j-semantic-ollama
This template is designed to implement an agent capable of interacting with a graph database like Neo4j through a semantic layer using Mixtral as a JSON-based agent.
The semantic layer equips the agent with a suite of robust tools, allowing it to interact with the graph database based on the user's intent.
Learn more about the semantic layer template in the [corresponding blog post](https://medium.com/towards-data-science/enhancing-interaction-between-language-models-and-graph-databases-via-a-semantic-layer-0a78ad3eba49) and specifically about [Mixtral agents with Ollama](https://blog.langchain.dev/json-based-agents-with-ollama-and-langchain/).
![Diagram illustrating the workflow of the Neo4j semantic layer with an agent interacting with tools like Information, Recommendation, and Memory, connected to a knowledge graph.](https://raw.githubusercontent.com/langchain-ai/langchain/master/templates/neo4j-semantic-ollama/static/workflow.png "Neo4j Semantic Layer Workflow Diagram")
## Tools
The agent utilizes several tools to interact with the Neo4j graph database effectively:
1. **Information tool**:
- Retrieves data about movies or individuals, ensuring the agent has access to the latest and most relevant information.
2. **Recommendation Tool**:
- Provides movie recommendations based upon user preferences and input.
3. **Memory Tool**:
- Stores information about user preferences in the knowledge graph, allowing for a personalized experience over multiple interactions.
4. **Smalltalk Tool**:
- Allows an agent to deal with smalltalk.
## Environment Setup
Before using this template, you need to set up Ollama and a Neo4j database.
1. Follow instructions [here](https://python.langchain.com/docs/integrations/chat/ollama) to download Ollama.
2. Download your LLM of interest:
* This package uses `mixtral`: `ollama pull mixtral`
* You can choose from many LLMs [here](https://ollama.ai/library)
You need to define the following environment variables
```
OLLAMA_BASE_URL=<YOUR_OLLAMA_URL>
NEO4J_URI=<YOUR_NEO4J_URI>
NEO4J_USERNAME=<YOUR_NEO4J_USERNAME>
NEO4J_PASSWORD=<YOUR_NEO4J_PASSWORD>
```
## Populating with data
If you want to populate the DB with an example movie dataset, you can run `python ingest.py`.
The script imports information about movies and their ratings by users.
Additionally, the script creates two [fulltext indices](https://neo4j.com/docs/cypher-manual/current/indexes-for-full-text-search/), which are used to map information from user input to the database.
## Usage
To use this package, you should first have the LangChain CLI installed:
```shell
pip install -U "langchain-cli[serve]"
```
To create a new LangChain project and install this as the only package, you can do:
```shell
langchain app new my-app --package neo4j-semantic-ollama
```
If you want to add this to an existing project, you can just run:
```shell
langchain app add neo4j-semantic-ollama
```
And add the following code to your `server.py` file:
```python
from neo4j_semantic_ollama import agent_executor as neo4j_semantic_agent
add_routes(app, neo4j_semantic_agent, path="/neo4j-semantic-ollama")
```
(Optional) Let's now configure LangSmith.
LangSmith will help us trace, monitor and debug LangChain applications.
You can sign up for LangSmith [here](https://smith.langchain.com/).
If you don't have access, you can skip this section
```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=<your-api-key>
export LANGCHAIN_PROJECT=<your-project> # if not specified, defaults to "default"
```
If you are inside this directory, then you can spin up a LangServe instance directly by:
```shell
langchain serve
```
This will start the FastAPI app with a server running locally at
[http://localhost:8000](http://localhost:8000)
We can see all templates at [http://127.0.0.1:8000/docs](http://127.0.0.1:8000/docs)
We can access the playground at [http://127.0.0.1:8000/neo4j-semantic-ollama/playground](http://127.0.0.1:8000/neo4j-semantic-ollama/playground)
We can access the template from code with:
```python
from langserve.client import RemoteRunnable
runnable = RemoteRunnable("http://localhost:8000/neo4j-semantic-ollama")
```
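The agent executor takes an `input` string and, for follow-up turns, a `chat_history` list of (human, ai) message pairs (see `main.py` below). A minimal sketch, assuming the server above is running locally:
```python
from langserve.client import RemoteRunnable

# Assumes the LangServe instance above is running locally on port 8000.
agent = RemoteRunnable("http://localhost:8000/neo4j-semantic-ollama")

# First turn: only `input` is required; later turns can pass `chat_history`
# as a list of (human, ai) message tuples (see `main.py` below).
print(agent.invoke({"input": "What do you know about person John?"}))
```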
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-semantic-ollama/ingest.py | from langchain_community.graphs import Neo4jGraph
# Instantiate connection to Neo4j
graph = Neo4jGraph()
# Define unique constraints
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (m:Movie) REQUIRE m.id IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (u:User) REQUIRE u.id IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (p:Person) REQUIRE p.name IS UNIQUE;")
graph.query("CREATE CONSTRAINT IF NOT EXISTS FOR (g:Genre) REQUIRE g.name IS UNIQUE;")
# Import movie information
movies_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/movies.csv'
AS row
CALL {
WITH row
MERGE (m:Movie {id:row.movieId})
SET m.released = date(row.released),
m.title = row.title,
m.imdbRating = toFloat(row.imdbRating)
FOREACH (director in split(row.director, '|') |
MERGE (p:Person {name:trim(director)})
MERGE (p)-[:DIRECTED]->(m))
FOREACH (actor in split(row.actors, '|') |
MERGE (p:Person {name:trim(actor)})
MERGE (p)-[:ACTED_IN]->(m))
FOREACH (genre in split(row.genres, '|') |
MERGE (g:Genre {name:trim(genre)})
MERGE (m)-[:IN_GENRE]->(g))
} IN TRANSACTIONS
"""
graph.query(movies_query)
# Import rating information
rating_query = """
LOAD CSV WITH HEADERS FROM
'https://raw.githubusercontent.com/tomasonjo/blog-datasets/main/movies/ratings.csv'
AS row
CALL {
WITH row
MATCH (m:Movie {id:row.movieId})
MERGE (u:User {id:row.userId})
MERGE (u)-[r:RATED]->(m)
SET r.rating = toFloat(row.rating),
r.timestamp = row.timestamp
} IN TRANSACTIONS OF 10000 ROWS
"""
graph.query(rating_query)
# Define fulltext indices
graph.query("CREATE FULLTEXT INDEX movie IF NOT EXISTS FOR (m:Movie) ON EACH [m.title]")
graph.query(
"CREATE FULLTEXT INDEX person IF NOT EXISTS FOR (p:Person) ON EACH [p.name]"
)
| Wed, 26 Jun 2024 13:15:51 GMT |
https://github.com/langchain-ai/langchain/blob/master/templates/neo4j-semantic-ollama/main.py | from neo4j_semantic_ollama import agent_executor
if __name__ == "__main__":
original_query = "What do you know about person John?"
followup_query = "John Travolta"
chat_history = [
(
"What do you know about person John?",
"I found multiple people named John. Could you please specify "
"which one you are interested in? Here are some options:"
"\n\n1. John Travolta\n2. John McDonough",
)
]
print(agent_executor.invoke({"input": original_query}))
print(
agent_executor.invoke({"input": followup_query, "chat_history": chat_history})
)
| Wed, 26 Jun 2024 13:15:51 GMT |